# Emergence of Grounded Compositional Language in Multi-Agent Populations
# Igor Mordatch (OpenAI, San Francisco, California, USA)
# Pieter Abbeel (UC Berkeley, Berkeley, California, USA)
# Abstract
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
# Introduction
Development of agents that are capable of communication and flexible language use is one of the long-standing challenges facing the field of artificial intelligence. Agents need to develop communication if they are to successfully coordinate as a collective. Furthermore, agents will need some language capacity if they are to interact and productively collaborate with humans or make decisions that are interpretable by humans. If such a capacity were to arise artificially, it could also offer important insights into questions surrounding the development of human language and cognition.

But if we wish to arrive at the formation of communication from first principles, it must form out of necessity. The approaches that learn to plausibly imitate language from examples of human language, while tremendously useful, do not learn why language exists. Such supervised approaches can capture structural and statistical relationships in language, but they do not capture its functional aspects, or that language happens for purposes of successful coordination between humans. Evaluating the success of such imitation-based approaches on the basis of linguistic plausibility also presents challenges of ambiguity and the requirement of human involvement.

Recently there has been a surge of renewed interest in the pragmatic aspects of language use, and it is also the focus of our work. We adopt the view of (Gauthier and Mordatch 2016) that an agent possesses an understanding of language when it can use language (along with other tools such as non-verbal communication or physical acts) to accomplish goals in its environment. This leads to evaluation criteria that can be measured precisely and without human involvement.

In this paper, we propose a physically-situated multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. The agents utter communication symbols alongside performing actions in the physical environment to cooperatively accomplish goals defined by a joint reward function shared between all agents. There are no pre-designed meanings associated with the uttered symbols: the agents form concepts relevant to the task and environment and assign arbitrary symbols to communicate them.
There are similarly no explicit language usage goals, such as making correct utterances, and no explicit roles agents are assigned, such as speaker or listener, nor explicit turn-taking dialogue structure as in traditional language games. There may be an arbitrary number of agents in a population communicating at the same time, and part of the difficulty is learning to refer to specific agents. A population of agents is situated as moving particles in a continuous two-dimensional environment, possessing properties such as color and shape. The goals of the population are based on non-linguistic objectives, such as moving to a location, and language arises from the need to coordinate on those goals. We do not rely on any supervision such as human demonstrations or text corpora.
Similar to recent work, we formulate the discovery of the action and communication protocols for our agents jointly as a reinforcement learning problem. Agents perform physical actions and communication utterances according to an identical policy that is instantiated for all agents and fully determines the action and communication protocols. The policies are based on neural network models with an architecture composed of dynamically-instantiated recurrent modules. This allows decentralized execution with a variable number of agents and communication streams. The joint dynamics of all agents and the environment, including the discrete communication streams, are fully differentiable, and the agents' policy is trained end-to-end with backpropagation through time.
The languages formed exhibit interpretable compositional structure that in general assigns symbols to separately refer to environment landmarks, action verbs, and agents. However, environment variation leads to a number of specialized languages, omitting words that are clear from context. For example, when there is only one type of action to take or one landmark to go to, words for those concepts do not form in the language. Considerations of the physical environment also have an impact on language structure. For example, a symbol denoting the go action is typically uttered first because the listener can start moving before even hearing the destination. This effect only arises when linguistic and physical behaviors are treated jointly and not in isolation.
The presence of a physical environment also allows for alternative strategies aside from language use to accomplish goals. A visual sensory modality provides an alternative medium for communication, and we observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable. When even non-verbal communication is unavailable, strategies such as direct pushing may be employed to succeed at the task. It is important to us to build an environment with a diverse set of capabilities alongside which language use develops.
By compositionality we mean the combination of multiple words to create meaning, as opposed to holistic languages that have a unique word for every possible meaning (Kirby 2001). Our work offers insights into why such compositional structure emerges. In part, we find it to emerge when we explicitly encourage active vocabulary sizes to be small through a soft penalty. This is consistent with analysis in evolutionary linguistics (Nowak, Plotkin, and Jansen 2000) that finds composition to emerge only when the number of concepts to be expressed becomes greater than a factor of the agent's symbol vocabulary capacity. Another important component leading to composition is training on a variety of tasks and environment configurations simultaneously. Training on cases where most information is clear from context (such as when there is only one landmark) leads to formation of atomic concepts that are reused compositionally in more complicated cases.
# Related Work

Recent years have seen substantial progress in practical natural language applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014), sentiment analysis (Socher et al. 2013), document summarization (Durrett, Berg-Kirkpatrick, and Klein 2016), and domain-specific dialogue (Dhingra et al. 2016). Much of this success is a result of intelligently designed statistical models trained on large static datasets. However, such approaches do not produce an understanding of language that can lead to productive cooperation with humans.
An interest in the pragmatic view of language understanding has been longstanding (Austin 1962; Grice 1975) and has recently been argued for in (Gauthier and Mordatch 2016; Lake et al. 2016; Lazaridou, Pham, and Baroni 2016). Pragmatic language use has been proposed in the context of two-player reference games (Golland, Liang, and Klein 2010; Vogel et al. 2014; Andreas and Klein 2016) focusing on the task of identifying object references through a learned language. (Winograd 1973; Wang, Liang, and Manning 2016) ground language in a physical environment and focus on language interaction with humans for completion of tasks in the physical environment. In such a pragmatic setting, language use for communication of spatial concepts has received particular attention in (Steels 1995; Ullman, Xu, and Goodman 2016).
Aside from producing agents that can interact with humans through language, research in pragmatic language understanding can be informative to the fields of linguistics and cognitive science. Of particular interest in these fields has been the question of how syntax and compositional structure in language emerged, and why it is largely unique to human languages (Kirby 1999; Nowak, Plotkin, and Jansen 2000; Steels 2005). Models such as Rational Speech Acts (Frank and Goodman 2012) and Iterated Learning (Kirby, Griffiths, and Smith 2014) have been popular in cognitive science and evolutionary linguistics, but such approaches tend to rely on pre-specified procedures or models that limit their generality.
The recent work that is most similar to ours is the application of reinforcement learning approaches towards the purposes of learning a communication protocol, as exemplified by (Bratman et al. 2010; Foerster et al. 2016; Sukhbaatar, Szlam, and Fergus 2016; Lazaridou, Peysakhovich, and Baroni 2016).
# Problem Formulation
The setting we are considering is a cooperative partially observable Markov game (Littman 1994), which is a multi-agent extension of a Markov decision process. A Markov game for N agents is defined by a set of states S describing the possible configurations of all agents, a set of actions A_1, ..., A_N, and a set of observations O_1, ..., O_N for each agent. Initial states are determined by a distribution ρ : S → [0, 1]. State transitions are determined by a function T : S × A_1 × ... × A_N → S. For each agent i, rewards are given by a function r_i : S × A_i → R, and observations are given by a function o_i : S → O_i. To choose actions, each agent i uses a stochastic policy π_i : O_i × A_i → [0, 1].
In this work, we assume all agents have identical action and observation spaces, and all agents act according to the same policy π and receive a shared reward. We consider a finite horizon setting, with episode length T. In a cooperative setting, the problem is to find a policy that maximizes the expected shared return for all agents, which can be solved as a joint maximization problem:
$$\max_{\pi} R(\pi), \quad \text{where} \quad R(\pi) = \mathbb{E}\left[ \sum_{t=0}^{T} \sum_{i=0}^{N} r_i(s^t, a^t) \right]$$
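As a concrete reading of this objective, here is a minimal Monte Carlo sketch of estimating the shared return by rolling out episodes; the `env` and `policy` interfaces are hypothetical stand-ins, not part of the paper:

```python
import numpy as np

def estimate_shared_return(env, policy, T, n_rollouts=64):
    """Monte Carlo estimate of R(pi) = E[sum_{t=0}^{T} sum_i r_i(s^t, a^t)]
    under a shared reward. Assumed interfaces: env.reset() -> list of
    per-agent observations; env.step(actions) -> (observations, rewards)."""
    total = 0.0
    for _ in range(n_rollouts):
        obs = env.reset()
        for _ in range(T + 1):              # t = 0 .. T
            actions = [policy(o) for o in obs]
            obs, rewards = env.step(actions)
            total += float(np.sum(rewards))  # shared reward summed over agents
    return total / n_rollouts
```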
Figure 1: An example of environments we consider.
# Grounded Communication Environment
As argued in the introduction, grounding multi-agent communication in a physical environment is crucial for interesting communication behaviors to emerge. In this work, we consider a physically-simulated two-dimensional environment in continuous space and discrete time. This environment consists of N agents and M landmarks. Both agent and landmark entities inhabit a physical location in space p and possess descriptive physical characteristics, such as color and shape type. In addition, agents can direct their gaze to a location v. Agents can act to move in the environment and direct their gaze, but may also be affected by physical interactions with other agents. We denote the physical state of an entity (including descriptive characteristics) by x and describe its precise details and transition dynamics in the Appendix.
In addition to performing physical actions, agents utter verbal communication symbols c at every timestep. These utterances are discrete elements of an abstract symbol vocabulary C of size K. We do not assign any significance or meaning to these symbols. They are treated as abstract categorical variables that are emitted by each agent and observed by all other agents. It is up to the agents at training time to assign meaning to these symbols. As shown in the experiments, these symbols become assigned to interpretable concepts. Agents may also choose not to utter anything at a given timestep, and there is a cost to making an utterance, loosely representing the metabolic effort of vocalization. We denote a vector representing the one-hot encoding of symbol c with boldface c. Each agent has internal goals specified by a vector g that are private and not observed by other agents. These goals are grounded in the physical environment and include tasks such as moving to or gazing at a location. These goals may involve other agents (requiring the other agent to move to a location, for example) but are not observed by them and thus necessitate coordination and communication between agents. Verbal utterances are one tool which the agents can use to cooperatively accomplish all goals, but we also observe emergent use of non-verbal signals and altogether non-communicative strategies.
To aid in accomplishing goals, each agent has an internal recurrent memory bank m that is also private and not observed by other agents. This memory bank has no pre-designed behavior and it is up to the agents to learn to utilize it appropriately.
The full state of the environment is given by s = [ x_{1,...,(N+M)}  c_{1,...,N}  m_{1,...,N}  g_{1,...,N} ] ∈ S. Each agent observes the physical states of all entities in the environment, the verbal utterances of all agents, and its own private memory and goal vector. The observation for agent i is o_i(s) = [ ᵢx_{1,...,(N+M)}  c_{1,...,N}  m_i  g_i ], where ᵢx_j is the observation of entity j's physical state in agent i's reference frame (see Appendix for details). More intricate observation models are possible, such as physical observations solely from pixels or verbal observations from a single input channel. These models would require agents learning to perform visual processing and source separation, which are orthogonal to this work. Despite the dimensionality of observations varying with the number of physical entities and communication streams, our policy architecture as described in the Policy Architecture section allows a single policy parameterization across these variations.
Figure 2: The transition dynamics of N agents from time t−1 to t. Dashed lines indicate one-to-one dependencies between agents and solid lines indicate all-to-all dependencies.
# Policy Learning with Backpropagation

Each agent acts by sampling actions from a stochastic policy π, which is identical for all agents and defined by parameters θ. There are several common options for finding optimal policy parameters. The model-free framework of Q-learning can be used to find the optimal state-action value function, and employ a policy that acts greedily according to the value function. Unfortunately, the Q function dimensionality scales quadratically with communication vocabulary size, which can quickly become intractably large. Alternatively, it is possible to directly learn a policy function using model-free policy gradient methods, which use sampling to estimate the gradient of the policy return dR/dθ. The gradient estimates from these methods can exhibit very high variance, and credit assignment becomes an especially difficult problem in the presence of sequential communication actions.
Instead of using model-free reinforcement learning methods, we build an end-to-end differentiable model of all agent and environment state dynamics over time and calculate dR/dθ with backpropagation. At every optimization iteration, we sample a new batch of 1024 random environment instantiations and backpropagate their dynamics through time to calculate the total return gradient. Figure 2 shows the dependency chain between two timesteps. A similar approach was employed by (Foerster et al. 2016; Sukhbaatar, Szlam, and Fergus 2016) to compute gradients for communication actions, although the latter still employed model-free methods for physical action computation. The physical state dynamics, including discontinuous contact events, can be made differentiable with smoothing. However, communication actions require emission of discrete symbols, which present difficulties for backpropagation.
# Discrete Communication and Gumbel-Softmax Estimator

In order to use categorical communication emissions c in our setting, it must be possible to differentiate through them. There has been a wealth of work in machine learning on differentiable models with discrete variables, but we found the recent approach of (Jang, Gu, and Poole 2016; Maddison, Mnih, and Teh 2016) to be particularly effective in our setting. The approach proposes a Gumbel-Softmax distribution, which is a continuous relaxation of a discrete categorical distribution. Given K-categorical distribution parameters p, a differentiable K-dimensional one-hot encoding sample G from the Gumbel-Softmax distribution can be calculated as:
$$G_k(\log p) = \frac{\exp\left((\log p_k + \varepsilon_k)/\tau\right)}{\sum_{j} \exp\left((\log p_j + \varepsilon_j)/\tau\right)}$$
where ε are i.i.d. samples from the Gumbel(0, 1) distribution, ε = −log(−log(u)), u ∼ U[0, 1], and τ is a softmax temperature parameter. We did not find it necessary to anneal the temperature and set it to 1 in all our experiments for training, and we sample directly from the categorical distribution at test time. To emit a communication symbol, our policy is trained to directly output log p ∈ R^K, which is transformed to a symbol emission sample c ∼ G(log p). The resulting gradient can be estimated as dc/dθ ≈ dG/dθ.
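A minimal NumPy sketch of this sampler (with τ = 1, as used here for training) may make the construction concrete; in practice it would live inside an autodiff framework so that dG/dθ is available:

```python
import numpy as np

def gumbel_softmax_sample(log_p, tau=1.0, rng=np.random):
    """Draw a relaxed one-hot sample G(log p) from the Gumbel-Softmax
    distribution with temperature tau."""
    u = rng.uniform(low=1e-10, high=1.0, size=log_p.shape)
    eps = -np.log(-np.log(u))        # i.i.d. Gumbel(0, 1) noise
    logits = (log_p + eps) / tau
    logits = logits - logits.max()   # numerical stabilization
    g = np.exp(logits)
    return g / g.sum()               # sums to 1; argmax gives the symbol
```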
# Policy Architecture

The policy class we consider in this work are stochastic neural networks. The policy outputs samples of an agent's physical actions u, communication symbol utterance c, and internal memory updates Δm. The policy must consolidate multiple incoming communication symbol streams emitted by other agents, as well as incoming observations of physical entities. Importantly, the number of agents (and thus the number of communication streams) and the number of physical entities can vary between environment instantiations. To support this, the policy instantiates a collection of identical processing modules for each communication stream and each observed physical entity. Each processing module is a fully-connected multi-layer perceptron. The weights between all communication processing and physical observation modules are shared. The outputs of individual processing modules are pooled with a softmax operation into feature vectors φ_c and φ_x for the communication and physical observation streams, respectively. Such weight sharing and pooling makes it possible to apply the same policy parameters to any number of communication and physical observations.

Figure 3: Overview of our policy architecture, mapping observations to actions at every point in time. FC indicates a fully-connected processing module that shares weights with all others of its label. pool indicates a softmax pooling layer.
The pooled features and the agent's private goal vector are passed to the final processing module that outputs distribution parameters [ ψ_u ψ_c ] from which action samples are generated as u = ψ_u + ε and c ∼ G(ψ_c), where ε is zero-mean Gaussian noise.
Unlike communication games where agents only emit a single utterance, our agents continually emit a stream of symbols over time. Thus, processing modules that read and write communication utterance streams benefit greatly from recurrent memory that can capture the meaning of a stream over time. To this end, we augment each communication processing and output module with an independent internal memory state m, and each module outputs memory state updates Δm. In this work we use simple additive memory updates m_t = tanh(m_{t−1} + Δm_{t−1} + ε) for simplicity and interpretability, but other memory architectures such as LSTMs can be used. We build all fully-connected modules with 256 hidden units and 2 layers each in all our experiments, using exponential-linear units and dropout with a rate of 0.1 between all hidden layers. The size of the feature vectors φ is 256 and the size of each memory module is 32. The overall policy architecture is shown in Figure 3.
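The paper does not spell out the pooling formula beyond calling it a softmax pooling layer; one common reading, sketched below, is an elementwise softmax-weighted sum over the per-stream module outputs, which keeps the pooled feature size fixed for any number of streams:

```python
import numpy as np

def softmax_pool(features):
    """Pool a variable number of per-stream feature vectors into a single
    fixed-size vector with an elementwise softmax-weighted sum, so one set
    of policy parameters handles any number of entities or streams."""
    f = np.stack(features)            # shape: (num_streams, feature_dim)
    w = np.exp(f - f.max(axis=0))     # stabilized softmax across streams
    w = w / w.sum(axis=0)
    return (w * f).sum(axis=0)        # shape: (feature_dim,)
```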
# Auxiliary Prediction Reward

To help policy training avoid local minima in more complex environments, we found it helpful to include auxiliary goal prediction tasks, similar to recent work in reinforcement learning (Dosovitskiy and Koltun 2016; Silver et al. 2016). In agent i's policy, each communication processing module j additionally outputs a prediction ĝ_{i,j} of agent j's goals. We do not use ĝ as an input in calculating actions; it is only used for the purposes of the auxiliary prediction task. At the end of the episode, we add a reward for predicting other agents' goals, which in turn encourages communication utterances that convey the agent's goals clearly to other agents. Across all agents this reward has the form:
$$r_g = -\sum_{\{i,j \mid i \neq j\}} \left\lVert \hat{g}_{i,j} - g_j \right\rVert^2$$
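Read literally, this reward is the negative squared error of each agent's prediction of every other agent's goal; a direct NumPy transcription (the array shapes are our own convention, not from the paper):

```python
import numpy as np

def goal_prediction_reward(g_hat, g):
    """Auxiliary reward r_g.
    g_hat: (N, N, goal_dim), g_hat[i, j] is agent i's prediction of g[j].
    g:     (N, goal_dim), the true private goals."""
    n = g.shape[0]
    r = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:                      # only predictions of *other* agents
                r -= np.sum((g_hat[i, j] - g[j]) ** 2)
    return r
```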
# Compositionality and Vocabulary Size

What leads to compositional syntax formation? One known constructive hypothesis requires modeling the process of language transmission and acquisition from one generation of agents to the next iteratively, as in (Kirby, Griffiths, and Smith 2014). In such an iterated learning setting, compositionality emerges due to poverty of stimulus: one generation will only observe a limited number of symbol utterances from the previous generation and must infer the meaning of unseen symbols. This approach requires modeling language acquisition between agents, but when implemented with pre-designed rules it was shown over multiple iterations between generations to lead to the formation of a compositional vocabulary.
Alternatively, (Nowak, Plotkin, and Jansen 2000) observed that the emergence of compositionality requires the number of concepts describable by a language to be above a factor of the vocabulary size. In our preliminary environments the number of concepts to communicate is still fairly small and is within the capacity of a non-compositional language. We use a maximum vocabulary size K = 20 in all our experiments. We tested a smaller maximum vocabulary size, but found that policy optimization became stuck in a poor local minimum where concepts became conflated. Instead, we propose to use a large vocabulary size limit but use a soft penalty function to prevent the formation of unnecessarily large vocabularies. This allows the intermediate stages of policy optimization to explore large vocabularies, but then converge on an appropriate active vocabulary size. As shown in Figure 6, this is indeed what happens.
How do we penalize large vocabulary sizes? (Nowak, Plotkin, and Jansen 2000) proposed a word population dynamics model that defines the reproductive ratios of words to be proportional to their frequency, making already popular words more likely to survive. Inspired by these rich-get-richer dynamics, we model the communication symbols as being generated from a Dirichlet Process (Teh 2011). The probability of a communication symbol being symbol c_k is
$$p(c_k) = \frac{n_k}{\alpha + n - 1}$$
where n_k is the number of times symbol c_k has been uttered and n is the total number of symbols uttered. These counts are accumulated over agents, timesteps, and batch entries. α is a Dirichlet Process hyperparameter corresponding to the probability of observing an out-of-vocabulary word. The resulting reward across all agents is the log-likelihood of all communication utterances to independently have been generated by a Dirichlet Process:
$$r_c = \sum_{i,t,k} \mathbb{1}[c_i^t = c_k] \log p(c_k)$$
Maximizing this reward leads to consolidation of symbols and the formation of compositionality. This approach is similar to encouraging code population sparsity in autoencoders (Ng 2011), which was shown to give rise to compositional representations for images.
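A sketch of this reward given accumulated symbol counts is below; note that the paper accumulates counts over agents, timesteps, and batch entries, and this version evaluates each symbol's probability once from the final counts rather than sequentially, which is an approximation:

```python
import numpy as np

def vocabulary_reward(symbol_counts, alpha=1.0):
    """Dirichlet-process log-likelihood of all utterances: each of the n_k
    uses of symbol k contributes log p(c_k) = log(n_k / (alpha + n - 1)),
    so rarely-used symbols are penalized and the vocabulary consolidates."""
    counts = np.asarray(symbol_counts, dtype=float)
    n = counts.sum()
    used = counts > 0                 # unused symbols contribute nothing
    log_p = np.log(counts[used] / (alpha + n - 1.0))
    return float((counts[used] * log_p).sum())
```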
# Experiments
We experimentally investigate how variation in goals, environment configuration, and agents' physical capabilities leads to different communication strategies. In this work, we consider three types of actions an agent needs to perform: go to location, look at location, and do nothing. The goal for agent i consists of an action to perform, a location r̄ to perform it at, and an agent r that should perform that action. These goal properties are accumulated into a goal description vector g. These goals are private to each agent, but may involve other agents. For example, agent i may want agent r to go to location r̄. This goal is not observed by agent r, and requires communication between agents i and r. The goals are assigned to agents such that no agent receives conflicting goals. We do, however, show generalization in the presence of conflicting goals in the Generalization to Unseen Configurations section.
Agents can only communicate in discrete symbols and have individual reference frames without a shared global positioning reference (see Appendix), so they cannot directly send a goal position vector. What makes the task possible is that we place goal locations r̄ on landmark locations, which are observed by all agents (in their individual reference frames). The strategy then is for agent i to unambiguously communicate a landmark reference to agent r. Importantly, we do not provide an explicit association between goal positions and landmark references. It is up to the agents to learn to associate a position vector with a set of landmark properties and communicate them with discrete symbols.
In the results that follow, agents do not observe other agents. This disallows the capacity for non-verbal communication, necessitating the use of language. In the Non-verbal Communication section we report what happens when agents are able to observe each other and the capacity for non-verbal communication is available.
Despite training with a continuous relaxation of the categorical distribution, we observe very similar reward performance at test time. A no-communication condition is provided as a baseline (again, non-verbal communication is not possible). The no-communication strategy is for all agents to go towards the centroid of all landmarks.
| Condition | Train Reward | Test Reward |
|---|---|---|
| No Communication | -0.919 | -0.920 |
| Communication | -0.332 | -0.392 |

Table 1: Training and test physical reward for the setting with and without communication.
Figure 4: A collection of typical sequences of events in our environments shown over time. Each row is an independent trial. Large circles represent agents and small circles represent landmarks. Communication symbols are shown next to the agent making the utterance. The labels for abstract communication symbols are chosen purely for visualization and ... represents the silence symbol.
# Syntactic Structure

We observe a compositional syntactic structure emerging in the stream of symbols uttered by agents. When trained on environments with only two agents, but multiple landmarks and actions, we observe symbols forming for each of the landmark colors and each of the action types. A typical conversation and physical agent configuration is shown in the first row of Figure 4 and is as follows:
Green Agent: GOTO, GREEN, ...
Blue Agent: GOTO, BLUE, ...
The labels for abstract symbols are chosen by us purely for interpretability and visualization and carry no meaning for training. While there is recent work on interpreting continuous machine languages (Andreas, Dragan, and Klein 2017), the discrete nature and small size of our symbol vocabulary make it possible to manually assign labels to the symbols. See the results in the supplementary video for consistency of the vocabulary usage.
Physical environment considerations play a part in the syntactic structure. The action type verb GOTO is uttered first because actions take time to accomplish in the grounded environment. When the agent receives the GOTO symbol it starts moving toward the centroid of all the landmarks (to be equidistant from all of them) and then moves towards the specific landmark when it receives its color identity.
When the environment configuration can contain more than three agents, agents need to form symbols for referring to each other. Three new symbols form to refer to agent colors that are separate in meaning from landmark colors. Typical conversations are shown in the second and third rows of Figure 4.
Red Agent: GOTO, RED, BLUE-AGENT, ...
Green Agent: ..., ..., ..., ...
Blue Agent: RED-AGENT, GREEN, LOOKAT, ...
Agents may omit utterances when they are the subject of their own private goal, in which case they have access to that information and have no need to announce it. In this language, there is no set ordering to word utterances. Each symbol contributes to sentence meaning independently, similar to the case marking grammatical strategies used in many human languages (Beuls and Steels 2013).
The agents largely settle on using a consistent set of symbols for each meaning, due to vocabulary size penalties that discourage synonyms. We show the aggregate streams of communication utterances in Figure 5.
Figure 5: Communication symbol streams emitted by agents over time before and after training, accumulated over 10 thousand test trials.
In simplified environment configurations where there is only one landmark or one type of action to take, no symbols are formed to refer to those concepts because they are clear from context.
# Symbol Vocabulary Usage

We find word activation counts to settle on the appropriate compositional word counts. Early during training, large vocabulary sizes are taken advantage of to explore the space of communication possibilities before settling on the appropriate effective vocabulary sizes, as shown in Figure 6. In this figure, the 1x1x3 case refers to an environment with two agents and a single action, which requires only communicating one of three landmark identities. 1x2x3 contains two types of actions, and the 3x3x3 case contains three agents that require explicit referencing.
Figure 6: Word activation counts for different environment configurations over training iterations.

# Generalization to Unseen Configurations

One of the advantages of decentralized execution policies is that trained agents can be placed into arbitrarily-sized groups and still function reasonably. When there are additional agents in the environment with the same color identity, all agents of the same color will perform the same task if they are being referred to. Additionally, when agents of a particular color are asked to perform two conflicting tasks (such as being asked to go to two different landmarks by two different agents), they will perform the average of the conflicting goals assigned to them. Such cases occur despite never having been seen during training.
Due to the modularized observation architecture, the number of landmarks in the environment can also vary between training and execution. The agents perform sensible behaviors with different numbers of landmarks, despite not being trained in such environments. For example, when there are distractor landmarks of novel colors, the agents never go towards them. When there are multiple landmarks of the same color, the agent communicating the goal still utters the landmark color (because the goal is the position of one of the landmarks). However, the agents receiving the landmark color utterance go towards the centroid of all landmarks of the same color, showing a very sensible generalization strategy. An example of such a case is shown in the fourth row of Figure 4.
# Non-verbal Communication and Other Strategies

The presence of a physical environment also allows for alternative strategies aside from language use to accomplish goals. In this set of experiments we enable agents to observe other agents' position and gaze location, and in turn disable the communication capability via symbol utterances. When agents can observe each other's gaze, a pointing strategy forms where an agent can communicate a landmark location by gazing in its direction, which the recipient correctly interprets and moves towards. When the gazes of other agents cannot be observed, we see the behavior of the goal sender agent moving towards the location assigned to the goal recipient agent (despite receiving no explicit reward for doing so), in order to guide the goal recipient to that location. Lastly, when neither visual nor verbal observation is available on the part of the goal recipient, we observe the behavior of the goal sender directly pushing the recipient to the target location. Examples of such strategies are shown in Figure 7 and the supplementary video. It is important to us to build an environment with a diverse set of capabilities alongside which language use develops.
Figure 7: Examples of non-verbal communication strategies, such as pointing, guiding, and pushing.
# Conclusion

We have presented a multi-agent environment and learning methods that bring about the emergence of an abstract compositional language from grounded experience. This abstract language is formed without any exposure to human language use. We investigated how variation in environment configuration and the physical capabilities of agents affect the communication strategies that arise.
In the future, we would like to experiment with a larger number of actions that necessitate more complex syntax and larger vocabularies. We would also like to integrate exposure to human language to form communication strategies that are compatible with human use.
# Acknowledgements

We thank the OpenAI team for helpful comments and fruitful discussions. This work was funded in part by ONR PECASE N000141612723.
# References

[Andreas and Klein 2016] Andreas, J., and Klein, D. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, 1173-1182.
[Andreas, Dragan, and Klein 2017] Andreas, J.; Dragan, A.; and Klein, D. 2017. Translating neuralese.
[Austin 1962] Austin, J. 1962. How to Do Things with Words. Oxford.
[Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
[Beuls and Steels 2013] Beuls, K., and Steels, L. 2013. Agent-based models of strategies for the emergence and evolution of grammatical agreement. PloS one 8(3):e58960.
[Bratman et al. 2010] Bratman, J.; Shvartsman, M.; Lewis, R. L.; and Singh, S. 2010. A new approach to exploring language emergence as boundedly optimal control in the face of environmental and cognitive constraints. In Proceedings of the 10th International Conference on Cognitive Modeling, 7-12. Citeseer.
[Dhingra et al. 2016] Dhingra, B.; Li, L.; Li, X.; Gao, J.; Chen, Y.-N.; Ahmed, F.; and Deng, L. 2016. End-to-end reinforcement learning of dialogue agents for information access. arXiv preprint arXiv:1609.00777.
[Dosovitskiy and Koltun 2016] Dosovitskiy, A., and Koltun, V. 2016. Learning to act by predicting the future. arXiv preprint arXiv:1611.01779.
[Durrett, Berg-Kirkpatrick, and Klein 2016] Durrett, G.; Berg-Kirkpatrick, T.; and Klein, D. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. arXiv preprint arXiv:1603.08887.
[Foerster et al. 2016] Foerster, J. N.; Assael, Y. M.; de Freitas, N.; and Whiteson, S. 2016. Learning to communicate with deep multi-agent reinforcement learning.
[Frank and Goodman 2012] Frank, M. C., and Goodman, N. D. 2012. Predicting pragmatic reasoning in language games. Science 336(6084):998.
[Gauthier and Mordatch 2016] Gauthier, J., and Mordatch, I. 2016. A paradigm for situated and goal-driven language learning. CoRR abs/1610.03585.
[Golland, Liang, and Klein 2010] Golland, D.; Liang, P.; and Klein, D. 2010. A game-theoretic approach to generating spatial descriptions. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, 410-419. Stroudsburg, PA, USA: Association for Computational Linguistics.
[Grice 1975] Grice, H. P. 1975. Logic and conversation. In Cole, P., and Morgan, J. L., eds., Syntax and Semantics, Vol. 3: Speech Acts, 41-58. San Diego, CA: Academic Press.
[Jang, Gu, and Poole 2016] Jang, E.; Gu, S.; and Poole, B. 2016. Categorical reparameterization with Gumbel-Softmax. ArXiv e-prints.
[Kirby, Griffiths, and Smith 2014] Kirby, S.; Griffiths, T.; and Smith, K. 2014. Iterated learning and the evolution of language. Current Opinion in Neurobiology 28:108-114.
[Kirby 1999] Kirby, S. 1999. Syntax out of learning: the cultural evolution of structured communication in a population of induction algorithms.
[Kirby 2001] Kirby, S. 2001. Spontaneous evolution of linguistic structure: an iterated learning model of the emergence of regularity and irregularity. IEEE Transactions on Evolutionary Computation 5(2):102-110.
[Lake et al. 2016] Lake, B. M.; Ullman, T. D.; Tenenbaum, J. B.; and Gershman, S. J. 2016. Building machines that learn and think like people. CoRR abs/1604.00289.
[Lazaridou, Peysakhovich, and Baroni 2016] Lazaridou, A.; Peysakhovich, A.; and Baroni, M. 2016. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182.
[Lazaridou, Pham, and Baroni 2016] Lazaridou, A.; Pham, N. T.; and Baroni, M. 2016. Towards multi-agent communication-based language learning. arXiv preprint arXiv:1605.07133.
[Littman 1994] Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, volume 157, 157-163.
[Maddison, Mnih, and Teh 2016] Maddison, C. J.; Mnih, A.; and Teh, Y. W. 2016. The Concrete distribution: a continuous relaxation of discrete random variables. CoRR abs/1611.00712.
[Ng 2011] Ng, A. 2011. Sparse autoencoder. CS294A Lecture Notes 72(2011):1-19.
[Nowak, Plotkin, and Jansen 2000] Nowak, M. A.; Plotkin, J. B.; and Jansen, V. A. A. 2000. The evolution of syntactic communication. Nature 404(6777):495-498.
[Silver et al. 2016] Silver, D.; van Hasselt, H.; Hessel, M.; Schaul, T.; Guez, A.; Harley, T.; Dulac-Arnold, G.; Reichert, D.; Rabinowitz, N.; Barreto, A.; et al. 2016. The predictron: end-to-end learning and planning. arXiv preprint arXiv:1612.08810.
[Socher et al. 2013] Socher, R.; Perelygin, A.; Wu, J. Y.; Chuang, J.; Manning, C. D.; Ng, A. Y.; and Potts, C. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 1631-1642. Citeseer.
[Steels 1995] Steels, L. 1995. A self-organizing spatial vocabulary. Artificial Life 2(3):319-332.
[Steels 2005] Steels, L. 2005. What triggers the emergence of grammar? In AISB'05: Proceedings of the Second International Symposium on the Emergence and Evolution of Linguistic Communication (EELC'05), 143-150. University of Hertfordshire.
[Sukhbaatar, Szlam, and Fergus 2016] Sukhbaatar, S.; Szlam, A.; and Fergus, R. 2016. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems 29, 2244-2252.
[Sutskever, Vinyals, and Le 2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, 3104-3112.
[Teh 2011] Teh, Y. W. 2011. Dirichlet process. In Encyclopedia of Machine Learning. Springer. 280-287.
[Ullman, Xu, and Goodman 2016] Ullman, T.; Xu, Y.; and Goodman, N. 2016. The pragmatics of spatial language. In Proceedings of the Cognitive Science Society.
[Vogel et al. 2014] Vogel, A.; Gómez Emilsson, A.; Frank, M. C.; Jurafsky, D.; and Potts, C. 2014. Learning to reason pragmatically with cognitive limitations. In Proceedings of the 36th Annual Meeting of the Cognitive Science Society, 3055-3060. Wheat Ridge, CO: Cognitive Science Society.
[Wang, Liang, and Manning 2016] Wang, S. I.; Liang, P.; and Manning, C. 2016. Learning language games through interaction. In Association for Computational Linguistics (ACL).
[Winograd 1973] Winograd, T. 1973. A procedural model of language understanding.
# Appendix: Physical State and Dynamics
The physical state of the agent is specified by x = [ p ṗ v d ], where ṗ is the velocity of p and d ∈ R³ is the color associated with the agent. Landmarks have similar state, but without gaze and velocity components. The physical state transition dynamics for a single agent i are given by:

$$x_i^t = \begin{bmatrix} p^t \\ \dot{p}^t \\ v^t \end{bmatrix} = \begin{bmatrix} p^{t-1} + \dot{p}^t \Delta t \\ \gamma \dot{p}^{t-1} + \left(u_p + f(x_1, \ldots, x_N)\right) \Delta t \\ u_v \end{bmatrix}$$

where f(x_1, ..., x_N) are the physical interaction forces (such as collision) between all agents in the environment and any obstacles, Δt is the simulation timestep (we use 0.1), and (1 − γ) is a damping coefficient (we use 0.5). The action space of the agent is a = [ u_p u_v c ]. The observation of any location p_j in the reference frame of agent i is ᵢp_j = R_i(p_j − p_i), where R_i is the random rotation matrix of agent i. Giving each agent a private random orientation prevents identifying landmarks in a shared coordinate frame (using words such as top-most or left-most).
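A direct transcription of these dynamics for one agent is sketched below; `f_interaction` stands in for the contact forces f(x_1, ..., x_N), whose exact form is not specified here:

```python
import numpy as np

DT = 0.1      # simulation timestep Delta-t from the appendix
GAMMA = 0.5   # gamma; the damping coefficient 1 - gamma is 0.5

def step_agent(p, p_dot, u_p, u_v, f_interaction):
    """One physical transition for a single agent: damped velocity update
    driven by the movement action plus interaction forces, position
    integration, and gaze set directly by the gaze action."""
    p_dot_new = GAMMA * p_dot + (u_p + f_interaction) * DT
    p_new = p + p_dot_new * DT
    v_new = u_v
    return p_new, p_dot_new, v_new
```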
# Sharp Minima Can Generalize For Deep Nets
# Laurent Dinh (1), Razvan Pascanu (2), Samy Bengio (3), Yoshua Bengio (1, 4)

(1) Université de Montréal, Montréal, Canada; (2) DeepMind, London, United Kingdom; (3) Google Brain, Mountain View, United States; (4) CIFAR Senior Fellow. Correspondence to: Laurent Dinh <laurent.dinh@umontreal.ca>.
# Abstract

Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
# Introduction

Deep learning techniques have been very successful in several domains, like object recognition in images (e.g. Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016), machine translation (e.g. Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016; Gehring et al., 2016) and speech recognition (e.g. Graves et al., 2013; Hannun et al., 2014; Chorowski et al., 2015; Chan et al., 2016; Collobert et al., 2016). Several arguments have been brought forward to justify these empirical results. From a representational point of view, it has been argued that deep networks can efficiently approximate certain functions (e.g. Montufar et al., 2014; Raghu et al., 2016). Other works (e.g. Dauphin et al., 2014; Choromanska et al., 2015) have looked at the structure of the error surface to analyze how trainable these models are. Finally, another point of discussion is how well these models can generalize (Nesterov & Vial, 2008; Keskar et al., 2017; Zhang et al., 2017). These correspond, respectively, to low approximation, optimization and estimation error as described by Bottou (2010).

Our work focuses on the analysis of the estimation error. In particular, different approaches have been used to look at the question of why stochastic gradient descent results in solutions that generalize well (Bottou & LeCun, 2005; Bottou & Bousquet, 2008). For example, Duchi et al. (2011); Nesterov & Vial (2008); Hardt et al. (2016); Bottou et al. (2016); Gonen & Shalev-Shwartz (2017) rely on the concept of stochastic approximation or uniform stability (Bousquet & Elisseeff, 2002). Another conjecture that was recently (Keskar et al., 2017) explored, but that could be traced back to Hochreiter & Schmidhuber (1997), relies on the geometry of the loss function around a given solution. It argues that flat minima, for some definition of flatness, lead to better generalization. Our work focuses on this particular conjecture, arguing that there are critical issues when applying the concept of flat minima to deep neural networks, which require rethinking what flatness actually means.

While the concept of flat minima is not well defined, having slightly different meanings in different works, the intuition is relatively simple. If one imagines the error as a one-dimensional curve, a minimum is flat if there is a wide region around it with roughly the same error; otherwise the minimum is sharp. When moving to higher dimensional spaces, defining flatness becomes more complicated. In Hochreiter & Schmidhuber (1997) it is defined as the size of the connected region around the minimum where the training loss is relatively similar. Chaudhari et al. (2017) relies, in contrast, on the curvature of the second order structure around the minimum, while Keskar et al. (2017) looks at the maximum loss in a bounded neighbourhood of the minimum. All these works rely on the fact that flatness results in robustness to low precision arithmetic or noise in the parameter space, which, using a minimum description length-based argument, suggests a low expected overfitting.
However, several common architectures and parametrizations in deep learning are already at odds with this conjecture, requiring at least some degree of refinement in the statements made. In particular, we show how the geometry of the associated parameter space can alter the ranking between prediction functions when considering several measures of flatness/sharpness. We believe the reason for this contradiction stems from the Bayesian arguments about KL-divergence made to justify the generalization ability of flat minima (Hinton & Van Camp, 1993). Indeed, Kullback-Leibler divergence is invariant to change of parameters whereas the notion of "flatness" is not. The demonstrations of Hochreiter & Schmidhuber (1997) are approximately based on a Gibbs formalism and rely on strong assumptions and approximations that can compromise the applicability of the argument, including the assumption of a discrete function space.
# 2 Definitions of flatness/sharpness

For conciseness, we will restrict ourselves to supervised scalar output problems, but several conclusions in this paper can apply to other problems as well. We will consider a function f that takes as input an element x from an input space X and outputs a scalar y. We will denote by f_θ the prediction function. This prediction function will be parametrized by a parameter vector θ in a parameter space Θ. Often, this prediction function will be over-parametrized, and two parameters (θ, θ′) ∈ Θ² that yield the same prediction function everywhere, ∀x ∈ X, f_θ(x) = f_θ′(x), are called observationally equivalent. The model is trained to minimize a continuous loss function L which takes as argument the prediction function f_θ. We will often think of the loss L as a function of θ and adopt the notation L(θ).

The notion of flatness/sharpness of a minimum is relative, therefore we will discuss metrics that can be used to compare the relative flatness between two minima. In this section we will formalize three definitions of flatness used in the literature.

Hochreiter & Schmidhuber (1997) define a flat minimum as "a large connected region in weight space where the error remains approximately constant". We interpret this formulation as follows:

Definition 1. Given ε > 0, a minimum θ, and a loss L, we define C(L, θ, ε) as the largest (using inclusion as the partial order over the subsets of Θ) connected set containing θ such that ∀θ′ ∈ C(L, θ, ε), L(θ′) < L(θ) + ε. The ε-flatness will be defined as the volume of C(L, θ, ε). We will call this measure the volume ε-flatness.

In Figure 1, C(L, θ, ε) will be the purple line at the top of the red area if the height is ε, and its volume will simply be the length of the purple line.

Figure 1: An illustration of the notion of flatness. The loss L as a function of θ is plotted in black. If the height of the red area is ε, the width will represent the volume ε-flatness. If the width is 2ε, the height will then represent the ε-sharpness. Best seen with colors.
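As a toy illustration of Definition 1, the following sketch measures the volume ε-flatness of a one-dimensional loss by finding the largest connected interval around θ where the loss stays below L(θ) + ε (assuming `loss` accepts NumPy arrays and the interval fits within the scanned span):

```python
import numpy as np

def volume_eps_flatness_1d(loss, theta, eps, span=10.0, n=100001):
    """Length of the largest connected interval around theta (within the
    scanned window) where loss stays strictly below loss(theta) + eps."""
    ts = np.linspace(theta - span, theta + span, n)
    ok = loss(ts) < loss(theta) + eps
    i = int(np.argmin(np.abs(ts - theta)))   # grid index closest to theta
    lo, hi = i, i
    while lo > 0 and ok[lo - 1]:
        lo -= 1
    while hi < n - 1 and ok[hi + 1]:
        hi += 1
    return ts[hi] - ts[lo]
```

For example, with loss L(θ) = θ² at θ = 0 and ε = 0.01, this returns roughly 0.2, the width of the interval (−0.1, 0.1).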
Flatness can also be defined using the local curvature of the loss function around the minimum, if it is a critical point¹. Chaudhari et al. (2017); Keskar et al. (2017) suggest that this information is encoded in the eigenvalues of the Hessian. However, in order to compare how flat one minimum is versus another, the eigenvalues need to be reduced to a single number. Here we consider the spectral norm and trace of the Hessian, two typical measurements of the eigenvalues of a matrix.

¹ In this paper, we will often assume that this is the case when dealing with Hessian-based measures, in order to have them well-defined.
Additionally, Keskar et al. (2017) defines the notion of ε-sharpness. In order to make proofs more readable, we will slightly modify their definition. However, because of norm equivalence in finite dimensional space, our results will transfer to the original definition in full space as well. Our modified definition is the following:

Definition 2. Let B₂(ε, θ) be the Euclidean ball centered on a minimum θ with radius ε. Then, for a non-negative valued loss function L, the ε-sharpness will be defined as proportional to
$$\frac{\max_{\theta' \in B_2(\epsilon, \theta)} \left( L(\theta') - L(\theta) \right)}{1 + L(\theta)}.$$
In Figure 1, if the width of the red area is 2ε, then the height of the red area is max_{θ′ ∈ B₂(ε, θ)} (L(θ′) − L(θ)).
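A crude way to estimate this quantity numerically is random search over the ball, which only lower-bounds the inner maximization (Keskar et al. (2017) instead solve it with an explicit optimizer); a sketch:

```python
import numpy as np

def epsilon_sharpness(loss, theta, eps, n_samples=10000, rng=np.random):
    """Random-search lower bound on the eps-sharpness of Definition 2.
    theta is a NumPy array; loss maps a parameter array to a scalar."""
    base = loss(theta)
    worst = 0.0
    n = theta.size
    for _ in range(n_samples):
        d = rng.normal(size=theta.shape)
        # uniform sample in the ball: uniform direction, radius eps * u^(1/n)
        d *= eps * rng.uniform() ** (1.0 / n) / np.linalg.norm(d)
        worst = max(worst, loss(theta + d) - base)
    return worst / (1.0 + base)
```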
ε-sharpness can be related to the spectral norm of the Hessian. Indeed, a second-order Taylor expansion of L around a critical point minimum is written

$$L(\theta') = L(\theta) + \frac{1}{2} (\theta' - \theta)^T (\nabla^2 L)(\theta) (\theta' - \theta) + o\left(\|\theta' - \theta\|^2\right).$$

In this second order approximation, the ε-sharpness at θ would be

$$\frac{\left\| (\nabla^2 L)(\theta) \right\|_2 \, \epsilon^2}{2 \left( 1 + L(\theta) \right)}.$$
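The spectral norm in this expression can be estimated without forming the Hessian, for example by power iteration on finite-difference Hessian-vector products; at a strict local minimum the Hessian is positive semi-definite, so the dominant eigenvalue found this way equals the spectral norm. A sketch, assuming `grad` returns ∇L:

```python
import numpy as np

def hessian_spectral_norm(grad, theta, iters=100, h=1e-5, rng=np.random):
    """Estimate ||(Hessian of L)(theta)||_2 by power iteration, using
    finite-difference Hessian-vector products
    Hv ~= (grad(theta + h v) - grad(theta - h v)) / (2 h)."""
    v = rng.normal(size=theta.shape)
    v /= np.linalg.norm(v)
    nrm = 0.0
    for _ in range(iters):
        hv = (grad(theta + h * v) - grad(theta - h * v)) / (2 * h)
        nrm = np.linalg.norm(hv)
        if nrm == 0.0:
            return 0.0
        v = hv / nrm
    return nrm
```

The second-order ε-sharpness of the formula above is then `nrm * eps**2 / (2 * (1 + loss(theta)))`.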
# 3 Properties of Deep Rectified Networks

Before moving forward to our results, in this section we first introduce the notation used in the rest of the paper. Most of our results, for clarity, will be on the deep rectified feedforward networks with a linear output layer that we describe below, though they can easily be extended to other architectures (e.g. convolutional, etc.).
Definition 3. Given K weight matrices (θ_k)_{k≤K} with n_k = dim(vec(θ_k)) and n = Σ_{k=1}^{K} n_k, the output y of a deep rectified feedforward network with a linear output layer is:

$$y = \phi_{\mathrm{rect}}\big(\phi_{\mathrm{rect}}\big(\cdots \phi_{\mathrm{rect}}(x \cdot \theta_1) \cdots\big) \cdot \theta_{K-1}\big) \cdot \theta_K,$$
where

• x is the input to the model, a high-dimensional vector;
• φ_rect is the rectified elementwise activation function (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011), which is the positive part (max(z_i, 0))_i applied elementwise;
• vec reshapes a matrix into a vector.

Figure 2: An illustration of the effects of non-negative homogeneity. The graph depicts level curves of the behavior of the loss L embedded into the two-dimensional parameter space with the axes given by θ1 and θ2. Specifically, each line of a given color corresponds to the parameter assignments (θ1, θ2) that result observationally in the same prediction function f_θ. Best seen with colors.
Note that in our definition we excluded the bias terms, usually found in any neural architecture. This is done mainly for convenience, to simplify the rendition of our arguments. However, the arguments can be extended to the case that includes biases (see Appendix B). Another choice is that of the linear output layer. Having an output activation function does not affect our argument either: since the loss is a function of the output activation, it can be rephrased as a function of the linear pre-activation.
Deep rectifier models have certain properties that allow us, in Section 4, to arbitrarily manipulate the flatness of a minimum.
An important topic for the optimization of neural networks is understanding the non-Euclidean geometry of the parameter space as imposed by the neural architecture (see, for example, Amari, 1998). In principle, when we take a step in parameter space what we expect to control is the change in the behavior of the model (i.e. the mapping of the input x to the output y). In principle we are not interested in the parameters per se, but rather only in the mapping they represent.
If one deï¬nes a measure for the change in the behavior of the model, which can be done under some assumptions, then, it can be used to deï¬ne, at any point in the parameter space, a metric that says what is the equivalent change in the parameters for a unit of change in the behavior of the model. As it turns out, for neural networks, this metric is not constant over Î. Intuitively, the metric is related to the curvature, and since neural networks can be highly non- linear, the curvature will not be constant. See Amari (1998); Pascanu & Bengio (2014) for more details. Coming back to the concept of ï¬atness or sharpness of a minimum, this metric should deï¬ne the ï¬atness.
However, the geometry of the parameter space is more com- plicated. Regardless of the measure chosen to compare two instantiations of a neural network, because of the structure of the model, it also exhibits a large number of symmet- ric conï¬gurations that result in exactly the same behavior. Because the rectiï¬er activation has the non-negative homo- geneity property, as we will see shortly, one can construct a continuum of points that lead to the same behavior, hence the metric is singular. Which means that one can exploit these directions in which the model stays unchanged to shape the neighbourhood around a minimum in such a way that, by most deï¬nitions of ï¬atness, this property can be controlled. See Figure 2 for a visual depiction, where the ï¬atness (given here as the distance between the different level curves) can be changed by moving along the curve.
Let us redefine, for convenience, the non-negative homogeneity property (Neyshabur et al., 2015; Lafond et al., 2016) below. Note that besides this property, the reason for studying the rectified linear activation is its widespread adoption (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016).

Definition 4. A given function $\phi$ is non-negative homogeneous if

$$\forall (z, \alpha) \in \mathbb{R} \times \mathbb{R}_+,\quad \phi(\alpha z) = \alpha\,\phi(z).$$

Theorem 1. The rectified function $\phi_{rect}(x) = \max(x, 0)$ is non-negative homogeneous.

Proof. Follows trivially from the constraint that $\alpha > 0$: since $x > 0 \Rightarrow \alpha x > 0$ iff $\alpha > 0$, we have $\max(\alpha x, 0) = \alpha \max(x, 0)$.

For a deep rectified neural network it means that:

$$\phi_{rect}\big(x \cdot (\alpha\theta_1)\big) \cdot \theta_2 = \phi_{rect}(x \cdot \theta_1) \cdot (\alpha\theta_2),$$

meaning that for this one (hidden) layer neural network, the parameters $(\alpha\theta_1, \theta_2)$ are observationally equivalent to $(\theta_1, \alpha\theta_2)$. This observational equivalence similarly holds for convolutional layers.
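This equivalence is easy to check numerically; the following sketch (arbitrary shapes and seed, our illustrative choices) verifies both the $(\alpha\theta_1, \theta_2) \sim (\theta_1, \alpha\theta_2)$ pairing above and the $(\alpha\theta_1, \alpha^{-1}\theta_2)$ rescaling used throughout what follows:

```python
import numpy as np

rng = np.random.default_rng(1)
phi_rect = lambda z: np.maximum(z, 0.0)
theta1, theta2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
x = rng.normal(size=(10, 4))
y = phi_rect(x @ theta1) @ theta2

for alpha in [0.01, 1.0, 100.0]:
    lhs = phi_rect(x @ (alpha * theta1)) @ theta2            # (alpha*th1, th2)
    rhs = phi_rect(x @ theta1) @ (alpha * theta2)            # (th1, alpha*th2)
    same = phi_rect(x @ (alpha * theta1)) @ (theta2 / alpha) # (alpha*th1, th2/alpha) leaves y unchanged
    print(alpha, np.allclose(lhs, rhs), np.allclose(same, y))  # True, True for every alpha > 0
```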
Given this non-negative homogeneity, if $(\theta_1, \theta_2) \neq (0, 0)$ then $\{(\alpha\theta_1, \alpha^{-1}\theta_2),\ \alpha > 0\}$ is an infinite set of observationally equivalent parameters, inducing a strong non-identifiability in this learning scenario. Other models like deep linear networks (Saxe et al., 2013), leaky rectifiers (He et al., 2015) or maxout networks (Goodfellow et al., 2013) also have this non-negative homogeneity property.

In what follows we will rely on such transformations, in particular we will rely on the following definition:

Definition 5. For a single hidden layer rectifier feedforward network we define the family of transformations

$$T_\alpha : (\theta_1, \theta_2) \mapsto (\alpha\theta_1, \alpha^{-1}\theta_2),$$

which we refer to as α-scale transformations.

Note that an α-scale transformation will not affect generalization, as the behavior of the function is identical. Also, while the transformation is only defined for a single layer rectified feedforward network, it can trivially be extended to any architecture having a single rectified network as a submodule, e.g. a deep rectified feedforward network. For simplicity and readability we will rely on this definition.

# 4 Deep Rectified networks and flat minima

In this section we exploit the resulting strong non-identifiability to showcase a few shortcomings of some definitions of flatness. Although an α-scale transformation does not affect the function represented, it allows us to significantly decrease several measures of flatness. For another definition of flatness, α-scale transformations show that all minima are equally flat.

# 4.1 Volume ε-flatness

Theorem 2. For a one-hidden layer rectified neural network of the form

$$y = \phi_{rect}(x \cdot \theta_1) \cdot \theta_2,$$

and a minimum $\theta = (\theta_1, \theta_2)$ such that $\theta_1 \neq 0$ and $\theta_2 \neq 0$: $\forall \epsilon > 0$, $C(L, \theta, \epsilon)$ has an infinite volume.

We will not consider the solutions $\theta$ where any of the weight matrices $\theta_1, \theta_2$ is zero, $\theta_1 = 0$ or $\theta_2 = 0$, as they result in a constant function which we will assume to give poor training performance. For $\alpha > 0$, the α-scale transformation $T_\alpha : (\theta_1, \theta_2) \mapsto (\alpha\theta_1, \alpha^{-1}\theta_2)$ has Jacobian determinant $\alpha^{n_1 - n_2}$, where once again $n_1 = \dim(\mathrm{vec}(\theta_1))$ and $n_2 = \dim(\mathrm{vec}(\theta_2))$. Note that the Jacobian determinant of this linear transformation is the change in volume induced by $T_\alpha$, and $T_\alpha \circ T_\beta = T_{\alpha\beta}$. We show below that there is a connected region containing $\theta$ with infinite volume where the error remains approximately constant.

Proof. We will first introduce a small region with approximately constant error around $\theta$ with non-zero volume. Given $\epsilon > 0$, and if we consider the loss function continuous with respect to the parameter, $C(L, \theta, \epsilon)$ is an open set containing $\theta$. Since we also have $\theta_1 \neq 0$ and $\theta_2 \neq 0$, let $r > 0$ be such that the $\ell_\infty$ ball $B_\infty(r, \theta)$ is in $C(L, \theta, \epsilon)$ and has empty intersection with $\{\theta' : \theta'_1 = 0 \text{ or } \theta'_2 = 0\}$. Let $v = (2r)^{n_1 + n_2} > 0$ be the volume of $B_\infty(r, \theta)$.

Since the Jacobian determinant of $T_\alpha$ is the multiplicative change of volume induced by $T_\alpha$, the volume of $T_\alpha(B_\infty(r, \theta))$ is $v\,\alpha^{n_1 - n_2}$. If $n_1 \neq n_2$, we can arbitrarily grow the volume of $T_\alpha(B_\infty(r, \theta))$, with error within an ε-interval of $L(\theta)$, by having $\alpha$ tend to $+\infty$ if $n_1 > n_2$ or to $0$ otherwise.

If $n_1 = n_2$, then $\forall \alpha > 0$, $T_\alpha(B_\infty(r, \theta))$ has volume $v$. Let $C' = \bigcup_{\alpha > 0} T_\alpha(B_\infty(r, \theta))$. $C'$ is a connected region where the error remains approximately constant, i.e. within an ε-interval of $L(\theta)$.

Let $\alpha' = \frac{\|\theta_1\|_\infty + r}{\|\theta_1\|_\infty - r} > 1$ (well-defined since $B_\infty(r, \theta)$ avoids $\theta'_1 = 0$, hence $\|\theta_1\|_\infty > r$). Since $B_\infty(r, \theta) = B_\infty(r, \theta_1) \times B_\infty(r, \theta_2)$,
where × is the Cartesian set product, we have

$$T_{\alpha'}\big(B_\infty(r, \theta)\big) = B_\infty(\alpha' r, \alpha'\theta_1) \times B_\infty(\alpha'^{-1} r, \alpha'^{-1}\theta_2).$$

Therefore, $T_{\alpha'}(B_\infty(r, \theta)) \cap B_\infty(r, \theta) = \emptyset$ (see Figure 3). Similarly, $B_\infty(r, \theta), T_{\alpha'}(B_\infty(r, \theta)), T_{\alpha'}^2(B_\infty(r, \theta)), \dots$ are disjoint and have volume $v$. We also have $T_{\alpha'}^k(B_\infty(r, \theta)) = T_{\alpha'^k}(B_\infty(r, \theta)) \subset C'$. The volume of $C'$ is then lower bounded by $0 < v + v + v + \cdots$ and is therefore infinite. $C(L, \theta, \epsilon)$ then has infinite volume too, making the volume ε-flatness of $\theta$ infinite.

Figure 3: An illustration of how we build different disjoint volumes using $T_{\alpha'}$. In this two-dimensional example, $T_{\alpha'}(B_\infty(r', \theta))$ and $B_\infty(r', \theta)$ have the same volume. $B_\infty(r', \theta), T_{\alpha'}(B_\infty(r', \theta)), T_{\alpha'}^2(B_\infty(r', \theta)), \dots$ will therefore be a sequence of disjoint constant volumes. $C'$ will therefore have an infinite volume. Best seen with colors.
This theorem generalizes to rectified neural networks in general with a similar proof. Given that every minimum has an infinitely large region (volume-wise) in which the error remains approximately constant, every minimum would be infinitely flat according to the volume ε-flatness. Since all minima are equally flat, it is not possible to use volume ε-flatness to gauge the generalization property of a minimum.

# 4.2 Hessian-based measures

The non-Euclidean geometry of the parameter space, coupled with the manifolds of observationally equal behavior of the model, allows one to move from one region of the parameter space to another, changing the curvature of the model without actually changing the function. This approach has been used with success to improve optimization, by moving from a region of high curvature to a region of well behaved curvature (e.g. Desjardins et al., 2015; Salimans & Kingma, 2016). In this section we look at two widely used measures of the Hessian, the spectral radius and trace, showing that either of these values can be manipulated without actually changing the behavior of the function. If the flatness of a minimum is defined by any of these quantities, then it can also be easily manipulated.

Theorem 3. The gradient and Hessian of the loss $L$ with respect to $\theta$ can be modified by $T_\alpha$.

Proof. Since

$$L(\theta_1, \theta_2) = L(\alpha\theta_1, \alpha^{-1}\theta_2),$$

we have then by differentiation

$$(\nabla L)(\theta_1, \theta_2) = (\nabla L)(\alpha\theta_1, \alpha^{-1}\theta_2) \begin{bmatrix} \alpha I_{n_1} & 0 \\ 0 & \alpha^{-1} I_{n_2} \end{bmatrix}
\;\Leftrightarrow\;
(\nabla L)(\alpha\theta_1, \alpha^{-1}\theta_2) = (\nabla L)(\theta_1, \theta_2) \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix}$$

and

$$(\nabla^2 L)(\alpha\theta_1, \alpha^{-1}\theta_2) = \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix} (\nabla^2 L)(\theta_1, \theta_2) \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix}.$$

Sharpest direction. Through these transformations we can easily find, for any critical point which is a minimum with non-zero Hessian, an observationally equivalent parameter whose Hessian has an arbitrarily large spectral norm.

Theorem 4. For a one-hidden layer rectified neural network of the form

$$y = \phi_{rect}(x \cdot \theta_1) \cdot \theta_2,$$

and critical point $\theta = (\theta_1, \theta_2)$ being a minimum for $L$, such that $(\nabla^2 L)(\theta) \neq 0$: $\forall M > 0, \exists \alpha > 0$ such that $\|(\nabla^2 L)(T_\alpha(\theta))\|_2 \geq M$, where $\|\cdot\|_2$ is the spectral norm.

Proof. The trace of a symmetric matrix is the sum of its eigenvalues and a real symmetric matrix can be diagonalized in $\mathbb{R}$; therefore, if the Hessian is non-zero, there is one non-zero positive diagonal element. Without loss of generality, we will assume that this non-zero element of value $\gamma > 0$ corresponds to an element of $\theta_1$. Therefore the Frobenius norm $\|(\nabla^2 L)(T_\alpha(\theta))\|_F$ of

$$(\nabla^2 L)(\alpha\theta_1, \alpha^{-1}\theta_2) = \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix} (\nabla^2 L)(\theta_1, \theta_2) \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix}$$

is lower bounded by $\alpha^{-2}\gamma$.

Since all norms are equivalent in finite dimension, there exists a constant $r > 0$ such that $r\,\|A\|_F \leq \|A\|_2$ for all symmetric matrices $A$. So by picking $\alpha \leq \sqrt{r\gamma/M}$, we are guaranteed that $\|(\nabla^2 L)(T_\alpha(\theta))\|_2 \geq M$.

Any minimum with non-zero Hessian is observationally equivalent to a minimum whose Hessian has an arbitrarily large spectral norm. Therefore, for any minimum in the loss function, if there exists another minimum that generalizes better, then there exists another minimum that generalizes better and is also sharper according to the spectral norm of the Hessian. The spectral norm of a critical point's Hessian becomes, as a result, less relevant as a measure of potential generalization error. Moreover, since the spectral norm lower bounds the trace for a positive semi-definite symmetric matrix, the same conclusion can be drawn for the trace.
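A quick numerical sketch of the mechanism behind Theorem 4, with a random positive semi-definite matrix standing in for an actual network Hessian (block sizes and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 3, 4
A = rng.normal(size=(n1 + n2, n1 + n2))
H = A @ A.T                                   # a generic PSD stand-in for the Hessian at theta

for alpha in [1.0, 0.1, 0.01]:
    D = np.diag(np.concatenate([np.full(n1, 1 / alpha), np.full(n2, alpha)]))
    H_alpha = D @ H @ D                       # Hessian at the equivalent parameter T_alpha(theta)
    print(alpha, np.linalg.norm(H_alpha, 2))  # spectral norm grows like alpha**-2 as alpha -> 0
```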
Many directions. However, some notion of sharpness might take into account the entire eigenspectrum of the Hessian as opposed to its largest eigenvalue; for instance, Chaudhari et al. (2017) describe the notion of wide valleys, allowing the presence of very few large eigenvalues. We can generalize the transformations between observationally equivalent parameters to deeper neural networks with $K - 1$ hidden layers: for $\alpha_k > 0$, $T_\alpha : (\theta_k)_{k \leq K} \mapsto (\alpha_k\theta_k)_{k \leq K}$ with $\prod_{k=1}^{K} \alpha_k = 1$. If we define

$$D_\alpha = \begin{bmatrix} \alpha_1^{-1} I_{n_1} & 0 & \cdots & 0 \\ 0 & \alpha_2^{-1} I_{n_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_K^{-1} I_{n_K} \end{bmatrix},$$

then the first and second derivatives at $T_\alpha(\theta)$ will be

$$(\nabla L)(T_\alpha(\theta)) = (\nabla L)(\theta)\,D_\alpha, \qquad (\nabla^2 L)(T_\alpha(\theta)) = D_\alpha\,(\nabla^2 L)(\theta)\,D_\alpha.$$
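Before formalizing this with Definition 6 and Theorem 5 below, a small numerical sketch of the effect, with a random well-conditioned matrix standing in for the Hessian (sizes, β and the threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [5, 5, 2]                              # n_1, n_2, n_3 with min_k(n_k) = n_K = 2
n = sum(sizes)
A = rng.normal(size=(n, n))
H = A @ A.T + np.eye(n)                        # PSD stand-in for the Hessian, eigenvalues >= 1

beta = 100.0                                   # alpha_k = 1/beta for k < K, alpha_K = beta**(K-1)
alphas = [1 / beta, 1 / beta, beta**2]
print(np.prod(alphas))                         # 1.0: the product constraint holds
D = np.diag(np.concatenate([np.full(nk, 1 / a) for nk, a in zip(sizes, alphas)]))
eig = np.linalg.eigvalsh(D @ H @ D)[::-1]      # eigenvalues, sorted descending
print((eig > 1e3).sum(), "of", n, "eigenvalues exceed 1e3")   # at least n - min_k(n_k) = 10
```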
We will show to which extent one can increase several eigenvalues of $(\nabla^2 L)(T_\alpha(\theta))$ by varying $\alpha$.

Definition 6. For each $n \times n$ matrix $A$, we define the vector $\lambda(A)$ of sorted singular values of $A$ with their multiplicity: $\lambda_1(A) \geq \lambda_2(A) \geq \cdots \geq \lambda_n(A)$.

If $A$ is symmetric positive semi-definite, $\lambda(A)$ is also the vector of its sorted eigenvalues.

Theorem 5. For a $(K-1)$-hidden layer rectified neural network of the form

$$y = \phi_{rect}\big(\phi_{rect}(\cdots \phi_{rect}(x \cdot \theta_1) \cdots) \cdot \theta_{K-1}\big) \cdot \theta_K,$$

and critical point $\theta = (\theta_k)_{k \leq K}$ being a minimum for $L$, such that $(\nabla^2 L)(\theta)$ has rank $r = \mathrm{rank}\big((\nabla^2 L)(\theta)\big)$: $\forall M > 0, \exists \alpha$ such that $\big(r - \min_{k \leq K}(n_k)\big)$ eigenvalues of $(\nabla^2 L)(T_\alpha(\theta))$ are greater than $M$.

Proof. For simplicity, we will note $\sqrt{M}$ the principal square root of a symmetric positive-semidefinite matrix $M$. The eigenvalues of $\sqrt{M}$ are the square roots of the eigenvalues of $M$ and are its singular values. By definition, the singular values of $\sqrt{(\nabla^2 L)(\theta)}\,D_\alpha$ are the square roots of the eigenvalues of $D_\alpha (\nabla^2 L)(\theta) D_\alpha$. Without loss of generality, we consider $\min_{k \leq K}(n_k) = n_K$ and choose $\forall k < K, \alpha_k = \beta^{-1}$ and $\alpha_K = \beta^{K-1}$. Since $D_\alpha$ and $\sqrt{(\nabla^2 L)(\theta)}$ are positive symmetric semi-definite matrices, we can apply the multiplicative Horn inequalities (Klyachko, 2000) to the singular values of the product $\sqrt{(\nabla^2 L)(\theta)}\,D_\alpha$, yielding

$$\forall i \leq (n - n_K),\quad \lambda_i\big(D_\alpha (\nabla^2 L)(\theta) D_\alpha\big) \geq \lambda_{i + n_K}\big((\nabla^2 L)(\theta)\big)\,\beta^2.$$

By choosing $\beta > \sqrt{M / \lambda_r\big((\nabla^2 L)(\theta)\big)}$, since we have $\forall i \leq r, \lambda_i\big((\nabla^2 L)(\theta)\big) \geq \lambda_r\big((\nabla^2 L)(\theta)\big) > 0$, we can conclude that

$$\forall i \leq (r - n_K),\quad \lambda_i\big(D_\alpha (\nabla^2 L)(\theta) D_\alpha\big) \geq \lambda_{i + n_K}\big((\nabla^2 L)(\theta)\big)\,\beta^2 \geq \lambda_r\big((\nabla^2 L)(\theta)\big)\,\beta^2 > M.$$

It means that there exists an observationally equivalent parameter with at least $(r - \min_{k \leq K}(n_k))$ arbitrarily large eigenvalues. Since Sagun et al. (2016) seems to suggest that rank deficiency in the Hessian is due to over-parametrization of the model, one could conjecture that $(r - \min_{k \leq K}(n_k))$ can be high for thin and deep neural networks, resulting in a majority of large eigenvalues. Therefore, it would still be possible to obtain an equivalent parameter with large Hessian eigenvalues, i.e. sharp in multiple directions.

# 4.3 ε-sharpness

We have redefined, for $\epsilon > 0$, the ε-sharpness of Keskar et al. (2017) as follows:

$$\frac{\max_{\theta' \in B_2(\epsilon, \theta)} \big(L(\theta') - L(\theta)\big)}{1 + L(\theta)},$$

where $B_2(\epsilon, \theta)$ is the Euclidean ball of radius $\epsilon$ centered on $\theta$. This modification will demonstrate more clearly the issues of that metric as a measure of probable generalization. If we use $K = 2$ and $(\theta_1, \theta_2)$ corresponding to a non-constant function, i.e. $\theta_1 \neq 0$ and $\theta_2 \neq 0$, then we can define $\alpha = \epsilon / \|\theta_1\|_2$. We will now consider the observationally equivalent parameter $T_\alpha(\theta_1, \theta_2) = (\alpha\theta_1, \alpha^{-1}\theta_2)$. Given that $\|\alpha\theta_1\|_2 \leq \epsilon$, we have that $(0, \alpha^{-1}\theta_2) \in B_2(\epsilon, T_\alpha(\theta))$, making the maximum loss in this neighborhood at least as high as that of the best constant-valued function, incurring relatively high sharpness. Figure 4 provides a visualization of the proof.

Figure 4: An illustration of how we exploit non-identifiability and its particular geometry to obtain sharper minima: although $\theta$ is far from the $\theta_2 = 0$ line, the observationally equivalent parameter $\theta'$ is closer. The green and red circles centered on each of these points have the same radius. Best seen with colors.

For rectified neural networks, every minimum is observationally equivalent to a minimum that generalizes as well but with high ε-sharpness. This also applies when using the full-space ε-sharpness of Keskar et al. (2017). We can prove this similarly using the equivalence of norms in finite dimensional vector spaces and the fact that for $c > 0, \epsilon > 0$, $\epsilon \leq \epsilon(c+1)$ (see Keskar et al. (2017)). We have not been able to show a similar problem with the random subspace ε-sharpness used by Keskar et al. (2017), i.e. a restriction of the maximization to a random subspace, which could relate to the notion of wide valleys described by Chaudhari et al. (2017).

By exploiting the non-Euclidean geometry and non-identifiability of rectified neural networks, we were able to demonstrate some of the limits of using typical definitions of a minimum's flatness as a core explanation for generalization.

# 5 Allowing reparametrizations

In the previous Section 4 we explored the case of a fixed parametrization, that of deep rectifier models. In this section we demonstrate a simple observation: if we are allowed to change the parametrization of some function $f$, we can obtain arbitrarily different geometries without affecting how the function evaluates on unseen data. The same holds for reparametrization of the input space. The implication is that the correlation between the geometry of the parameter space (and hence the error surface) and the behavior of a given function is meaningless if not preconditioned on the specific parametrization of the model.

# 5.1 Model reparametrization

One thing that needs to be considered when relating flatness of minima to their probable generalization is that the choice of parametrization and its associated geometry are arbitrary. Since we are interested in finding a prediction function in a given family of functions, no reparametrization of this family should influence generalization of any of these functions. Given a bijection $g$ onto $\Theta$, we can define the new transformed parameter $\eta = g^{-1}(\theta)$. Since $\theta$ and $\eta$ represent the same prediction function in different spaces, they should generalize equally well.

Let us call $L_\eta = L \circ g$ the loss function with respect to the new parameter $\eta$. We generalize the derivation of Subsection 4.2:

$$L_\eta(\eta) = L(g(\eta)) \;\Rightarrow\; (\nabla L_\eta)(\eta) = (\nabla L)(g(\eta))\,(\nabla g)(\eta) \;\Rightarrow\; (\nabla^2 L_\eta)(\eta) = (\nabla g)(\eta)^T\,(\nabla^2 L)(g(\eta))\,(\nabla g)(\eta) + (\nabla L)(g(\eta))\,(\nabla^2 g)(\eta).$$

At a differentiable critical point, we have by definition $(\nabla L)(g(\eta)) = 0$, therefore the transformed Hessian at a critical point becomes

$$(\nabla^2 L_\eta)(\eta) = (\nabla g)(\eta)^T\,(\nabla^2 L)(g(\eta))\,(\nabla g)(\eta).$$

This means that by reparametrizing the problem we can modify to a large extent the geometry of the loss function, so as to have sharp minima of $L$ in $\theta$ correspond to flat minima of $L_\eta$ in $\eta = g^{-1}(\theta)$ and conversely. Figure 5 illustrates that point in one dimension. Several practical (Dinh et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016; Dinh et al., 2016) and theoretical works (Hyvärinen & Pajunen, 1999) show how powerful bijections can be. We can also note that the formula for the transformed Hessian at a critical point also applies if $g$ is not invertible; $g$ would just need to be surjective over $\Theta$ in order to cover exactly the same family of prediction functions

$$\{f_\theta,\ \theta \in \Theta\} = \{f_{g(\eta)},\ \eta \in g^{-1}(\Theta)\}.$$
We show in Appendix A bijections that allow us to perturb the relative flatness between a finite number of minima.
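As a one-dimensional illustration of such a perturbation (using the reparametrization family from Figure 5 below; the loss and constants are our illustrative choices, not from the paper), the transformed curvatures at two minima of equal flatness can be pushed apart arbitrarily:

```python
import numpy as np

L = lambda th: (th**2 - 1.0)**2             # two minima at theta = ±1, same curvature L'' = 8
d2L = lambda th: 12 * th**2 - 4

# reparametrization centered at theta_hat = 1: eta = (|th - th_hat|^2 + b)^a * (th - th_hat)
a, b, th_hat = 2.0, 1e-3, 1.0
deta_dth = lambda th: (np.abs(th - th_hat)**2 + b)**a \
    + 2 * a * (th - th_hat)**2 * (np.abs(th - th_hat)**2 + b)**(a - 1)

# at a critical point, the transformed curvature is L''(theta) / (d eta / d theta)^2
for th in [-1.0, 1.0]:
    print(th, d2L(th) / deta_dth(th)**2)
# the minimum at theta = 1 becomes dramatically sharper in eta (deta_dth ~ b**a there),
# the one at theta = -1 becomes much flatter, although both represent the same functions.
```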
Instances of commonly used reparametrizations are batch normalization (Ioffe & Szegedy, 2015), or the virtual batch normalization variant (Salimans et al., 2016), and weight normalization (Badrinarayanan et al., 2015; Salimans & Kingma, 2016; Arpit et al., 2016). Im et al. (2016) have plotted how the loss function landscape is affected by batch normalization. However, we will focus on the weight normalization reparametrization, as the analysis is simpler, but the intuition with batch normalization is similar. Weight normalization reparametrizes a nonzero weight $w$ as $w = s\,\frac{v}{\|v\|_2}$, with the new parameters being the scale $s$ and the unnormalized weight $v \neq 0$.

Since we can observe that $w$ is invariant to scaling of $v$, reasoning similar to Section 4 can be applied with the simpler transformations $T'_\alpha : v \mapsto \alpha v$ for $\alpha \neq 0$. Moreover, since this transformation is a simpler isotropic scaling, the conclusions that we can draw are actually more powerful with respect to $v$:

• every minimum has infinite volume ε-flatness;

• every minimum is observationally equivalent to an infinitely sharp minimum and to an infinitely flat minimum when considering nonzero eigenvalues of the Hessian;

• every minimum is observationally equivalent to a minimum with arbitrarily low full-space and random subspace ε-sharpness and to a minimum with high full-space ε-sharpness.

This further weakens the link between the flatness of a minimum and the generalization property of the associated prediction function when a specific parameter space has not been specified and explained beforehand.

Figure 5: A one-dimensional example of how much the geometry of the loss function depends on the parameter space chosen. (a) Loss function with the default parametrization; (b) loss function with a reparametrization; (c) loss function with another reparametrization. The x-axis is the parameter value and the y-axis is the loss. The points correspond to a regular grid in the default parametrization. In the default parametrization, all minima have roughly the same curvature, but with a careful choice of reparametrization it is possible to turn a minimum significantly flatter or sharper than the others. Reparametrizations in this figure are of the form $\eta = (|\theta - \hat\theta|^2 + b)^a(\theta - \hat\theta)$ where $b \geq 0$, $a > -\frac{1}{2}$ and $\hat\theta$ is shown with the red vertical line.

# 5.2 Input representation

As we conclude that the notion of flatness of a minimum in the loss function is by itself not sufficient to determine its generalization ability in the general case, we can choose to focus on properties of the prediction function instead. Motivated by some work on adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015) for deep neural networks, one could decide on its generalization property by analyzing the gradient of the prediction function on examples. Intuitively, if the gradient is small on typical points from the distribution, or has a small Lipschitz constant, then a small change in the input should not incur a large change in the prediction.

But this infinitesimal reasoning is once again very dependent on the local geometry of the input space. For an invertible preprocessing $\xi^{-1}$, e.g. feature standardization, whitening or gaussianization (Chen & Gopinath, 2001), we will call $f_\xi = f \circ \xi$ the prediction function on the preprocessed input $u = \xi^{-1}(x)$. We can reproduce the derivation of Section 5 to obtain

$$\frac{\partial f_\xi}{\partial u^T}(u) = \frac{\partial f}{\partial x^T}\big(\xi(u)\big)\,\frac{\partial \xi}{\partial u^T}(u).$$

As we can alter significantly the relative magnitude of the gradient at each point, analyzing the amplitude of the gradient of the prediction function might prove problematic if the choice of the input space has not been explained beforehand. This remark applies in applications involving images, sound or other signals with invariances (Larsen et al., 2015). For example, Theis et al. (2016) show for images how a small drift of one to four pixels can incur a large difference in terms of L2 norm.
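A minimal sketch of this dependence (the prediction function and the diagonal preprocessing are illustrative assumptions): the same function evaluated at the same underlying point has gradients of very different magnitude in the two input representations.

```python
import numpy as np

w = np.array([1.0, 1.0])
f = lambda x: np.tanh(x @ w)                # a fixed prediction function on the raw input

scales = np.array([100.0, 0.01])            # a diagonal, invertible preprocessing: xi(u) = scales * u
f_xi = lambda u: f(scales * u)              # the same function on the preprocessed input

x = np.array([0.3, -0.2]); u = x / scales   # the same point in both representations
eps = 1e-6
grad = lambda h, z: np.array([(h(z + eps*e) - h(z - eps*e)) / (2*eps) for e in np.eye(2)])
print(grad(f, x))                           # gradient w.r.t. the raw input
print(grad(f_xi, u))                        # chain rule multiplies each coordinate by `scales`
```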
# 6 Discussion
It has been observed empirically that minima found by standard deep learning algorithms that generalize well tend to be flatter than found minima that did not generalize well (Chaudhari et al., 2017; Keskar et al., 2017). However, when following several definitions of flatness, we have shown that the conclusion that flat minima should generalize better than sharp ones cannot be applied as is, without further context. Previously used definitions fail to account for the complex geometry of some commonly used deep architectures. In particular, the non-identifiability of the model induced by symmetries allows one to alter the flatness of a minimum without affecting the function it represents. Additionally, the whole geometry of the error surface with respect to the parameters can be changed arbitrarily under different parametrizations. In the spirit of Swirszcz et al. (2016), our work indicates that more care is needed to define flatness in order to avoid degeneracies of the geometry of the model under study. Such a concept cannot be divorced from the particular parametrization of the model or input space.
# Acknowledgements
The authors would like to thank Grzegorz Świrszcz for an insightful discussion of the paper, Harm De Vries, Yann Dauphin, Jascha Sohl-Dickstein and César Laurent for useful discussions about optimization, Danilo Rezende for explaining universal approximation using normalizing flows, and Kyle Kastner, Adriana Romero, Junyoung Chung, Nicolas Ballas, Aaron Courville, George Dahl, Yaroslav Ganin, Prajit Ramachandran, Çağlar Gülçehre, Ahmed Touati and the ICML reviewers for useful feedback.
# References
Bottou, Léon and LeCun, Yann. On-line learning for very large datasets. Applied Stochastic Models in Business and Industry, 21(2):137–151, 2005. URL http://leon.bottou.org/papers/bottou-lecun-2004a.

Bottou, Léon, Curtis, Frank E, and Nocedal, Jorge. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.

Bousquet, Olivier and Elisseeff, André. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499–526, 2002.

Chan, William, Jaitly, Navdeep, Le, Quoc V., and Vinyals, Oriol. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016, pp. 4960–4964. IEEE, 2016. ISBN 978-1-4799-9988-0. doi: 10.1109/ICASSP.2016.7472621. URL http://dx.doi.org/10.1109/ICASSP.2016.7472621.

Chaudhari, Pratik, Choromanska, Anna, Soatto, Stefano, LeCun, Yann, Baldassi, Carlo, Borgs, Christian, Chayes, Jennifer, Sagun, Levent, and Zecchina, Riccardo. Entropy-SGD: Biasing gradient descent into wide valleys. In ICLR'2017, arXiv:1611.01838, 2017.

Chen, Scott Saobing and Gopinath, Ramesh A. Gaussianization. In Leen, T. K., Dietterich, T. G., and Tresp, V. (eds.), Advances in Neural Information Processing Systems 13, pp. 423–429. MIT Press, 2001. URL http://papers.nips.cc/paper/1856-gaussianization.pdf.
Amari, Shun-Ichi. Natural gradient works efficiently in learning. Neural Comput., 10(2), 1998.

Arpit, Devansh, Zhou, Yingbo, Kota, Bhargava U, and Govindaraju, Venu. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. arXiv preprint arXiv:1603.01431, 2016.

Bach, Francis R. and Blei, David M. (eds.). Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, 2015. JMLR.org. URL http://jmlr.org/proceedings/papers/v37/.
Badrinarayanan, Vijay, Mishra, Bamdev, and Cipolla, Roberto. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. In ICLRâ2015, arXiv:1409.0473, 2015.
Bottou, Léon. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pp. 177–186. Springer, 2010.

Bottou, Léon and Bousquet, Olivier. The tradeoffs of large scale learning. In Platt, J.C., Koller, D., Singer, Y., and Roweis, S. (eds.), Advances in Neural Information Processing Systems, volume 20, pp. 161–168. NIPS Foundation (http://books.nips.cc), 2008. URL http://leon.bottou.org/papers/bottou-bousquet-2008.

Cho, Kyunghyun, van Merrienboer, Bart, Gülçehre, Çağlar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Moschitti, Alessandro, Pang, Bo, and Daelemans, Walter (eds.), Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1724–1734. ACL, 2014. ISBN 978-1-937284-96-1. URL http://aclweb.org/anthology/D/D14/D14-1179.pdf.
Choromanska, Anna, Henaff, Mikael, Mathieu, Michaël, Arous, Gérard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In AISTATS, 2015.
Chorowski, Jan K, Bahdanau, Dzmitry, Serdyuk, Dmitriy, Cho, Kyunghyun, and Bengio, Yoshua. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pp. 577–585, 2015.
Collobert, Ronan, Puhrsch, Christian, and Synnaeve, Gabriel. Wav2letter: an end-to-end convnet-based speech recognition system. arXiv preprint arXiv:1609.03193, 2016.
Dauphin, Yann N., Pascanu, Razvan, Gülçehre, Çağlar, Cho, KyungHyun, Ganguli, Surya, and Bengio, Yoshua. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. NIPS, 2014.
Desjardins, Guillaume, Simonyan, Karen, Pascanu, Razvan, and Kavukcuoglu, Koray. Natural neural networks. NIPS, 2015.
Dinh, Laurent, Krueger, David, and Bengio, Yoshua. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Hinton, Geoffrey E and Van Camp, Drew. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pp. 5–13. ACM, 1993.
Hochreiter, Sepp and Schmidhuber, Jürgen. Flat minima. Neural Computation, 9(1):1–42, 1997.

Dinh, Laurent, Sohl-Dickstein, Jascha, and Bengio, Samy. Density estimation using real NVP. In ICLR'2017, arXiv:1605.08803, 2016.

Hyvärinen, Aapo and Pajunen, Petteri. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429–439, 1999.
Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
Im, Daniel Jiwoong, Tao, Michael, and Branson, Kristin. An empirical analysis of deep network loss surfaces. arXiv preprint arXiv:1612.04010, 2016.
Gehring, Jonas, Auli, Michael, Grangier, David, and Dauphin, Yann N. A convolutional encoder model for neural machine translation. arXiv preprint arXiv:1611.02344, 2016.
Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In Aistats, volume 15, pp. 275, 2011.
Gonen, Alon and Shalev-Shwartz, Shai. Fast rates for empirical risk minimization of strict saddle problems. arXiv preprint arXiv:1701.04271, 2017.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Bach & Blei (2015), pp. 448–456. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html.
Jarrett, Kevin, Kavukcuoglu, Koray, LeCun, Yann, et al. What is the best multi-stage architecture for object recognition? In Computer Vision, 2009 IEEE 12th International Conference on, pp. 2146–2153. IEEE, 2009.

Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron C, and Bengio, Yoshua. Maxout networks. ICML (3), 28:1319–1327, 2013.
Keskar, Nitish Shirish, Mudigere, Dheevatsa, Nocedal, Jorge, Smelyanskiy, Mikhail, and Tang, Ping Tak Peter. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR'2017, arXiv:1609.04836, 2017.
Goodfellow, Ian J, Shlens, Jonathon, and Szegedy, Christian. Explaining and harnessing adversarial examples. In ICLR'2015, arXiv:1412.6572, 2015.

Graves, Alex, Mohamed, Abdel-rahman, and Hinton, Geoffrey. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 6645–6649. IEEE, 2013.

Hannun, Awni Y., Case, Carl, Casper, Jared, Catanzaro, Bryan, Diamos, Greg, Elsen, Erich, Prenger, Ryan, Satheesh, Sanjeev, Sengupta, Shubho, Coates, Adam, and Ng, Andrew Y. Deep speech: Scaling up end-to-end speech recognition. CoRR, abs/1412.5567, 2014. URL http://arxiv.org/abs/1412.5567.
Kingma, Diederik P, Salimans, Tim, Jozefowicz, Rafal, Chen, Xi, Sutskever, Ilya, and Welling, Max. Improved variational inference with inverse autoregressive flow. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 4743–4751. Curran Associates, Inc., 2016.
Klyachko, Alexander A. Random walks on symmetric spaces and inequalities for matrix spectra. Linear Algebra and its Applications, 319(1-3):37–59, 2000.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.

Hardt, Moritz, Recht, Ben, and Singer, Yoram. Train faster, generalize better: Stability of stochastic gradient descent. In Balcan, Maria-Florina and Weinberger, Kilian Q. (eds.), Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pp. 1225–1234. JMLR.org, 2016. URL http://jmlr.org/proceedings/papers/v48/hardt16.html.

Lafond, Jean, Vasilache, Nicolas, and Bottou, Léon. About diagonal rescaling applied to neural nets. ICML Workshop on Optimization Methods for the Next Generation of Machine Learning, 2016.
Larsen, Anders Boesen Lindbo, Sønderby, Søren Kaae, and Winther, Ole. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015. URL http: //arxiv.org/abs/1512.09300.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.

Montufar, Guido F, Pascanu, Razvan, Cho, Kyunghyun, and Bengio, Yoshua. On the number of linear regions of deep neural networks. In Advances in neural information processing systems, pp. 2924–2932, 2014.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.
Nesterov, Yurii and Vial, Jean-Philippe. Confidence level solutions for stochastic programming. Automatica, 44(6):1559–1568, 2008.

Neyshabur, Behnam, Salakhutdinov, Ruslan R, and Srebro, Nati. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2422–2430, 2015.
Pascanu, Razvan and Bengio, Yoshua. Revisiting natural gradient for deep networks. ICLR, 2014.
Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V, Norouzi, Mohammad, Macherey, Wolfgang, Krikun, Maxim, Cao, Yuan, Gao, Qin, Macherey, Klaus, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Zhang, Chiyuan, Bengio, Samy, Hardt, Moritz, Recht, Benjamin, and Vinyals, Oriol. Understanding deep learning requires rethinking generalization. In ICLR'2017, arXiv:1611.03530, 2017.
Raghu, Maithra, Poole, Ben, Kleinberg, Jon, Ganguli, Surya, and Sohl-Dickstein, Jascha. On the expressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016.

Rezende, Danilo Jimenez and Mohamed, Shakir. Variational inference with normalizing flows. In Bach & Blei (2015), pp. 1530–1538. URL http://jmlr.org/proceedings/papers/v37/rezende15.html.

Sagun, Levent, Bottou, Léon, and LeCun, Yann. Singularity of the hessian in deep learning. arXiv preprint arXiv:1611.07476, 2016.

Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–901, 2016.

Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pp. 2226–2234, 2016.

Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2013. URL http://arxiv.org/abs/1312.6120.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. In ICLR'2015, arXiv:1409.1556, 2015.

Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112, 2014.

Swirszcz, Grzegorz, Czarnecki, Wojciech Marian, and Pascanu, Razvan. Local minima in training of deep networks. CoRR, abs/1611.06310, 2016.

Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian, and Fergus, Rob. Intriguing properties of neural networks. In ICLR'2014, arXiv:1312.6199, 2014.

Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.

Theis, Lucas, Oord, Aäron van den, and Bethge, Matthias. A note on the evaluation of generative models. In ICLR'2016, arXiv:1511.01844, 2016.

# A Radial transformations

We show an elementary transformation to locally perturb the geometry of a finite-dimensional vector space and therefore affect the relative flatness between a finite number of minima, at least in terms of the spectral norm of the Hessian. We define the function

$$\forall \delta > 0, \forall \rho \in\, ]0, \delta[, \forall (r, \hat r) \in \mathbb{R}_+ \times\, ]0, \delta[,\quad
\psi(r, \hat r, \delta, \rho) = \mathbb{1}(r \notin [0, \delta])\, r \;+\; \mathbb{1}(r \in [0, \hat r])\, \frac{\rho}{\hat r}\, r \;+\; \mathbb{1}(r \in\, ]\hat r, \delta])\left((r - \hat r)\,\frac{\delta - \rho}{\delta - \hat r} + \rho\right),$$

which maps $[0, \hat r]$ linearly onto $[0, \rho]$, maps $]\hat r, \delta]$ affinely onto $]\rho, \delta]$, and is the identity outside $[0, \delta]$; its inverse is $\psi^{-1}(\cdot, \hat r, \delta, \rho) = \psi(\cdot, \rho, \delta, \hat r)$.

For a parameter $\hat\theta \in \Theta$ and $\delta > 0$, $\rho \in\, ]0, \delta[$, $\hat r \in\, ]0, \delta[$, inspired by the radial flows of Rezende & Mohamed (2015), we can define the radial transformations

$$\forall \theta \in \Theta,\quad g^{-1}(\theta) = \frac{\psi\big(\|\theta - \hat\theta\|_2, \hat r, \delta, \rho\big)}{\|\theta - \hat\theta\|_2}\,(\theta - \hat\theta) + \hat\theta,$$

with Jacobian

$$(\nabla g^{-1})(\theta) = \frac{\psi(r, \hat r, \delta, \rho)}{r}\, I_n + \left(\psi'(r, \hat r, \delta, \rho) - \frac{\psi(r, \hat r, \delta, \rho)}{r}\right)\frac{(\theta - \hat\theta)(\theta - \hat\theta)^T}{r^2},$$

with $r = \|\theta - \hat\theta\|_2$ and $\psi'$ the derivative of $\psi$ with respect to $r$.

First, we can observe in Figure 6 that these transformations are purely local: they only have an effect inside the ball $B_2(\hat\theta, \delta)$. Through these transformations, one can arbitrarily perturb the ranking between several minima in terms of flatness, as described in Subsection 5.1.
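A minimal numpy sketch of these radial transformations, assuming the piecewise-linear profile reconstructed above (parameter values are illustrative):

```python
import numpy as np

def psi(r, r_hat, delta, rho):
    """Radial profile: [0, r_hat] -> [0, rho], ]r_hat, delta] -> ]rho, delta], identity outside."""
    return np.where(r <= r_hat, rho * r / r_hat,
           np.where(r <= delta, (r - r_hat) * (delta - rho) / (delta - r_hat) + rho, r))

def g_inv(theta, theta_hat, r_hat, delta, rho):
    r = np.linalg.norm(theta - theta_hat)
    if r == 0.0:
        return theta.copy()
    return psi(r, r_hat, delta, rho) / r * (theta - theta_hat) + theta_hat

theta_hat = np.zeros(2)
for theta in [np.array([0.05, 0.0]), np.array([3.0, 4.0])]:   # inside / outside B2(theta_hat, delta)
    print(theta, "->", g_inv(theta, theta_hat, r_hat=0.1, delta=1.0, rho=0.01))
# the point outside the ball is left untouched; the point inside is pulled toward theta_hat
```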
Figure 6: An example of a radial transformation on a 2-dimensional space: (a) $\psi(r, \hat r, \delta, \rho)$; (b) $g^{-1}(\theta)$. We can see that only the areas in blue and red, i.e. inside $B_2(\hat\theta, \delta)$, are affected. Best seen with colors.

# B Considering the bias parameter

When we consider the bias parameter for a one (hidden) layer neural network, the non-negative homogeneity property translates into

$$y = \phi_{rect}(x \cdot \theta_1 + b_1) \cdot \theta_2 + b_2 = \phi_{rect}(x \cdot \alpha\theta_1 + \alpha b_1) \cdot \alpha^{-1}\theta_2 + b_2,$$

which results in conclusions similar to Section 4.

For a deeper rectified neural network, this property results in

$$y = \phi_{rect}\big(\phi_{rect}(\cdots \phi_{rect}(x \cdot \theta_1 + b_1) \cdots) \cdot \theta_{K-1} + b_{K-1}\big) \cdot \theta_K + b_K = \phi_{rect}\Big(\phi_{rect}\big(\cdots \phi_{rect}(x \cdot \alpha_1\theta_1 + \alpha_1 b_1) \cdots\big) \cdot \alpha_{K-1}\theta_{K-1} + \Big(\prod_{k=1}^{K-1} \alpha_k\Big)\, b_{K-1}\Big) \cdot \alpha_K\theta_K + b_K$$

for $\prod_{k=1}^{K} \alpha_k = 1$. This can decrease the number of eigenvalues of the Hessian that can be arbitrarily influenced.

# C Rectified neural network and Lipschitz continuity

Relative to recent works (Hardt et al., 2016; Gonen & Shalev-Shwartz, 2017) assuming Lipschitz continuity of the loss function to derive uniform stability bounds, we make the following observation:

Theorem 6. For a one-hidden layer rectified neural network of the form

$$y = \phi_{rect}(x \cdot \theta_1) \cdot \theta_2,$$

if $L$ is not constant, then it is not Lipschitz continuous.

Proof. Since a Lipschitz function is necessarily absolutely continuous, we will consider the cases where $L$ is absolutely continuous. First, if $L$ has zero gradient almost everywhere, then $L$ is constant.

Now, if there is a point $\theta$ with non-zero gradient, then by writing

$$(\nabla L)(\theta_1, \theta_2) = \big[(\nabla_{\theta_1} L)(\theta_1, \theta_2)\;\; (\nabla_{\theta_2} L)(\theta_1, \theta_2)\big],$$

we have

$$(\nabla L)(\alpha\theta_1, \alpha^{-1}\theta_2) = \big[\alpha^{-1}(\nabla_{\theta_1} L)(\theta_1, \theta_2)\;\; \alpha(\nabla_{\theta_2} L)(\theta_1, \theta_2)\big].$$

Without loss of generality, we consider $(\nabla_{\theta_1} L)(\theta_1, \theta_2) \neq 0$. Then the norm

$$\big\|(\nabla L)(\alpha\theta_1, \alpha^{-1}\theta_2)\big\|_2^2 = \alpha^{-2}\big\|(\nabla_{\theta_1} L)(\theta_1, \theta_2)\big\|_2^2 + \alpha^2\big\|(\nabla_{\theta_2} L)(\theta_1, \theta_2)\big\|_2^2$$

of the gradient goes to $+\infty$ as $\alpha$ goes to $0$. Therefore, $L$ is not Lipschitz continuous.

This result can be generalized to several other models containing a one-hidden layer rectified neural network, including deeper rectified networks.
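A small finite-difference sketch of this blow-up on a toy regression problem (data, shapes and seed are illustrative assumptions): the loss value is unchanged along the α-path while the gradient norm grows like 1/α.

```python
import numpy as np

rng = np.random.default_rng(4)
phi = lambda z: np.maximum(z, 0.0)
X, Y = rng.normal(size=(32, 4)), rng.normal(size=(32, 2))   # a fixed toy regression set
th1, th2 = rng.normal(size=(4, 6)), rng.normal(size=(6, 2))

def loss(t1, t2):
    return np.mean((phi(X @ t1) @ t2 - Y) ** 2)

def grad_norm(t1, t2, eps=1e-6):
    g = []
    for t in (t1, t2):                         # central finite differences over every weight
        for i in np.ndindex(t.shape):
            t[i] += eps; up = loss(t1, t2)
            t[i] -= 2 * eps; dn = loss(t1, t2)
            t[i] += eps
            g.append((up - dn) / (2 * eps))
    return np.linalg.norm(g)

for alpha in [1.0, 0.1, 0.01]:
    print(alpha, loss(alpha * th1, th2 / alpha), grad_norm(alpha * th1, th2 / alpha))
```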
# D Euclidean distance and input representation
A natural consequence of Subsection 5.2 is that metrics relying on the Euclidean metric, like mean square error or Earth-mover distance, will rank models very differently depending on the input representation chosen. Therefore, the choice of input representation is critical when ranking different models based on these metrics. Indeed, bijective transformations as simple as feature standardization or whitening can change the metric significantly.
On the contrary, rankings resulting from metrics like f-divergence and log-likelihood are not perturbed by bijective transformations, because of the change of variables formula.
"id": "1609.03193"
} |
1703.03664 | Parallel Multiscale Autoregressive Density Estimation | PixelCNN achieves state-of-the-art results in density estimation for natural
images. Although training is fast, inference is costly, requiring one network
evaluation per pixel; O(N) for N pixels. This can be sped up by caching
activations, but still involves generating each pixel sequentially. In this
work, we propose a parallelized PixelCNN that allows more efficient inference
by modeling certain pixel groups as conditionally independent. Our new PixelCNN
model achieves competitive density estimation and orders of magnitude speedup -
O(log N) sampling instead of O(N) - enabling the practical generation of
512x512 images. We evaluate the model on class-conditional image generation,
text-to-image synthesis, and action-conditional video generation, showing that
our model achieves the best results among non-pixel-autoregressive density
models that allow efficient sampling. | http://arxiv.org/pdf/1703.03664 | Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas | cs.CV, cs.NE | null | null | cs.CV | 20170310 | 20170310 | 7 1 0 2
r a M 0 1 ] V C . s c [
1 v 4 6 6 3 0 . 3 0 7 1 : v i X r a
# Parallel Multiscale Autoregressive Density Estimation
# Scott Reed 1 Aäron van den Oord 1 Nal Kalchbrenner 1 Sergio Gómez Colmenarejo 1 Ziyu Wang 1 Dan Belov 1 Nando de Freitas 1
# Abstract
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512 × 512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
"A yellow bird with a black head, orange eyes and an orange bill."
Figure 1. Samples from our model at resolutions from 4 × 4 to 256 × 256, conditioned on text and bird part locations in the CUB data set. See Fig. 4 and the supplement for more examples.
# 1. Introduction
Many autoregressive image models factorize the joint distribution of images into per-pixel factors:

$$p(x_{1:T}) = \prod_{t=1}^{T} p(x_t \mid x_{1:t-1}) \qquad (1)$$

For example PixelCNN (van den Oord et al., 2016b) uses a deep convolutional network with carefully designed filter masking to preserve causal structure, so that all factors in equation 1 can be learned in parallel for a given image. However, a remaining difficulty is that, due to the learned causal structure, inference proceeds sequentially pixel-by-pixel in raster order.

In the naive case, this requires a full network evaluation per pixel. Caching hidden unit activations can be used to reduce the amount of computation per pixel, as in the 1D case for WaveNet (Oord et al., 2016; Ramachandran et al., 2017). However, even with this optimization, generation is still in serial order by pixel.

Ideally we would generate multiple pixels in parallel, which could greatly accelerate sampling. In the autoregressive framework this only works if the pixels are modeled as independent. Thus we need a way to judiciously break weak dependencies among pixels; for example immediately neighboring pixels should not be modeled as independent since they tend to be highly correlated.

Multiscale image generation provides one such way to break weak dependencies. In particular, we can model certain groups of pixels as conditionally independent given a lower resolution image and various types of context information, such as preceding frames in a video. The basic idea is obvious, but nontrivial design problems stand between the idea and a workable implementation.

First, what is the right way to transmit global information from a low-resolution image to each generated pixel of the high-resolution image? Second, which pixels can we generate in parallel? And given that choice, how can we avoid border artifacts when merging sets of pixels that were generated in parallel, blind to one another?

1DeepMind. Correspondence to: Scott Reed <reedscot@google.com>.
In this work we show how a very substantial portion of the spatial dependencies in PixelCNN can be cut, with only modest degradation in performance. Our formulation allows sampling in O(log N) time for N pixels, instead of O(N) as in the original PixelCNN, resulting in orders of magnitude speedup in practice. In the case of video, in which we have access to high-resolution previous frames, we can even sample in O(1) time, with much better performance than comparably-fast baselines.

At a high level, the proposed approach can be viewed as a way to merge per-pixel factors in equation 1. If we merge the factors for, e.g. $x_i$ and $x_j$, then that dependency is "cut", so the model becomes slightly less expressive. However, we get the benefit of now being able to sample $x_i$ and $x_j$ in parallel. If we divide the N pixels into G groups of T pixels each, the joint distribution can be written as a product of the corresponding G factors:

$$p(x_{1:GT}) = \prod_{g=1}^{G} p\big(x_{(g-1)T+1:gT} \mid x_{1:(g-1)T}\big) \qquad (2)$$

Above we assumed that each of the G groups contains exactly T pixels, but in practice the number can vary. In this work, we form pixel groups from successively higher-resolution views of an image, arranged into a sub-sampling pyramid, such that G ∈ O(log N).

In section 3 we describe this group structure implemented as a deep convolutional network. In section 4 we show that the model excels in density estimation and can produce quality high-resolution samples at high speed.

# 2. Related work

Deep neural autoregressive models have been applied to image generation for many years, showing promise as a tractable yet expressive density model (Larochelle & Murray, 2011; Uria et al., 2013). Autoregressive LSTMs have been shown to produce state-of-the-art performance in density estimation on large-scale datasets such as ImageNet (Theis & Bethge, 2015; van den Oord et al., 2016a).

Causally-structured convolutional networks such as PixelCNN (van den Oord et al., 2016b) and WaveNet (Oord et al., 2016) improved the speed and scalability of training. These led to improved autoregressive models for video generation (Kalchbrenner et al., 2016b) and machine translation (Kalchbrenner et al., 2016a).

Non-autoregressive convolutional generator networks have been successful and widely adopted for image generation as well. Instead of maximizing likelihood, Generative Adversarial Networks (GANs) train a generator network to fool a discriminator network adversary (Goodfellow et al., 2014). These networks have been used in a wide variety of conditional image generation schemes, such as text- and spatial-structure-to-image (Mansimov et al., 2015; Reed et al., 2016b;a; Wang & Gupta, 2016).

The addition of multiscale structure has also been shown to be useful in adversarial networks. Denton et al. (2015) used a Laplacian pyramid to generate images in a coarse-to-fine manner. Zhang et al. (2016) composed a low-resolution and high-resolution text-conditional GAN, yielding higher quality 256 × 256 bird and flower images.

Generator networks can be combined with a trained model, such as an image classifier or captioning network, to generate high-resolution images via optimization and sampling procedures (Nguyen et al., 2016). Wu et al. (2017) state that it is difficult to quantify GAN performance, and propose Monte Carlo methods to approximate the log-likelihood of GANs on MNIST images.

Both auto-regressive and non auto-regressive deep networks have recently been applied successfully to image super-resolution. Shi et al. (2016) developed a sub-pixel convolutional network well-suited to this problem. Dahl et al. (2017) use a PixelCNN as a prior for image super-resolution with a convolutional neural network. Johnson et al. (2016) developed a perceptual loss function useful for both style transfer and super-resolution. GAN variants have also been successful in this domain (Ledig et al., 2016; Sønderby et al., 2017).

Several other deep, tractable density models have recently been developed. Real NVP (Dinh et al., 2016) learns a mapping from images to a simple noise distribution, which is by construction trivially invertible. It is built from smaller invertible blocks called coupling layers whose Jacobian is lower-triangular, and also has a multiscale structure. Inverse Autoregressive Flows (Kingma & Salimans, 2016) use autoregressive structures in the latent space to learn more flexible posteriors for variational auto-encoders. Autoregressive models have also been combined with VAEs as decoder models (Gulrajani et al., 2016).

The original PixelRNN paper (van den Oord et al., 2016a) actually included a multiscale autoregressive version, in which PixelRNNs or PixelCNNs were trained at multiple resolutions. The network producing a given resolution image was conditioned on the image at the next lower resolution. This work is similarly motivated by the usefulness of multiscale image structure (and the very long history of coarse-to-fine modeling).

Our novel contributions in this work are (1) asymptotically and empirically faster inference by modeling conditional independence structure, (2) scaling to much higher resolution, (3) evaluating the model on a diverse set of challenging benchmarks including class-, text- and structure-conditional image generation and video generation.
Figure 2. Example pixel grouping and ordering for a 4 × 4 image. The upper-left corners form group 1, the upper-right group 2, and so on. For clarity we only use arrows to indicate immediately-neighboring dependencies, but note that all pixels in preceding groups can be used to predict all pixels in a given group. For example all pixels in group 2 can be used to predict pixels in group 4. In our image experiments pixels in group 1 originate from a lower-resolution image. For video, they are generated given the previous frames.
Figure 3. A simple form of causal upscaling network, mapping from a K × K image to K × 2K. The same procedure can be applied in the vertical direction to produce a 2K × 2K image. In reference to figure 2, the leftmost images could be considered "group 1" pixels; i.e. the upper-left corners. The network shown here produces "group 2" pixels; i.e. the upper-right corners, completing the top-corners half of the image. (A) In the simplest version, a deep convolutional network (in our case ResNet) directly produces the right image from the left image, and merges column-wise. (B) A more sophisticated version extracts features from a convolutional net, splits the feature map into spatially contiguous blocks, and feeds these in parallel through a shallow PixelCNN. The result is then merged as in (A).
# 3. Model
The main design principle that we follow in building the model is a coarse-to-fine ordering of pixels. Successively higher-resolution frames are generated conditioned on the previous resolution (see for example Figure 1). Pixels are grouped so as to exploit spatial locality at each resolution, which we describe in detail below.
Figure 2 shows how we divide an image into disjoint groups of pixels, with autoregressive structure among the groups. The key property to notice is that no two adjacent pixels of the high-resolution image are in the same group. Also, pixels can depend on other pixels below and to the right, which would have been inaccessible in the standard PixelCNN. Each group of pixels corresponds to a factor in the joint distribution of equation 2.

Concretely, to create groups we tile the image with 2 × 2 blocks. The corners of these 2 × 2 blocks form the four pixel groups at a given scale; i.e. upper-left, upper-right, lower-left, lower-right. Note that some pairs of pixels both within each block and also across blocks can still be dependent. These additional dependencies are important for capturing local textures and avoiding border artifacts.

The training objective is to maximize log P(x; θ). Since the joint distribution factorizes over pixel groups and scales, the training can be trivially parallelized.

# 3.1. Network architecture

Figure 3 shows an instantiation of one of these factors as a neural network. Similar to the case of PixelCNN, at training time losses and gradients for all of the pixels within a group can be computed in parallel. At test time, inference proceeds sequentially over pixel groups, in parallel within each group. Also as in PixelCNN, we model the color channel dependencies - i.e. green sees red, blue sees red and green - using channel masking.
In the case of type-A upscaling networks (See Figure 3A), sampling each pixel group thus requires 3 network evalua- tions 1. In the case of type-B upscaling, the spatial feature
1However, one could also use a discretized mixture of logistics as output instead of a softmax as in Salimans et al. (2017), in which case only one network evaluation is needed.
map for predicting a group of pixels is divided into contiguous M × M patches for input to a shallow PixelCNN (see Figure 3B). This entails M² very small network evaluations, for each color channel. We used M = 4, and the shallow PixelCNN weights are shared across patches.

The division into non-overlapping patches may appear to risk border artifacts when merging. However, this does not occur for several reasons. First, each predicted pixel is directly adjacent to several context pixels fed into the upscaling network. Second, the generated patches are not directly adjacent in the 2K × 2K output image; there is always a row or column of pixels on the border of any pair.

Note that the only learnable portions of the upscaling module are (1) the ResNet encoder of context pixels, and (2) the shallow PixelCNN weights in the case of type-B upscaling. The "merge" and "split" operations shown in Figure 3 only marshal data and are not associated with parameters.

Given the first group of pixels, the rest of the groups at a given scale can be generated autoregressively. The first group of pixels can be modeled using the same approach as detailed above, recursively, down to a base resolution at which we use a standard PixelCNN. At each scale, the number of evaluations is O(1), and the resolution doubles after each upscaling, so the overall complexity is O(log N) to produce images with N pixels.
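A small sketch of one possible such grouping, a hypothetical recursion of the 2 × 2 corner scheme of Figure 2 down to a 1 × 1 base (our illustrative variant, not the paper's exact implementation): the number of groups grows as 1 + 3·log2(size), i.e. O(log N).

```python
import numpy as np

def pixel_groups(size):
    """Group index for each pixel of a size x size image, coarse-to-fine.
    Group 1 is the base pixel; each scale adds 3 groups (the other corners
    of the 2x2 blocks), doubling resolution."""
    g = np.ones((size, size), dtype=int)
    step, group = size, 1
    while step >= 2:
        h = step // 2
        g[0::step, h::step] = group + 1     # upper-right corners at this scale
        g[h::step, 0::step] = group + 2     # lower-left corners
        g[h::step, h::step] = group + 3     # lower-right corners
        group += 3
        step //= 2
    return g

print(pixel_groups(4))
# [[1 5 2 5]
#  [6 7 6 7]
#  [3 5 4 5]
#  [6 7 6 7]]   -> 7 = 1 + 3*log2(4) groups
```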
# 3.2. Conditional image modeling

Given some context information c, such as a text description, a segmentation, or previous video frames, we maximize the conditional likelihood log P(x|c; θ). Each factor in equation 2 simply adds c as an additional conditioning variable. The upscaling neural network corresponding to each factor takes c as an additional input.

For encoding text we used a character-CNN-GRU as in (Reed et al., 2016a). For spatially structured data such as segmentation masks we used a standard convolutional network. For encoding previous frames in a video we used a ConvLSTM as in (Kalchbrenner et al., 2016b).
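A minimal sketch of this conditional factorization is below; `base_pixelcnn` and `upscalers` are hypothetical objects assumed to expose a `log_prob` method, so this is structural pseudocode rather than a runnable implementation:

```python
def conditional_log_likelihood(pixel_groups, c):
    # Every factor in the multiscale decomposition receives the context c.
    logp = base_pixelcnn.log_prob(pixel_groups[0], context=c)
    for g, upscaler in enumerate(upscalers, start=1):
        logp += upscaler.log_prob(pixel_groups[g],
                                  preceding=pixel_groups[:g],
                                  context=c)
    return logp
```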
# 4. Experiments

# 4.1. Datasets

We evaluate our model on ImageNet, Caltech-UCSD Birds (CUB), the MPII Human Pose dataset (MPII), the Microsoft Common Objects in Context dataset (MS-COCO), and the Google Robot Pushing dataset.

• For ImageNet (Deng et al., 2009), we trained a class-conditional model using the 1000 leaf node classes.

• CUB (Wah et al., 2011) contains 11,788 images across 200 bird species, with 10 captions per image. As conditioning information we used a 32 × 32 spatial encoding of the 15 annotated bird part locations.

• MPII (Andriluka et al., 2014) has around 25K images of 410 human activities, with 3 captions per image. We kept only the images depicting a single person, and cropped the image centered around the person, leaving us about 14K images. We used a 32 × 32 encoding of the 17 annotated human part locations.

• MS-COCO (Lin et al., 2014) has 80K training images with 5 captions per image. As conditioning we used the 80-class segmentation scaled to 32 × 32.

• Robot Pushing (Finn et al., 2016) contains sequences of 20 frames of size 64 × 64 showing a robotic arm pushing objects in a basket. There are 50,000 training sequences and a validation set with the same objects but different arm trajectories. One test set involves a subset of the objects seen during training and another involving novel objects, both captured on an arm and camera viewpoint not seen during training.

All models for ImageNet, CUB, MPII and MS-COCO were trained using RMSprop with hyperparameter ε = 1e−8, with batch size 128 for 200K steps. The learning rate was set initially to 1e−4 and decayed to 1e−5.

For all of the samples we show, the queries are drawn from the validation split of the corresponding data set. That is, the captions, key points, segmentation masks, and low-resolution images for super-resolution have not been seen by the model during training.

When we evaluate negative log-likelihood, we only quantize pixel values to [0, ..., 255] at the target resolution, not separately at each scale. The lower resolution images are then created by sub-sampling this quantized image.

# 4.2. Text and location-conditional generation
In this section we show results for CUB, MPII and MS-COCO. For each dataset we trained type-B upscaling networks with 12 ResNet layers and 4 PixelCNN layers, with 128 hidden units per layer. The base resolution at which we train a standard PixelCNN was set to 4 × 4.

To encode the captions we padded to 201 characters, then fed them into a character-level CNN with three convolutional layers, followed by a GRU and average pooling over time. Upscaling networks to 8 × 8, 16 × 16 and 32 × 32 shared a single text encoder. For higher-resolution upscaling networks we trained separate text encoders. In principle all upscalers could share an encoder, but we trained them separately to save memory and time.
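A PyTorch sketch of such a character-level encoder is shown below; the layer widths and kernel sizes are illustrative guesses rather than the paper's exact hyperparameters:

```python
import torch
import torch.nn as nn

class CaptionEncoder(nn.Module):
    """Characters padded to 201, three 1-D convolutions, a GRU, then
    average pooling over time, mirroring the description above."""

    def __init__(self, vocab_size=128, emb=48, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.convs = nn.Sequential(
            nn.Conv1d(emb, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, chars):                  # chars: (batch, 201) int64
        h = self.embed(chars).transpose(1, 2)  # (batch, emb, 201) for Conv1d
        h = self.convs(h).transpose(1, 2)      # (batch, 201, hidden)
        out, _ = self.gru(h)
        return out.mean(dim=1)                 # average pool over time
```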
Figure 4. Text-to-image bird synthesis. The leftmost column shows the entire sampling process, starting by generating 4 × 4 images, followed by six upscaling steps, to produce a 256 × 256 image. The right column shows the final sampled images for several other queries. For each query the associated part keypoints and caption are shown to the left of the samples.
Figure 5. Text-to-image human synthesis. The leftmost column again shows the sampling process, and the right column shows the final frame for several more examples. We find that the samples are diverse and usually match the color and position constraints.
For CUB and MPII, we have body part keypoints for birds and humans, respectively. We encode these into a 32 Ã 32 Ã P binary feature map, where P is the number of parts; 17 for MPII and 15 for CUB. A 1 indicates the part is visible, and 0 indicates the part is not visible. For MS-COCO, we resize the class segmentation mask to 32 Ã 32 Ã 80.
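Building the binary keypoint map is straightforward; the following numpy sketch assumes a simplified annotation format of (row, col, visible) triples in normalized coordinates, which is our assumption rather than the datasets' native format:

```python
import numpy as np

def keypoints_to_feature_map(parts, size=32, P=15):
    """Encode visible part locations as a (size, size, P) binary map."""
    fmap = np.zeros((size, size, P), dtype=np.float32)
    for p, (r, c, visible) in enumerate(parts):
        if visible:
            i = min(int(r * size), size - 1)
            j = min(int(c * size), size - 1)
            fmap[i, j, p] = 1.0  # a 1 marks a visible part at this cell
    return fmap
```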
For all datasets, we then encode these spatial features using a 12-layer ResNet. These features are then depth-concatenated with the text encoding and resized with bilinear interpolation to the spatial size of the image. If the target resolution for an upscaler network is higher than 32 × 32, these conditioning features are randomly cropped along with the target image to a 32 × 32 patch. Because the network is fully convolutional, the network can still generate the full resolution at test time, but we can massively save on memory and computation during training.
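One plausible reading of the aligned cropping is sketched below: after the conditioning features have been resized to the image's spatial size, both arrays are cropped at the same random location. This is a sketch under that assumption, not the paper's exact implementation:

```python
import numpy as np

def aligned_random_crop(target, cond, crop=32):
    """Crop the target image and its spatially aligned conditioning
    features at the same random 32 x 32 window."""
    H, W = target.shape[:2]
    i = np.random.randint(H - crop + 1)
    j = np.random.randint(W - crop + 1)
    return (target[i:i + crop, j:j + crop],
            cond[i:i + crop, j:j + crop])
```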
Figure 4 shows examples of text- and keypoint-to-bird image synthesis. Figure 5 shows examples of text- and keypoint-to-human image synthesis. Figure 6 shows examples of text- and segmentation-to-image synthesis.
Figure 6. Text and segmentation-to-image synthesis. The left column shows the full sampling trajectory from 4 Ã 4 to 256 Ã 256. The caption queries are shown beneath the samples. Beneath each image we show the image masked with the largest object in each scene; i.e. only the foreground pixels in the sample are shown. More samples with all categories masked are included in the supplement.
                      CUB                      MPII                     MS-COCO
                      Train   Val    Test     Train   Val    Test     Train   Val    Test
PixelCNN              2.91    2.93   2.92     2.90    2.92   2.92     3.07    3.08   -
Multiscale PixelCNN   2.98    2.99   2.98     2.91    3.03   3.03     3.14    3.16   -

Table 1. Text and structure-to-image negative conditional log-likelihood in nats per sub-pixel.
Quantitatively, the Multiscale PixelCNN results are not far from those obtained using the original PixelCNN (Reed et al., 2016c), as shown in Table 1. In addition, we increased the sample resolution by 8×. Qualitatively, the sample quality appears to be on par, but with much greater realism due to the higher resolution.
# 4.3. Action-conditional video generation

In this section we present results on Robot Pushing videos. All models were trained to perform future frame prediction conditioned on 2 starting frames and also on the robot arm actions and state, which are each 5-dimensional vectors.

We trained two versions of the model, both using type-A upscaling networks (see Figure 3). The first is designed to sample in O(T) time, for T video frames; that is, the number of network evaluations per frame is constant with respect to the number of pixels.

The motivation for training the O(T) model is that previous frames in a video provide very detailed cues for predicting the next frame, so that our pixel groups could be conditionally independent even without access to a low-resolution image. Without the need to upscale from a low-resolution image, we can produce "group 1" pixels - i.e. the upper-left corner group - directly by conditioning on previous frames. Then a constant number of network evaluations are needed to sample the next three pixel groups at the final scale.

The second version is our multi-step upscaler used in previous experiments, conditioned on both previous frames and robot arm state and actions. The complexity of sampling from this model is O(T log N), because at every time step the upscaling procedure must be run, taking O(log N) time.

The models were trained for 200K steps with batch size 64, using the RMSprop optimizer with centering and ε = 1e−8. The learning rate was initialized to 1e−4 and decayed by factor 0.3 after 83K steps and after 113K steps. For the O(T) model we used a mixture of discretized logistic outputs (Salimans et al., 2017) and for the O(T log N) model we used a softmax output.
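The difference between the two samplers can be summarized by the following structural sketch (not runnable as-is: all function names here are hypothetical stand-ins, and `frame_ctx` bundles previous frames with the arm state and actions):

```python
def sample_video_OT(frame_ctx, T):
    # O(T): constant network evaluations per frame. Group 1 is produced
    # directly from previous frames; the remaining three groups at the
    # final scale are filled in autoregressively.
    frames = []
    for _ in range(T):
        groups = [predict_first_group(frame_ctx)]
        for _ in range(3):
            groups.append(predict_next_group(groups, frame_ctx))
        frames.append(merge_groups(groups))
        frame_ctx = update_context(frame_ctx, frames[-1])
    return frames

def sample_video_OTlogN(frame_ctx, T, target_res, base_res=4):
    # O(T log N): the full multiscale upscaling procedure runs per frame.
    frames = []
    for _ in range(T):
        frame = sample_base_pixelcnn(base_res, frame_ctx)
        res = base_res
        while res < target_res:
            frame = upscale_once(frame, frame_ctx)
            res *= 2
        frames.append(frame)
        frame_ctx = update_context(frame_ctx, frame)
    return frames
```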
Table 2 compares two variants of our model with the original VPN. Compared to the O(T) baseline - a convolutional LSTM model without spatial dependencies - our O(T) model performs dramatically better. On the validation set, in which the model needs to generalize to novel combinations of objects and arm trajectories, the O(T log N) model does much better than our O(T) model, although not as well as the original O(T N) model.
[Figure 7 panels: 8×8 → 128×128, 8×8 → 512×512, 16×16 → 128×128, 32×32 → 128×128.]
Figure 7. Upscaling low-resolution images to 128 Ã 128 and 512 Ã 512. In each group of images, the left column is made of real images, and the right columns of samples from the model.
[Figure 8 panel labels: Monastery, Cardoon.]
Figure 8. Class-conditional 128 Ã 128 samples from a model trained on ImageNet.
On the testing sets, we observed that the O(T) model performed as well as on the validation set, but the O(T log N) model showed a drop in performance. However, this drop does not occur due to the presence of novel objects (in fact that setting actually yields better results), but due to the novel arm and camera configuration used during testing.² It appears that the O(T log N) model may have overfit to the background details and camera position of the 10 training arms, but not necessarily to the actual arm and object motions. It should be possible to overcome this effect with better regularization and perhaps data augmentation such as mirroring and jittering frames, or simply training on data with more diverse camera positions.
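Such augmentation is simple to add in training. A minimal numpy sketch follows; the jitter magnitude is an illustrative choice of ours, not from the paper, and a wrap-around roll stands in for a proper translation jitter:

```python
import numpy as np

def augment_clip(frames, max_jitter=4):
    """Mirroring and jitter augmentation of a (T, H, W, C) video clip."""
    if np.random.rand() < 0.5:
        frames = frames[:, :, ::-1]                      # horizontal mirror
    dy, dx = np.random.randint(-max_jitter, max_jitter + 1, size=2)
    return np.roll(frames, shift=(dy, dx), axis=(1, 2))  # spatial jitter
```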
² From communication with the Robot Pushing dataset author.
The supplement contains example videos generated on the validation set arm trajectories from our O(T log N) model. We also trained 64 → 128 and 128 → 256 upscalers conditioned on a low-resolution and a previous high-resolution frame, so that we can produce 256 × 256 videos.
# 4.4. Class-conditional generation
To compare against other image density models, we trained our Multiscale PixelCNN on ImageNet. We used type-B upscaling networks (see Figure 3) with 12 ResNet (He et al., 2016) layers and 4 PixelCNN layers, with 256 hidden units per layer. For all PixelCNNs in the model, we used the same architecture as in (van den Oord et al., 2016b). We generated images with a base resolution of 8 × 8 and trained four upscaling networks to produce up to 128 × 128 samples.
Model             Tr     Val    Ts-seen   Ts-novel
O(T) baseline     -      2.06   2.08      2.07
O(TN) VPN         -      0.62   0.64      0.64
O(T) VPN          1.03   1.04   1.04      1.04
O(T log N) VPN    0.74   0.74   1.06      0.97
Table 2. Robot videos neg. log-likelihood in nats per sub-pixel. "Tr" is the training set, "Ts-seen" is the test set with novel arm and camera configuration and previously seen objects, and "Ts-novel" is the same as "Ts-seen" but with novel objects.
Model                         scale   time     speedup
O(N) PixelCNN                 32      120.0    1.0×
O(log N) PixelCNN             32      1.17     102×
O(log N) PixelCNN, in-graph   32      1.14     105×
O(TN) VPN                     64      1929.8   1.0×
O(T) VPN                      64      0.38     5078×
O(T) VPN, in-graph            64      0.37     5215×
O(T log N) VPN                64      3.82     505×
O(T log N) VPN, in-graph      64      3.07     628×

Table 4. Sampling speed of several models in seconds per frame on an Nvidia Quadro M4000 GPU. The top three rows were measured on 32 × 32 ImageNet, with batch size of 30. The bottom five rows were measured on generating 64 × 64 videos of 18 frames each, averaged over 5 videos.

At scales 64 × 64 and above, during training we randomly cropped the image to 32 × 32. This accelerates training but does not pose a problem at test time because all of the networks are fully convolutional.
Table 3 shows the results. On both 32 × 32 and 64 × 64 ImageNet it achieves significantly better likelihood scores than have been reported for any non-pixel-autoregressive density models, such as ConvDRAW and Real NVP, that also allow efficient sampling.
Figure 7 shows upscaling starting from ground-truth images of size 8 × 8, 16 × 16 and 32 × 32. We observe the largest diversity of samples in terms of global structure when starting from 8 × 8, but less realistic results due to the more challenging nature of the problem. Upscaling starting from 32 × 32 results in much more realistic images. Here the diversity is apparent in the samples (as in the data, conditioned on low-resolution) in the local details such as the dog's fur patterns or the frog's eye contours.
Of course, performance of these approaches varies considerably depending on the implementation details, especially in the design and capacity of the deep neural networks used. But it is notable that the very simple and direct approach developed here can surpass the state-of-the-art among fast-sampling density models.
# 4.5. Sampling time comparison
As expected, we observe a very large speedup of our model compared to sampling from a standard PixelCNN at the same resolution (see Table 4). Even at 32 × 32 we observe two orders of magnitude speedup, and the speedup is greater for higher resolution.
Model         32            64            128
PixelRNN      3.86 (3.83)   3.64 (3.57)   -
PixelCNN      3.83 (3.77)   3.57 (3.48)   -
Real NVP      4.28 (4.26)   3.98 (3.75)   -
Conv. DRAW    4.40 (4.35)   4.10 (4.04)   -
Ours          3.95 (3.92)   3.70 (3.67)   3.55 (3.42)
Since our model only requires O(log N) network evaluations to sample, we can fit the entire computation graph for sampling into memory, for reasonable batch sizes. In-graph computation in TensorFlow can further improve the speed of both image and video generation, due to reduced overhead by avoiding repeated calls to sess.run.
Table 3. ImageNet negative log-likelihood in bits per sub-pixel at 32 Ã 32, 64 Ã 64 and 128 Ã 128 resolution.
In Figure 8 we show examples of diverse 128 Ã 128 class conditional image generation.
Since our model has a PixelCNN at the lowest resolution, it can also be accelerated by caching PixelCNN hidden unit activations, as recently implemented by Ramachandran et al. (2017). This could allow one to use higher-resolution base PixelCNNs without sacrificing speed.
Interestingly, the model often produced quite realistic bird images from scratch when trained on CUB, and these samples looked more realistic than any animal image generated by our ImageNet models. One plausible explanation for this difference is a lack of model capacity; a single network modeling the 1000 very diverse ImageNet categories can devote only very limited capacity to each one, compared to a network that only needs to model birds. This suggests that finding ways to increase capacity without slowing down training or sampling could be a promising direction.
# 5. Conclusions
In this paper, we developed a parallelized, multiscale version of PixelCNN. It achieves competitive density estimation results on CUB, MPII, MS-COCO, ImageNet, and Robot Pushing videos, surpassing all other density models that admit fast sampling. Qualitatively, it can achieve compelling results in text-to-image synthesis and video generation, as well as diverse super-resolution from very small images all the way to 512 × 512.
Many more samples from all of our models can be found in the appendix and supplementary material.
# References
Andriluka, Mykhaylo, Pishchulin, Leonid, Gehler, Peter, and Schiele, Bernt. 2d human pose estimation: New benchmark and state of the art analysis. In CVPR, pp. 3686–3693, 2014.

Dahl, Ryan, Norouzi, Mohammad, and Shlens, Jonathon. Pixel recursive super resolution. arXiv preprint arXiv:1702.00783, 2017.

Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

Denton, Emily L, Chintala, Soumith, Szlam, Arthur, and Fergus, Rob. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, pp. 1486–1494, 2015.

Dinh, Laurent, Sohl-Dickstein, Jascha, and Bengio, Samy. Density estimation using Real NVP. In NIPS, 2016.

Finn, Chelsea, Goodfellow, Ian, and Levine, Sergey. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.

Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., and Bengio, Yoshua. Generative adversarial nets. In NIPS, 2014.

Gulrajani, Ishaan, Kumar, Kundan, Ahmed, Faruk, Taiga, Adrien Ali, Visin, Francesco, Vazquez, David, and Courville, Aaron. PixelVAE: A latent variable model for natural images. arXiv preprint arXiv:1611.05013, 2016.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Identity mappings in deep residual networks. In ECCV, pp. 630–645, 2016.

Johnson, Justin, Alahi, Alexandre, and Fei-Fei, Li. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.

Kalchbrenner, Nal, Espeholt, Lasse, Simonyan, Karen, Oord, Aaron van den, Graves, Alex, and Kavukcuoglu, Koray. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016a.

Kalchbrenner, Nal, Oord, Aaron van den, Simonyan, Karen, Danihelka, Ivo, Vinyals, Oriol, Graves, Alex, and Kavukcuoglu, Koray. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016b.

Kingma, Diederik P and Salimans, Tim. Improving variational inference with inverse autoregressive flow. In NIPS, 2016.

Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. In AISTATS, 2011.

Ledig, Christian, Theis, Lucas, Huszár, Ferenc, Caballero, Jose, Cunningham, Andrew, Acosta, Alejandro, Aitken, Andrew, Tejani, Alykhan, Totz, Johannes, Wang, Zehan, and Shi, Wenzhe. Photo-realistic single image super-resolution using a generative adversarial network. 2016.

Lin, Tsung-Yi, Maire, Michael, Belongie, Serge, Hays, James, Perona, Pietro, Ramanan, Deva, Dollár, Piotr, and Zitnick, C Lawrence. Microsoft COCO: Common objects in context. In ECCV, pp. 740–755, 2014.

Mansimov, Elman, Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Generating images from captions with attention. In ICLR, 2015.

Nguyen, Anh, Yosinski, Jason, Bengio, Yoshua, Dosovitskiy, Alexey, and Clune, Jeff. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.

Oord, Aaron van den, Dieleman, Sander, Zen, Heiga, Simonyan, Karen, Vinyals, Oriol, Graves, Alex, Kalchbrenner, Nal, Senior, Andrew, and Kavukcuoglu, Koray. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Ramachandran, Prajit, Paine, Tom Le, Khorrami, Pooya, Babaeizadeh, Mohammad, Chang, Shiyu, Zhang, Yang, Hasegawa-Johnson, Mark, Campbell, Roy, and Huang, Thomas. Fast generation for convolutional autoregressive models. 2017.

Reed, Scott, Akata, Zeynep, Mohan, Santosh, Tenka, Samuel, Schiele, Bernt, and Lee, Honglak. Learning what and where to draw. In NIPS, 2016a.

Reed, Scott, Akata, Zeynep, Yan, Xinchen, Logeswaran, Lajanugen, Schiele, Bernt, and Lee, Honglak. Generative adversarial text-to-image synthesis. In ICML, 2016b.

Reed, Scott, van den Oord, Aäron, Kalchbrenner, Nal, Bapst, Victor, Botvinick, Matt, and de Freitas, Nando. Generating interpretable images with controllable structure. Technical report, 2016c.

Salimans, Tim, Karpathy, Andrej, Chen, Xi, and Kingma, Diederik P. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.

Shi, Wenzhe, Caballero, Jose, Huszár, Ferenc, Totz, Johannes, Aitken, Andrew P, Bishop, Rob, Rueckert, Daniel, and Wang, Zehan. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, 2016.

Sønderby, Casper Kaae, Caballero, Jose, Theis, Lucas, Shi, Wenzhe, and Huszár, Ferenc. Amortised MAP inference for image super-resolution. 2017.

Theis, L. and Bethge, M. Generative image modeling using spatial LSTMs. In NIPS, 2015.

Uria, Benigno, Murray, Iain, and Larochelle, Hugo. RNADE: The real-valued neural autoregressive density-estimator. In NIPS, 2013.

van den Oord, Aäron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. In ICML, pp. 1747–1756, 2016a.

van den Oord, Aäron, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with PixelCNN decoders. In NIPS, 2016b.

Wah, Catherine, Branson, Steve, Welinder, Peter, Perona, Pietro, and Belongie, Serge. The Caltech-UCSD Birds-200-2011 dataset. 2011.

Wang, Xiaolong and Gupta, Abhinav. Generative image modeling using style and structure adversarial networks. In ECCV, pp. 318–335, 2016.

Wu, Yuhuai, Burda, Yuri, Salakhutdinov, Ruslan, and Grosse, Roger. On the quantitative analysis of decoder-based generative models. 2017.

Zhang, Han, Xu, Tao, Li, Hongsheng, Zhang, Shaoting, Huang, Xiaolei, Wang, Xiaogang, and Metaxas, Dimitris. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint arXiv:1612.03242, 2016.

# 6. Appendix

Below we show additional samples.
Figure 9. Additional CUB samples randomly chosen from the validation set.
Figure 10. Additional MPII samples randomly chosen from the validation set.
Figure 11. Additional MS-COCO samples randomly chosen from the validation set.
Figure 12. Robot pushing videos at 64 Ã 64, 128 Ã 128 and 256 Ã 256.
Figure 13. Label-conditional 128 Ã 128 ImageNet samples.
Figure 14. Additional upscaling samples (32×32 → 512×512 and 8×8 → 512×512).
"id": "1701.05517"
} |
1703.03400 | Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks | We propose an algorithm for meta-learning that is model-agnostic, in the
sense that it is compatible with any model trained with gradient descent and
applicable to a variety of different learning problems, including
classification, regression, and reinforcement learning. The goal of
meta-learning is to train a model on a variety of learning tasks, such that it
can solve new learning tasks using only a small number of training samples. In
our approach, the parameters of the model are explicitly trained such that a
small number of gradient steps with a small amount of training data from a new
task will produce good generalization performance on that task. In effect, our
method trains the model to be easy to fine-tune. We demonstrate that this
approach leads to state-of-the-art performance on two few-shot image
classification benchmarks, produces good results on few-shot regression, and
accelerates fine-tuning for policy gradient reinforcement learning with neural
network policies. | http://arxiv.org/pdf/1703.03400 | Chelsea Finn, Pieter Abbeel, Sergey Levine | cs.LG, cs.AI, cs.CV, cs.NE | ICML 2017. Code at https://github.com/cbfinn/maml, Videos of RL
results at https://sites.google.com/view/maml, Blog post at
http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/ | null | cs.LG | 20170309 | 20170718 | 7 1 0 2
l u J 8 1 ] G L . s c [
3 v 0 0 4 3 0 . 3 0 7 1 : v i X r a
# Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
# Chelsea Finn 1 Pieter Abbeel 1 2 Sergey Levine 1
# Abstract
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
# 1. Introduction
Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from only a few examples, and continuing to adapt as more data becomes available. This kind of fast and flexible learning is challenging, since the agent must integrate its prior experience with a small amount of new information, while avoiding overfitting to the new data. Furthermore, the form of prior experience and new data will depend on the task. As such, for the greatest applicability, the mechanism for learning to learn (or meta-learning) should be general to the task and the form of computation required to complete the task.
1 University of California, Berkeley  2 OpenAI. Correspondence to: Chelsea Finn <cbfinn@eecs.berkeley.edu>.
In this work, we propose a meta-learning algorithm that is general and model-agnostic, in the sense that it can be directly applied to any learning problem and model that is trained with a gradient descent procedure. Our focus is on deep neural network models, but we illustrate how our approach can easily handle different architectures and different problem settings, including classification, regression, and policy gradient reinforcement learning, with minimal modification. In meta-learning, the goal of the trained model is to quickly learn a new task from a small amount of new data, and the model is trained by the meta-learner to be able to learn on a large number of different tasks. The key idea underlying our method is to train the model's initial parameters such that the model has maximal performance on a new task after the parameters have been updated through one or more gradient steps computed with a small amount of data from that new task. Unlike prior meta-learning methods that learn an update function or learning rule (Schmidhuber, 1987; Bengio et al., 1992; Andrychowicz et al., 2016; Ravi & Larochelle, 2017), our algorithm does not expand the number of learned parameters nor place constraints on the model architecture (e.g. by requiring a recurrent model (Santoro et al., 2016) or a Siamese network (Koch, 2015)), and it can be readily combined with fully connected, convolutional, or recurrent neural networks. It can also be used with a variety of loss functions, including differentiable supervised losses and non-differentiable reinforcement learning objectives.
The process of training a model's parameters such that a few gradient steps, or even a single gradient step, can produce good results on a new task can be viewed from a feature learning standpoint as building an internal representation that is broadly suitable for many tasks. If the internal representation is suitable to many tasks, simply fine-tuning the parameters slightly (e.g. by primarily modifying the top layer weights in a feedforward model) can produce good results. In effect, our procedure optimizes for models that are easy and fast to fine-tune, allowing the adaptation to happen in the right space for fast learning. From a dynamical systems standpoint, our learning process can be viewed as maximizing the sensitivity of the loss functions of new tasks with respect to the parameters: when the sensitivity is high, small local changes to the parameters can lead to
large improvements in the task loss.
The primary contribution of this work is a simple model- and task-agnostic algorithm for meta-learning that trains a model's parameters such that a small number of gradient updates will lead to fast learning on a new task. We demonstrate the algorithm on different model types, including fully connected and convolutional networks, and in several distinct domains, including few-shot regression, image classification, and reinforcement learning. Our evaluation shows that our meta-learning algorithm compares favorably to state-of-the-art one-shot learning methods designed specifically for supervised classification, while using fewer parameters, but that it can also be readily applied to regression and can accelerate reinforcement learning in the presence of task variability, substantially outperforming direct pretraining as initialization.
# 2. Model-Agnostic Meta-Learning
We aim to train models that can achieve rapid adaptation, a problem setting that is often formalized as few-shot learning. In this section, we will define the problem setup and present the general form of our algorithm.
# 2.1. Meta-Learning Problem Set-Up
The goal of few-shot meta-learning is to train a model that can quickly adapt to a new task using only a few datapoints and training iterations. To accomplish this, the model or learner is trained during a meta-learning phase on a set of tasks, such that the trained model can quickly adapt to new tasks using only a small number of examples or trials. In effect, the meta-learning problem treats entire tasks as training examples. In this section, we formalize this meta-learning problem setting in a general manner, including brief examples of different learning domains. We will discuss two different learning domains in detail in Section 3.
We consider a model, denoted f, that maps observations x to outputs a. During meta-learning, the model is trained to be able to adapt to a large or infinite number of tasks. Since we would like to apply our framework to a variety of learning problems, from classification to reinforcement learning, we introduce a generic notion of a learning task below. Formally, each task T = {L(x1, a1, . . . , xH, aH), q(x1), q(x_{t+1}|x_t, a_t), H} consists of a loss function L, a distribution over initial observations q(x1), a transition distribution q(x_{t+1}|x_t, a_t), and an episode length H. In i.i.d. supervised learning problems, the length H = 1. The model may generate samples of length H by choosing an output a_t at each time t. The loss L(x1, a1, . . . , xH, aH) → R provides task-specific feedback, which might be in the form of a misclassification loss or a cost function in a Markov decision process.
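As a concrete, purely illustrative rendering of this tuple, one might carry tasks around in a small container like the following; the field names are ours, not the paper's:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """A hypothetical container mirroring
    T = {L, q(x1), q(x_{t+1} | x_t, a_t), H}."""
    loss: Callable        # L(x1, a1, ..., xH, aH) -> float
    init_dist: Callable   # samples x1 ~ q(x1)
    transition: Callable  # samples x_{t+1} ~ q(. | x_t, a_t)
    horizon: int          # episode length H (H = 1 for i.i.d. supervised learning)
```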
Figure 1. Diagram of our model-agnostic meta-learning algorithm (MAML), which optimizes for a representation θ that can quickly adapt to new tasks.
In our meta-learning scenario, we consider a distribution over tasks p(T) that we want our model to be able to adapt to. In the K-shot learning setting, the model is trained to learn a new task Ti drawn from p(T) from only K samples drawn from qi and feedback LTi generated by Ti. During meta-training, a task Ti is sampled from p(T), the model is trained with K samples and feedback from the corresponding loss LTi from Ti, and then tested on new samples from Ti. The model f is then improved by considering how the test error on new data from qi changes with respect to the parameters. In effect, the test error on sampled tasks Ti serves as the training error of the meta-learning process. At the end of meta-training, new tasks are sampled from p(T), and meta-performance is measured by the model's performance after learning from K samples. Generally, tasks used for meta-testing are held out during meta-training.
# 2.2. A Model-Agnostic Meta-Learning Algorithm
In contrast to prior work, which has sought to train recurrent neural networks that ingest entire datasets (Santoro et al., 2016; Duan et al., 2016b) or feature embeddings that can be combined with nonparametric methods at test time (Vinyals et al., 2016; Koch, 2015), we propose a method that can learn the parameters of any standard model via meta-learning in such a way as to prepare that model for fast adaptation. The intuition behind this approach is that some internal representations are more transferrable than others. For example, a neural network might learn internal features that are broadly applicable to all tasks in p(T), rather than a single individual task. How can we encourage the emergence of such general-purpose representations? We take an explicit approach to this problem: since the model will be fine-tuned using a gradient-based learning rule on a new task, we will aim to learn a model in such a way that this gradient-based learning rule can make rapid progress on new tasks drawn from p(T), without overfitting. In effect, we will aim to find model parameters that are sensitive to changes in the task, such that small changes in the parameters will produce large improvements on the loss function of any task drawn from p(T), when altered in the direction of the gradient of that loss (see Figure 1).
Algorithm 1 Model-Agnostic Meta-Learning
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks Ti ∼ p(T)
4:   for all Ti do
5:     Evaluate ∇θ LTi(fθ) with respect to K examples
6:     Compute adapted parameters with gradient descent: θ'_i = θ − α∇θ LTi(fθ)
7:   end for
8:   Update θ ← θ − β∇θ Σ_{Ti∼p(T)} LTi(fθ'_i)
9: end while
We make no assumption on the form of the model, other than to assume that it is parametrized by some parameter vector θ, and that the loss function is smooth enough in θ that we can use gradient-based learning techniques.

Formally, we consider a model represented by a parametrized function fθ with parameters θ. When adapting to a new task Ti, the model's parameters θ become θ'_i. In our method, the updated parameter vector θ'_i is computed using one or more gradient descent updates on task Ti. For example, when using one gradient update,

θ'_i = θ − α∇θ LTi(fθ).

The step size α may be fixed as a hyperparameter or meta-learned. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension.

The model parameters are trained by optimizing for the performance of fθ'_i with respect to θ across tasks sampled from p(T). More concretely, the meta-objective is as follows:

min_θ Σ_{Ti∼p(T)} LTi(fθ'_i) = Σ_{Ti∼p(T)} LTi(f_{θ − α∇θ LTi(fθ)})

Note that the meta-optimization is performed over the model parameters θ, whereas the objective is computed using the updated model parameters θ'. In effect, our proposed method aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task.

The meta-optimization across tasks is performed via stochastic gradient descent (SGD), such that the model parameters θ are updated as follows:

θ ← θ − β∇θ Σ_{Ti∼p(T)} LTi(fθ'_i)    (1)

where β is the meta step size. The full algorithm, in the general case, is outlined in Algorithm 1.

The MAML meta-gradient update involves a gradient through a gradient. Computationally, this requires an additional backward pass through f to compute Hessian-vector products, which is supported by standard deep learning libraries such as TensorFlow (Abadi et al., 2016). In our experiments, we also include a comparison to dropping this backward pass and using a first-order approximation, which we discuss in Section 5.2.

# 3. Species of MAML

In this section, we discuss specific instantiations of our meta-learning algorithm for supervised learning and reinforcement learning. The domains differ in the form of loss function and in how data is generated by the task and presented to the model, but the same basic adaptation mechanism can be applied in both cases.

# 3.1. Supervised Regression and Classification

Few-shot learning is well-studied in the domain of supervised tasks, where the goal is to learn a new function from only a few input/output pairs for that task, using prior data from similar tasks for meta-learning. For example, the goal might be to classify images of a Segway after seeing only one or a few examples of a Segway, with a model that has previously seen many other types of objects. Likewise, in few-shot regression, the goal is to predict the outputs of a continuous-valued function from only a few datapoints sampled from that function, after training on many functions with similar statistical properties.

To formalize the supervised regression and classification problems in the context of the meta-learning definitions in Section 2.1, we can define the horizon H = 1 and drop the timestep subscript on x_t, since the model accepts a single input and produces a single output, rather than a sequence of inputs and outputs. The task Ti generates K i.i.d. observations x from qi, and the task loss is represented by the error between the model's output for x and the corresponding target values y for that observation and task.

Two common loss functions used for supervised classification and regression are cross-entropy and mean-squared error (MSE), which we will describe below; though, other supervised loss functions may be used as well. For regression tasks using mean-squared error, the loss takes the form:

LTi(fφ) = Σ_{x(j), y(j) ∼ Ti} ||fφ(x(j)) − y(j)||²₂    (2)

where x(j), y(j) are an input/output pair sampled from task Ti. In K-shot regression tasks, K input/output pairs are provided for learning for each task.

Similarly, for discrete classification tasks with a cross-entropy loss, the loss takes the form:

LTi(fφ) = Σ_{x(j), y(j) ∼ Ti} y(j) log fφ(x(j)) + (1 − y(j)) log(1 − fφ(x(j)))    (3)
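To make the gradient-through-a-gradient concrete, the following self-contained numpy sketch runs Algorithm 1 on toy linear regression tasks (our own stand-in for the paper's experiments), with the MSE inner loss of Equation 2. For a linear model the inner Hessian is constant, so the second-order term in the meta-gradient can be written exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, alpha, beta = 5, 10, 0.01, 0.01

def sample_task():
    # Toy stand-in for p(T): each task is a random linear map y = w.x.
    return rng.normal(size=d)

def sample_data(w, K):
    X = rng.normal(size=(K, d))
    return X, X @ w

def grad(theta, X, y):
    # Gradient of the MSE loss (Equation 2).
    return 2.0 * X.T @ (X @ theta - y) / len(y)

theta = np.zeros(d)
for step in range(2000):
    meta_grad = np.zeros(d)
    for _ in range(25):                          # batch of tasks
        w = sample_task()
        Xtr, ytr = sample_data(w, K)             # K examples for the inner step
        Xval, yval = sample_data(w, K)           # fresh examples for the meta-update
        theta_i = theta - alpha * grad(theta, Xtr, ytr)   # inner adaptation
        # Meta-gradient: chain rule through the inner update. For a linear
        # model the inner Hessian H = 2 Xtr.T Xtr / K is constant, so the
        # second-order factor (I - alpha * H) is exact here.
        H = 2.0 * Xtr.T @ Xtr / K
        meta_grad += (np.eye(d) - alpha * H) @ grad(theta_i, Xval, yval)
    theta -= beta * meta_grad / 25.0
```

Dropping the (I − αH) factor recovers the first-order approximation discussed in Section 5.2.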
Algorithm 2 MAML for Few-Shot Supervised Learning
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks Ti ∼ p(T)
4:   for all Ti do
5:     Sample K datapoints D = {x(j), y(j)} from Ti
6:     Evaluate ∇θ LTi(fθ) using D and LTi in Equation (2) or (3)
7:     Compute adapted parameters with gradient descent: θ'_i = θ − α∇θ LTi(fθ)
8:     Sample datapoints D'_i = {x(j), y(j)} from Ti for the meta-update
9:   end for
10:  Update θ ← θ − β∇θ Σ_{Ti∼p(T)} LTi(fθ'_i) using each D'_i and LTi in Equation (2) or (3)
11: end while

Algorithm 3 MAML for Reinforcement Learning
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks Ti ∼ p(T)
4:   for all Ti do
5:     Sample K trajectories D = {(x1, a1, ..., xH)} using fθ in Ti
6:     Evaluate ∇θ LTi(fθ) using D and LTi in Equation (4)
7:     Compute adapted parameters with gradient descent: θ'_i = θ − α∇θ LTi(fθ)
8:     Sample trajectories D'_i = {(x1, a1, ..., xH)} using fθ'_i in Ti
9:   end for
10:  Update θ ← θ − β∇θ Σ_{Ti∼p(T)} LTi(fθ'_i) using each D'_i and LTi in Equation (4)
11: end while
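As a toy illustration of the inner loop of Algorithm 3, the sketch below adapts a softmax policy on a two-armed bandit (horizon 1) with a REINFORCE gradient estimate, followed by a first-order meta-step; the full method also differentiates through the adaptation. The task and hyperparameters are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_grad(theta, rewards_per_arm, K=20):
    """REINFORCE estimate of the policy gradient of the expected reward
    for a two-armed bandit: a horizon-1 stand-in for sampling K
    trajectories in step 5 of Algorithm 3."""
    p = np.exp(theta) / np.exp(theta).sum()   # softmax policy
    g = np.zeros_like(theta)
    for _ in range(K):
        a = rng.choice(len(theta), p=p)
        r = rewards_per_arm[a] + rng.normal(0, 0.1)
        glogp = -p
        glogp[a] += 1.0                       # grad of log pi(a)
        g += glogp * r
    return g / K

# Inner-loop adaptation (steps 5-7) for one task, then a meta-step. We
# ascend reward, i.e. descend the negative-reward loss of Equation (4).
alpha, beta, theta = 0.1, 0.05, np.zeros(2)
task = np.array([1.0, 0.0])                           # this task rewards arm 0
theta_i = theta + alpha * reinforce_grad(theta, task)    # adaptation
theta = theta + beta * reinforce_grad(theta_i, task)     # (first-order) meta-step
```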
According to the conventional terminology, K-shot classification tasks use K input/output pairs from each class, for a total of NK data points for N-way classification. Given a distribution over tasks p(Ti), these loss functions can be directly inserted into the equations in Section 2.2 to perform meta-learning, as detailed in Algorithm 2.
# 3.2. Reinforcement Learning
In reinforcement learning (RL), the goal of few-shot meta-learning is to enable an agent to quickly acquire a policy for a new test task using only a small amount of experience in the test setting. A new task might involve achieving a new goal or succeeding on a previously trained goal in a new environment. For example, an agent might learn to quickly figure out how to navigate mazes so that, when faced with a new maze, it can determine how to reliably reach the exit with only a few samples. In this section, we will discuss how MAML can be applied to meta-learning for RL.
Each RL task Ti contains an initial state distribution qi(x1) and a transition distribution qi(x_{t+1}|x_t, a_t), and the loss LTi corresponds to the (negative) reward function R. The entire task is therefore a Markov decision process (MDP) with horizon H, where the learner is allowed to query a limited number of sample trajectories for few-shot learning. Any aspect of the MDP may change across tasks in p(T). The model being learned, fθ, is a policy that maps from states x_t to a distribution over actions a_t at each timestep t ∈ {1, ..., H}. The loss for task Ti and model fφ takes the form

LTi(fφ) = −E_{x_t, a_t ∼ fφ, q_{Ti}} [ Σ_{t=1}^{H} R_i(x_t, a_t) ]    (4)

In K-shot reinforcement learning, K rollouts from fθ and task Ti, (x1, a1, ..., xH), and the corresponding rewards R(x_t, a_t), may be used for adaptation on a new task Ti.

Since the expected reward is generally not differentiable due to unknown dynamics, we use policy gradient methods to estimate the gradient both for the model gradient update(s) and the meta-optimization. Since policy gradients are an on-policy algorithm, each additional gradient step during the adaptation of fθ requires new samples from the current policy fθ'_i. We detail the algorithm in Algorithm 3. This algorithm has the same structure as Algorithm 2, with the principal difference being that steps 5 and 8 require sampling trajectories from the environment corresponding to task Ti. Practical implementations of this method may also use a variety of improvements recently proposed for policy gradient algorithms, including state or action-dependent baselines and trust regions (Schulman et al., 2015).

# 4. Related Work

The method that we propose in this paper addresses the general problem of meta-learning (Thrun & Pratt, 1998; Schmidhuber, 1987; Naik & Mammone, 1992), which includes few-shot learning. A popular approach for meta-learning is to train a meta-learner that learns how to update the parameters of the learner's model (Bengio et al., 1992; Schmidhuber, 1992; Bengio et al., 1990). This approach has been applied to learning to optimize deep networks (Hochreiter et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2017), as well as for learning dynamically changing recurrent networks (Ha et al., 2017). One recent approach learns both the weight initialization and the optimizer, for few-shot image recognition (Ravi & Larochelle, 2017). Unlike these methods, the MAML learner's weights are updated using the gradient, rather than a learned update; our method does not introduce additional parameters for meta-learning nor require a particular learner architecture.
Few-shot learning methods have also been developed for specific tasks such as generative modeling (Edwards & Storkey, 2017; Rezende et al., 2016) and image recognition (Vinyals et al., 2016). One successful approach for few-shot classification is to learn to compare new examples in a learned metric space using e.g. Siamese networks (Koch, 2015) or recurrence with attention mechanisms (Vinyals et al., 2016; Shyam et al., 2017; Snell et al., 2017). These approaches have generated some of the most successful results, but are difficult to directly extend to other problems, such as reinforcement learning. Our method, in contrast, is agnostic to the form of the model and to the particular learning task.
Another approach to meta-learning is to train memory-augmented models on many tasks, where the recurrent learner is trained to adapt to new tasks as it is rolled out. Such networks have been applied to few-shot image recognition (Santoro et al., 2016; Munkhdalai & Yu, 2017) and learning "fast" reinforcement learning agents (Duan et al., 2016b; Wang et al., 2016). Our experiments show that our method outperforms the recurrent approach on few-shot classification. Furthermore, unlike these methods, our approach simply provides a good weight initialization and uses the same gradient descent update for both the learner and meta-update. As a result, it is straightforward to fine-tune the learner for additional gradient steps.

Our approach is also related to methods for initialization of deep networks. In computer vision, models pretrained on large-scale image classification have been shown to learn effective features for a range of problems (Donahue et al., 2014). In contrast, our method explicitly optimizes the model for fast adaptability, allowing it to adapt to new tasks with only a few examples. Our method can also be viewed as explicitly maximizing sensitivity of new task losses to the model parameters. A number of prior works have explored sensitivity in deep networks, often in the context of initialization (Saxe et al., 2014; Kirkpatrick et al., 2016). Most of these works have considered good random initializations, though a number of papers have addressed data-dependent initializers (Krähenbühl et al., 2016; Salimans & Kingma, 2016), including learned initializations (Husken & Goerick, 2000; Maclaurin et al., 2015). In contrast, our method explicitly trains the parameters for sensitivity on a given task distribution, allowing for extremely efficient adaptation for problems such as K-shot learning and rapid reinforcement learning in only one or a few gradient steps.

# 5. Experimental Evaluation

The goal of our experimental evaluation is to answer the following questions: (1) Can MAML enable fast learning of new tasks? (2) Can MAML be used for meta-learning in multiple different domains, including supervised regression, classification, and reinforcement learning? (3) Can a model learned with MAML continue to improve with additional gradient updates and/or examples?

All of the meta-learning problems that we consider require some amount of adaptation to new tasks at test-time. When possible, we compare our results to an oracle that receives the identity of the task (which is a problem-dependent representation) as an additional input, as an upper bound on the performance of the model. All of the experiments were performed using TensorFlow (Abadi et al., 2016), which allows for automatic differentiation through the gradient update(s) during meta-learning. The code is available online.¹
# 5.1. Regression
We start with a simple regression problem that illustrates the basic principles of MAML. Each task involves regressing from the input to the output of a sine wave, where the amplitude and phase of the sinusoid are varied between tasks. Thus, p(T) is continuous, where the amplitude varies within [0.1, 5.0] and the phase varies within [0, π], and the input and output both have a dimensionality of 1. During training and testing, datapoints x are sampled uniformly from [−5.0, 5.0]. The loss is the mean-squared error between the prediction f(x) and the true value. The regressor is a neural network model with 2 hidden layers of size 40 with ReLU nonlinearities. When training with MAML, we use one gradient update with K = 10 examples with a fixed step size α = 0.01, and use Adam as the meta-optimizer (Kingma & Ba, 2015). The baselines are likewise trained with Adam. To evaluate performance, we fine-tune a single meta-learned model on varying numbers of K examples, and compare performance to two baselines: (a) pretraining on all of the tasks, which entails training a network to regress to random sinusoid functions and then, at test-time, fine-tuning with gradient descent on the K provided points, using an automatically tuned step size, and (b) an oracle which receives the true amplitude and phase as input. In Appendix C, we show comparisons to additional multi-task and adaptation methods.
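For concreteness, a task sampler matching this description might look as follows (a sketch; the exact sinusoid parameterization, y = A sin(x − φ), is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sinusoid_task():
    """One regression task from p(T): amplitude in [0.1, 5.0] and
    phase in [0, pi], with inputs drawn uniformly from [-5, 5]."""
    A = rng.uniform(0.1, 5.0)
    phi = rng.uniform(0.0, np.pi)
    def sample_batch(K=10):
        x = rng.uniform(-5.0, 5.0, size=(K, 1))
        return x, A * np.sin(x - phi)
    return sample_batch

task = sample_sinusoid_task()
x, y = task(K=10)   # K = 10 examples for one inner gradient step
```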
We evaluate performance by fine-tuning the model learned by MAML and the pretrained model on K = {5, 10, 20} datapoints. During fine-tuning, each gradient step is computed using the same K datapoints. The qualitative results, shown in Figure 2 and further expanded on in Appendix B, show that the learned model is able to quickly adapt with only 5 datapoints, shown as purple triangles, whereas the model that is pretrained using standard supervised learning on all tasks is unable to adequately adapt with so few datapoints without catastrophic overfitting. Crucially, when the K datapoints are all in one half of the input range, the
¹ Code for the regression and supervised experiments is at github.com/cbfinn/maml and code for the RL experiments is at github.com/cbfinn/maml_rl
[Figure 2 panels: MAML K=5, MAML K=10, pretrained K=5 (step size 0.01), pretrained K=10 (step size 0.02); legend: pre-update, 1 grad step, 10 grad steps, ground truth, points used for grad.]
Figure 2. Few-shot adaptation for the simple regression task. Left: Note that MAML is able to estimate parts of the curve where there are no datapoints, indicating that the model has learned about the periodic structure of sine waves. Right: Fine-tuning of a model pretrained on the same distribution of tasks without MAML, with a tuned step size. Due to the often contradictory outputs on the pre-training tasks, this model is unable to recover a suitable representation and fails to extrapolate from the small number of test-time samples.
[Figure 3 axes: mean squared error vs. number of gradient steps, for MAML (ours), pretrained (step=0.02), and oracle; k-shot regression, k=10.]
Figure 3. Quantitative sinusoid regression results showing the learning curve at meta test-time. Note that MAML continues to improve with additional gradient steps without overfitting to the extremely small dataset during meta-testing, achieving a loss that is substantially lower than the baseline fine-tuning approach.
model trained with MAML can still infer the amplitude and phase in the other half of the range, demonstrating that the MAML-trained model f has learned to model the periodic nature of the sine wave. Furthermore, we observe both in the qualitative and quantitative results (Figure 3 and Appendix B) that the model learned with MAML continues to improve with additional gradient steps, despite being trained for maximal performance after one gradient step. This improvement suggests that MAML optimizes the parameters such that they lie in a region that is amenable to fast adaptation and is sensitive to loss functions from p(T), as discussed in Section 2.2, rather than overfitting to parameters θ that only improve after one step.
# 5.2. Classification
To evaluate MAML in comparison to prior meta-learning and few-shot learning algorithms, we applied our method to few-shot image recognition on the Omniglot (Lake et al., 2011) and MiniImagenet datasets. The Omniglot dataset consists of 20 instances of 1623 characters from 50 different alphabets. Each instance was drawn by a different person. The MiniImagenet dataset was proposed by Ravi & Larochelle (2017), and involves 64 training classes, 12 validation classes, and 24 test classes. The Omniglot and MiniImagenet image recognition tasks are the most common recently used few-shot learning benchmarks (Vinyals et al., 2016; Santoro et al., 2016; Ravi & Larochelle, 2017).
We follow the experimental protocol proposed by Vinyals et al. (2016), which involves fast learning of N-way classification with 1 or 5 shots. The problem of N-way classification is set up as follows: select N unseen classes, provide the model with K different instances of each of the N classes, and evaluate the model's ability to classify new instances within the N classes. For Omniglot, we randomly select 1200 characters for training, irrespective of alphabet, and use the remaining for testing. The Omniglot dataset is augmented with rotations by multiples of 90 degrees, as proposed by Santoro et al. (2016).
Our model follows the same architecture as the embedding function used by Vinyals et al. (2016), which has 4 modules with 3 × 3 convolutions and 64 filters, followed by batch normalization (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and 2 × 2 max-pooling. The Omniglot images are downsampled to 28 × 28, so the dimensionality of the last hidden layer is 64. As in the baseline classifier used by Vinyals et al. (2016), the last layer is fed into a softmax. For Omniglot, we used strided convolutions instead of max-pooling. For MiniImagenet, we used 32 filters per layer to reduce overfitting, as done by (Ravi & Larochelle, 2017). In order to also provide a fair comparison against memory-augmented neural networks (Santoro et al., 2016) and to test the flexibility of MAML, we also provide results for a non-convolutional network. For this, we use a network with 4 hidden layers with sizes 256, 128, 64, 64, each including batch normalization and ReLU nonlinearities, followed by a linear layer and softmax. For all models, the loss function is the cross-entropy error between the predicted and true class. Additional hyperparameter details are included in Appendix A.1.
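A PyTorch sketch of the max-pooling variant of this embedding architecture is given below; this version is illustrative (the paper notes that strided convolutions replace max-pooling for Omniglot):

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One module: 3x3 conv, batch norm, ReLU, 2x2 max-pool.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class FewShotConvNet(nn.Module):
    """The 4-module embedding described above, for 28 x 28 Omniglot
    inputs, with an N-way classification head on top."""
    def __init__(self, n_way, in_ch=1, width=64):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_ch, width), conv_block(width, width),
            conv_block(width, width), conv_block(width, width),
        )
        self.classifier = nn.Linear(width, n_way)  # fed into a softmax loss

    def forward(self, x):              # x: (batch, 1, 28, 28)
        h = self.features(x)           # (batch, 64, 1, 1): 28 -> 14 -> 7 -> 3 -> 1
        return self.classifier(h.flatten(1))
```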
We present the results in Table 1. The convolutional model learned by MAML compares well to the state-of-the-art results on this task, narrowly outperforming the prior methods. Some of these existing methods, such as matching networks, Siamese networks, and memory models, are designed with few-shot classification in mind, and are not readily applicable to domains such as reinforcement learning. Additionally, the model learned with MAML uses
Table 1. Few-shot classification on held-out Omniglot characters (top) and the MiniImagenet test set (bottom). MAML achieves results that are comparable to or outperform state-of-the-art convolutional and recurrent models. Siamese nets, matching nets, and the memory module approaches are all specific to classification, and are not directly applicable to regression or RL scenarios. The ± shows 95% confidence intervals over tasks. Note that the Omniglot results may not be strictly comparable since the train/test splits used in the prior work were not available. The MiniImagenet evaluation of baseline methods and matching networks is from Ravi & Larochelle (2017).
| Omniglot (Lake et al., 2011) | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
| --- | --- | --- | --- | --- |
| MANN, no conv (Santoro et al., 2016) | 82.8% | 94.9% | - | - |
| MAML, no conv (ours) | 89.7 ± 1.1% | 97.5 ± 0.6% | - | - |
| Siamese nets (Koch, 2015) | 97.3% | 98.4% | 88.2% | 97.0% |
| matching nets (Vinyals et al., 2016) | 98.1% | 98.9% | 93.8% | 98.5% |
| neural statistician (Edwards & Storkey, 2017) | 98.1% | 99.5% | 93.2% | 98.1% |
| memory mod. (Kaiser et al., 2017) | 98.4% | 99.6% | 95.0% | 98.6% |
| MAML (ours) | 98.7 ± 0.4% | 99.9 ± 0.1% | 95.8 ± 0.3% | 98.9 ± 0.2% |
| MiniImagenet (Ravi & Larochelle, 2017) | 5-way 1-shot | 5-way 5-shot |
| --- | --- | --- |
| fine-tuning baseline | 28.86 ± 0.54% | 49.79 ± 0.79% |
| nearest neighbor baseline | 41.08 ± 0.70% | 51.04 ± 0.65% |
| matching nets (Vinyals et al., 2016) | 43.56 ± 0.84% | 55.31 ± 0.73% |
| meta-learner LSTM (Ravi & Larochelle, 2017) | 43.44 ± 0.77% | 60.60 ± 0.71% |
| MAML, first order approx. (ours) | 48.07 ± 1.75% | 63.15 ± 0.91% |
| MAML (ours) | 48.70 ± 1.84% | 63.11 ± 0.92% |
Additionally, the model learned with MAML uses fewer overall parameters compared to matching networks and the meta-learner LSTM, since the algorithm does not introduce any additional parameters beyond the weights of the classifier itself. Compared to these prior methods, memory-augmented neural networks (Santoro et al., 2016) specifically, and recurrent meta-learning models in general, represent a more broadly applicable class of methods that, like MAML, can be used for other tasks such as reinforcement learning (Duan et al., 2016b; Wang et al., 2016). However, as shown in the comparison, MAML significantly outperforms memory-augmented networks and the meta-learner LSTM on 5-way Omniglot and MiniImagenet classification, both in the 1-shot and 5-shot case.
A significant computational expense in MAML comes from the use of second derivatives when backpropagating the meta-gradient through the gradient operator in the meta-objective (see Equation (1)). On MiniImagenet, we show a comparison to a first-order approximation of MAML, where these second derivatives are omitted. Note that the resulting method still computes the meta-gradient at the post-update parameter values θ′, which provides for effective meta-learning. Surprisingly however, the performance of this method is nearly the same as that obtained with full second derivatives, suggesting that most of the improvement in MAML comes from the gradients of the objective at the post-update parameter values, rather than the second order updates from differentiating through the gradient update. Past work has observed that ReLU neural networks are locally almost linear (Goodfellow et al., 2015), which suggests that second derivatives may be close to zero in most cases, partially explaining the good
Figure 4. Top: quantitative results from the 2D navigation task. Bottom: qualitative comparison between the model learned with MAML and with fine-tuning from a pretrained network.
performance of the first-order approximation. This approximation removes the need for computing Hessian-vector products in an additional backward pass, which we found led to roughly 33% speed-up in network computation.
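Extending the JAX sketch from Section 5.1, the first-order variant only requires stopping gradients on the inner-loop gradient, so the meta-gradient is taken at the post-update parameters without differentiating through the update itself; this is a sketch, not the paper's exact code.

```python
# First-order approximation: treat the inner gradient as a constant,
# so no second derivatives flow through the update.
def inner_update_fo(params, x, y, alpha=0.01):
    grads = jax.lax.stop_gradient(jax.grad(mse)(params, x, y))
    return jax.tree_util.tree_map(lambda p, g: p - alpha * g, params, grads)

def fomaml_loss(params, x_tr, y_tr, x_te, y_te):
    # Gradient of this loss is the test-set gradient evaluated at the
    # adapted parameters, with an identity Jacobian back to theta.
    return mse(inner_update_fo(params, x_tr, y_tr), x_te, y_te)
```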
# 5.3. Reinforcement Learning
To evaluate MAML on reinforcement learning problems, we constructed several sets of tasks based off of the simulated continuous control environments in the rllab benchmark suite (Duan et al., 2016a). We discuss the individual domains below. In all of the domains, the model trained by MAML is a neural network policy with two hidden layers of size 100, with ReLU nonlinearities. The gradient updates are computed using vanilla policy gradient (REINFORCE) (Williams, 1992), and we use trust-region policy optimization (TRPO) as the meta-optimizer (Schulman et al., 2015). In order to avoid computing third derivatives,
Figure 5. Reinforcement learning results for the half-cheetah and ant locomotion tasks, with the tasks shown on the far right. Each gradient step requires additional samples from the environment, unlike the supervised learning tasks. The results show that MAML can adapt to new goal velocities and directions substantially faster than conventional pretraining or random initialization, achieving good performance in just two or three gradient steps. We exclude the goal velocity, random baseline curves, since the returns are much worse (< −200 for cheetah and < −25 for ant).
we use finite differences to compute the Hessian-vector products for TRPO. For both learning and meta-learning updates, we use the standard linear feature baseline proposed by Duan et al. (2016a), which is fitted separately at each iteration for each sampled task in the batch. We compare to three baseline models: (a) pretraining one policy on all of the tasks and then fine-tuning, (b) training a policy from randomly initialized weights, and (c) an oracle policy which receives the parameters of the task as input, which for the tasks below corresponds to a goal position, goal direction, or goal velocity for the agent. The baseline models of (a) and (b) are fine-tuned with gradient descent with a manually tuned step size. Videos of the learned policies can be viewed at sites.google.com/view/maml.

2D Navigation. In our first meta-RL experiment, we study a set of tasks where a point agent must move to different goal positions in 2D, randomly chosen for each task within a unit square. The observation is the current 2D position, and actions correspond to velocity commands clipped to be in the range [−0.1, 0.1]. The reward is the negative squared distance to the goal, and episodes terminate when the agent is within 0.01 of the goal or at the horizon of H = 100. The policy was trained with MAML to maximize performance after 1 policy gradient update using 20 trajectories. Additional hyperparameter settings for this problem and the following RL problems are in Appendix A.2. In our evaluation, we compare adaptation to a new task with up to 4 gradient updates, each with 40 samples. The results in Figure 4 show the adaptation performance of models that are initialized with MAML, conventional pretraining on the same set of tasks, random initialization, and an oracle policy that receives the goal position as input. The results show that MAML can learn a model that adapts much more quickly in a single gradient update, and furthermore continues to improve with additional updates.
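The finite-difference Hessian-vector product mentioned at the start of this section can be sketched as a central difference of gradients; `grad_fn`, the flat parameter layout, and `eps` are illustrative assumptions.

```python
# Sketch: Hv ≈ (∇L(θ + εv) − ∇L(θ − εv)) / (2ε), avoiding an explicit
# higher-order derivative (third derivatives, inside TRPO's meta-update).
import numpy as np

def hvp_finite_difference(grad_fn, theta, v, eps=1e-5):
    g_plus = grad_fn(theta + eps * v)
    g_minus = grad_fn(theta - eps * v)
    return (g_plus - g_minus) / (2.0 * eps)
```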
Locomotion. To study how well MAML can scale to more complex deep RL problems, we also study adaptation on high-dimensional locomotion tasks with the MuJoCo simulator (Todorov et al., 2012). The tasks require two simulated robots, a planar cheetah and a 3D quadruped (the "ant"), to run in a particular direction or at a particular velocity. In the goal velocity experiments, the reward is the negative absolute value between the current velocity of the agent and a goal, which is chosen uniformly at random between 0.0 and 2.0 for the cheetah and between 0.0 and 3.0 for the ant. In the goal direction experiments, the reward is the magnitude of the velocity in either the forward or backward direction, chosen at random for each task in p(T ). The horizon is H = 200, with 20 rollouts per gradient step for all problems except the ant forward/backward task, which used 40 rollouts per step. The results in Figure 5 show that MAML learns a model that can quickly adapt its velocity and direction with even just a single gradient update, and continues to improve with more gradient steps. The results also show that, on these challenging tasks, the MAML initialization substantially outperforms random initialization and pretraining. In fact, pretraining is in some cases worse than random initialization, a fact observed in prior RL work (Parisotto et al., 2016).
# 6. Discussion and Future Work
We introduced a meta-learning method based on learning easily adaptable model parameters through gradient descent. Our approach has a number of benefits. It is simple and does not introduce any learned parameters for meta-learning. It can be combined with any model representation that is amenable to gradient-based training, and any differentiable objective, including classification, regression, and reinforcement learning. Lastly, since our method merely produces a weight initialization, adaptation can be performed with any amount of data and any number of gradient steps, though we demonstrate state-of-the-art results on classification with only one or five examples per class. We also show that our method can adapt an RL agent using policy gradients and a very modest amount of experience.
Reusing knowledge from past tasks may be a crucial ingredient in making high-capacity scalable models, such as deep neural networks, amenable to fast training with small datasets. We believe that this work is one step toward a simple and general-purpose meta-learning technique that can be applied to any problem and any model. Further research in this area can make multitask initialization a standard ingredient in deep learning and reinforcement learning.
# Acknowledgements
The authors would like to thank Xi Chen and Trevor Darrell for helpful discussions, Yan Duan and Alex Lee for technical advice, Nikhil Mishra, Haoran Tang, and Greg Kahn for feedback on an early draft of the paper, and the anonymous reviewers for their comments. This work was supported in part by an ONR PECASE award and an NSF GRFP award.
# References

Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. In Neural Information Processing Systems (NIPS), 2016.

Bengio, Samy, Bengio, Yoshua, Cloutier, Jocelyn, and Gecsei, Jan. On the optimization of a synaptic learning rule. In Optimality in Artificial and Biological Neural Networks, pp. 6-8, 1992.

Bengio, Yoshua, Bengio, Samy, and Cloutier, Jocelyn. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.

Donahue, Jeff, Jia, Yangqing, Vinyals, Oriol, Hoffman, Judy, Zhang, Ning, Tzeng, Eric, and Darrell, Trevor. DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.

Duan, Yan, Chen, Xi, Houthooft, Rein, Schulman, John, and Abbeel, Pieter. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning (ICML), 2016a.

Duan, Yan, Schulman, John, Chen, Xi, Bartlett, Peter L, Sutskever, Ilya, and Abbeel, Pieter. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016b.

Edwards, Harrison and Storkey, Amos. Towards a neural statistician. International Conference on Learning Representations (ICLR), 2017.

Goodfellow, Ian J, Shlens, Jonathon, and Szegedy, Christian. Explaining and harnessing adversarial examples. International Conference on Learning Representations (ICLR), 2015.

Ha, David, Dai, Andrew, and Le, Quoc V. Hypernetworks. International Conference on Learning Representations (ICLR), 2017.

Hochreiter, Sepp, Younger, A Steven, and Conwell, Peter R. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks. Springer, 2001.

Husken, Michael and Goerick, Christian. Fast learning for problem classes using knowledge based network initialization. In Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, volume 6, pp. 619-624. IEEE, 2000.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning (ICML), 2015.

Kaiser, Lukasz, Nachum, Ofir, Roy, Aurko, and Bengio, Samy. Learning to remember rare events. International Conference on Learning Representations (ICLR), 2017.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.

Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka, et al. Overcoming catastrophic forgetting in neural networks. arXiv preprint arXiv:1612.00796, 2016.

Koch, Gregory. Siamese neural networks for one-shot image recognition. ICML Deep Learning Workshop, 2015.

Krähenbühl, Philipp, Doersch, Carl, Donahue, Jeff, and Darrell, Trevor. Data-dependent initializations of convolutional neural networks. International Conference on Learning Representations (ICLR), 2016.

Lake, Brenden M, Salakhutdinov, Ruslan, Gross, Jason, and Tenenbaum, Joshua B. One shot learning of simple visual concepts. In Conference of the Cognitive Science Society (CogSci), 2011.

Li, Ke and Malik, Jitendra. Learning to optimize. International Conference on Learning Representations (ICLR), 2017.

Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning (ICML), 2015.

Munkhdalai, Tsendsuren and Yu, Hong. Meta networks. International Conference on Machine Learning (ICML), 2017.

Naik, Devang K and Mammone, RJ. Meta-neural networks that learn by learning. In International Joint Conference on Neural Networks (IJCNN), 1992.

Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Actor-mimic: Deep multitask and transfer reinforcement learning. International Conference on Learning Representations (ICLR), 2016.

Ravi, Sachin and Larochelle, Hugo. Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), 2017.

Rei, Marek. Online representation learning in recurrent neural language models. arXiv preprint arXiv:1508.03854, 2015.

Rezende, Danilo Jimenez, Mohamed, Shakir, Danihelka, Ivo, Gregor, Karol, and Wierstra, Daan. One-shot generalization in deep generative models. International Conference on Machine Learning (ICML), 2016.

Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Neural Information Processing Systems (NIPS), 2016.

Santoro, Adam, Bartunov, Sergey, Botvinick, Matthew, Wierstra, Daan, and Lillicrap, Timothy. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning (ICML), 2016.

Saxe, Andrew, McClelland, James, and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. International Conference on Learning Representations (ICLR), 2014.

Schmidhuber, Jurgen. Evolutionary principles in self-referential learning. (On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.

Schmidhuber, Jürgen. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 1992.

Schulman, John, Levine, Sergey, Abbeel, Pieter, Jordan, Michael I, and Moritz, Philipp. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015.

Shyam, Pranav, Gupta, Shubham, and Dukkipati, Ambedkar. Attentive recurrent comparators. International Conference on Machine Learning (ICML), 2017.

Snell, Jake, Swersky, Kevin, and Zemel, Richard S. Prototypical networks for few-shot learning. arXiv preprint arXiv:1703.05175, 2017.

Thrun, Sebastian and Pratt, Lorien. Learning to learn. Springer Science & Business Media, 1998.

Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems (IROS), 2012.

Vinyals, Oriol, Blundell, Charles, Lillicrap, Tim, Wierstra, Daan, et al. Matching networks for one shot learning. In Neural Information Processing Systems (NIPS), 2016.

Wang, Jane X, Kurth-Nelson, Zeb, Tirumala, Dhruva, Soyer, Hubert, Leibo, Joel Z, Munos, Remi, Blundell, Charles, Kumaran, Dharshan, and Botvinick, Matt. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.

Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
# A. Additional Experiment Details
In this section, we provide additional details of the experimental set-up and hyperparameters.
# A.1. Classification
For N-way, K-shot classification, each gradient is computed using a batch size of NK examples. For Omniglot, the 5-way convolutional and non-convolutional MAML models were each trained with 1 gradient step with step size α = 0.4 and a meta batch-size of 32 tasks. The network was evaluated using 3 gradient steps with the same step size α = 0.4. The 20-way convolutional MAML model was trained and evaluated with 5 gradient steps with step size α = 0.1. During training, the meta batch-size was set to 16 tasks. For MiniImagenet, both models were trained using 5 gradient steps of size α = 0.01, and evaluated using 10 gradient steps at test time. Following Ravi & Larochelle (2017), 15 examples per class were used for evaluating the post-update meta-gradient. We used a meta batch-size of 4 and 2 tasks for 1-shot and 5-shot training respectively. All models were trained for 60000 iterations on a single NVIDIA Pascal Titan X GPU.
# A.2. Reinforcement Learning
In all reinforcement learning experiments, the MAML policy was trained using a single gradient step with α = 0.1. During evaluation, we found that halving the learning rate after the first gradient step produced superior performance. Thus, the step size during adaptation was set to α = 0.1 for the first step, and α = 0.05 for all future steps. The step sizes for the baseline methods were manually tuned for each domain. In the 2D navigation task, we used a meta batch size of 20; in the locomotion problems, we used a meta batch size of 40 tasks. The MAML models were trained for up to 500 meta-iterations, and the model with the best average return during training was used for evaluation. For the ant goal velocity task, we added a positive reward bonus at each timestep to prevent the ant from ending the episode.
# B. Additional Sinusoid Results
In Figure 6, we show the full quantitative results of the MAML model trained on 10-shot learning and evaluated on 5-shot, 10-shot, and 20-shot. In Figure 7, we show the qualitative performance of MAML and the pretrained baseline on randomly sampled sinusoids.

Figure 6. Quantitative sinusoid regression results showing test-time learning curves with varying numbers of K test-time samples. Each gradient step is computed using the same K examples. Note that MAML continues to improve with additional gradient steps without overfitting to the extremely small dataset during meta-testing, and achieves a loss that is substantially lower than the baseline fine-tuning approach.
# C. Additional Comparisons
In this section, we include more thorough evaluations of our approach, including additional multi-task baselines and a comparison representative of the approach of Rei (2015).

# C.1. Multi-task baselines

The pretraining baseline in the main text trained a single network on all tasks, which we referred to as "pretraining on all tasks". To evaluate the model, as with MAML, we fine-tuned this model on each test task using K examples. In the domains that we study, different tasks involve different output values for the same input. As a result, by pre-training on all tasks, the model would learn to output the average output for a particular input value. In some instances, this model may learn very little about the actual domain, and instead learn about the range of the output space.

We experimented with a multi-task method to provide a point of comparison, where instead of averaging in the output space, we averaged in the parameter space. To achieve averaging in parameter space, we sequentially trained 500 separate models on 500 tasks drawn from p(T ). Each model was initialized randomly and trained on a large amount of data from its assigned task. We then took the average parameter vector across models and fine-tuned on 5 datapoints with a tuned step size. All of our experiments for this method were on the sinusoid task because of computational requirements. The error of the individual regressors was low: less than 0.02 on their respective sine waves.

We tried three variants of this set-up. During training of the individual regressors, we tried using one of the following: no regularization, standard l2 weight decay, and l2 weight regularization to the mean parameter vector thus far of the trained regressors. The latter two variants encourage the individual models to find parsimonious solutions. When using regularization, we set the magnitude of the regularization to be as high as possible without significantly deterring performance. In our results, we refer to this approach as "multi-task". As seen in the results in Table 2, we find averaging in the parameter space (multi-task) performed worse than averaging in the output space (pretraining on all tasks). This suggests that it is difficult to find parsimonious solutions to multiple tasks when training on tasks separately, and that MAML is learning a solution that is more sophisticated than the mean optimal parameter vector.

Table 2. Additional multi-task baselines on the sinusoid regression domain, showing 5-shot mean squared error. The results suggest that MAML is learning a solution more sophisticated than the mean optimal parameter vector.

| num. grad steps | 1 | 5 | 10 |
| --- | --- | --- | --- |
| multi-task, no reg | 4.19 | 3.85 | 3.69 |
| multi-task, l2 reg | 7.18 | 5.69 | 5.60 |
| multi-task, reg to mean θ | 2.91 | 2.72 | 2.71 |
| pretrain on all tasks | 2.41 | 2.23 | 2.19 |
| MAML (ours) | 0.67 | 0.38 | 0.35 |
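The parameter-space averaging above can be sketched as follows; flat per-task parameter vectors and the exact regularizer form are assumptions for illustration.

```python
# Sketch of the multi-task baseline: average per-task parameter
# vectors, with an optional l2 pull toward the running mean while
# each per-task model trains.
import numpy as np

def average_parameters(per_task_params):
    # per_task_params: list of flat parameter vectors, one per task
    return np.mean(np.stack(per_task_params, axis=0), axis=0)

def reg_to_mean(theta, mean_so_far, lam):
    # l2 regularization toward the mean of previously trained models
    return lam * float(np.sum((theta - mean_so_far) ** 2))
```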
# C.2. Context vector adaptation
Rei (2015) developed a method which learns a context vector that can be adapted online, with an application to recurrent language models. The parameters in this context vector are learned and adapted in the same way as the parameters in the MAML model. To provide a comparison to using such a context vector for meta-learning problems, we concatenated a set of free parameters z to the input x, and only allowed the gradient steps to modify z, rather than modifying the model parameters θ, as in MAML. For image inputs, z was concatenated channel-wise with the input image.
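Continuing the JAX sketch from Section 5.1, the comparison can be implemented by appending a free context vector z to the input and adapting only z in the inner loop; the flat concatenation below is an illustrative simplification of the channel-wise version used for images, and assumes the network's first layer is sized for the augmented input.

```python
# Sketch: adapt only the context vector z; the model weights (and the
# initial z) are meta-learned but held fixed during inner adaptation.
def forward_ctx(params, z, x):
    z_rep = jnp.broadcast_to(z, (x.shape[0],) + z.shape)
    return forward(params, jnp.concatenate([x, z_rep], axis=1))

def ctx_loss(z, params, x, y):
    return jnp.mean((forward_ctx(params, z, x) - y) ** 2)

def adapt_context(z, params, x, y, alpha=0.01):
    return z - alpha * jax.grad(ctx_loss)(z, params, x, y)
```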
We ran this method on Omniglot and two RL domains following the same experimental protocol. We report the results in Tables 3, 4, and 5. Learning an adaptable context vector performed well on the toy pointmass problem, but sub-par on more difficult problems, likely due to a less flexible meta-optimization.
Table 3. 5-way Omniglot classification.

| | 1-shot | 5-shot |
| --- | --- | --- |
| context vector | 94.9 ± 0.9% | 97.7 ± 0.3% |
| MAML | 98.7 ± 0.4% | 99.9 ± 0.1% |

Table 4. 2D pointmass, average return.

| num. grad steps | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| context vector | −42.42 | −13.90 | −5.17 | −3.18 |
| MAML (ours) | −40.41 | −11.68 | −3.33 | −3.23 |
Table 5. Half-cheetah forward/backward, average return.

| num. grad steps | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| context vector | −40.49 | −44.08 | −38.27 | −42.50 |
| MAML (ours) | −50.69 | | | 315.65 |
Figure 7. A random sample of qualitative results from the sinusoid regression task.
"id": "1612.00796"
} |
1703.03429 | What can you do with a rock? Affordance extraction via word embeddings | Autonomous agents must often detect affordances: the set of behaviors enabled
by a situation. Affordance detection is particularly helpful in domains with
large action spaces, allowing the agent to prune its search space by avoiding
futile behaviors. This paper presents a method for affordance extraction via
word embeddings trained on a Wikipedia corpus. The resulting word vectors are
treated as a common knowledge database which can be queried using linear
algebra. We apply this method to a reinforcement learning agent in a text-only
environment and show that affordance-based action selection improves
performance most of the time. Our method increases the computational complexity
of each learning step but significantly reduces the total number of steps
needed. In addition, the agent's action selections begin to resemble those a
human would choose. | http://arxiv.org/pdf/1703.03429 | Nancy Fulda, Daniel Ricks, Ben Murdoch, David Wingate | cs.AI, cs.CL | 7 pages, 7 figures, 2 algorithms, data runs were performed using the
Autoplay learning environment for interactive fiction | Proceedings of the Twenty-Sixth International Joint Conference on
Artificial Intelligence (IJCAI), Pages 1039-1045, 2017 | cs.AI | 20170309 | 20170309
# What can you do with a rock? Affordance extraction via word embeddings
Nancy Fulda and Daniel Ricks and Ben Murdoch and David Wingate {nfulda, daniel_ricks, murdoch, wingated}@byu.edu
# Brigham Young University
# Abstract
Autonomous agents must often detect affordances: the set of behaviors enabled by a situation. Affordance detection is particularly helpful in domains with large action spaces, allowing the agent to prune its search space by avoiding futile behaviors. This paper presents a method for affordance extraction via word embeddings trained on a Wikipedia corpus. The resulting word vectors are treated as a common knowledge database which can be queried using linear algebra. We apply this method to a reinforcement learning agent in a text-only environment and show that affordance-based action selection improves performance most of the time. Our method increases the computational complexity of each learning step but significantly reduces the total number of steps needed. In addition, the agent's action selections begin to resemble those a human would choose.
# 1 Introduction
The physical world is filled with constraints. You can open a door, but only if it isn't locked. You can douse a fire, but only if a fire is present. You can throw a rock or drop a rock or even, under certain circumstances, converse with a rock, but you cannot traverse it, enumerate it, or impeach it. The term affordances [Gibson, 1977] refers to the subset of possible actions which are feasible in a given situation. Human beings detect these affordances automatically, often subconsciously, but it is not uncommon for autonomous learning agents to attempt impossible or even ridiculous actions, thus wasting effort on futile behaviors.
This paper presents a method for affordance extraction based on the copiously available linguistic information in online corpora. Word embeddings trained using Wikipedia articles are treated as a common sense knowledge base that encodes (among other things) object-specific affordances. Because knowledge is represented as vectors, the knowledge base can be queried using linear algebra. This somewhat counterintuitive notion - the idea that words can be manipulated mathematically - creates a theoretical bridge between the frustrating realities of real-world systems and the immense wealth of common sense knowledge implicitly encoded in online corpora.
We apply our technique to a text-based environment and show that a priori knowledge provided by affordance extraction greatly speeds learning. Specifically, we reduce the agent's search space by (a) identifying actions afforded by a given object; and (b) discriminating objects that can be grasped, lifted and manipulated from objects which can merely be observed. Because the agent explores only those actions which "make sense", it is able to discover valuable behaviors more quickly than a comparable agent using a brute force approach. Critically, the affordance agent is demonstrably able to eliminate extraneous actions without (in most cases) discarding beneficial ones.
# 2 Related Work
Our research relies heavily on word2vec [Mikolov et al., 2013a], an algorithm that encodes individual words based on the contexts in which they tend to appear. Earlier work has shown that word vectors trained using this method contain intriguing semantic properties, including structured representations of gender and geography [Mikolov et al., 2013b; Mikolov et al., 2013c]. The (by now) archetypal example of such properties is represented by the algebraic expression vector["king"] − vector["man"] + vector["woman"] = vector["queen"].
Researchers have leveraged these properties for diverse applications including sentence- and paragraph-level encoding [Kiros et al., 2015; Le and Mikolov, 2014], image categorization [Frome et al., 2013], bidirectional retrieval [Karpathy et al., 2014], semantic segmentation [Socher et al., 2011], biomedical document retrieval [Brokos et al., 2016], and the alignment of movie scripts to their corresponding source texts [Zhu et al., 2015]. Our work is most similar to [Zhu et al., 2014]; however, rather than using a Markov Logic Network to build an explicit knowledge base, we instead rely on the semantic structure implicitly encoded in skip-grams.
Affordance detection, a topic of rising importance in our increasingly technological society, has been attempted and/or accomplished using visual characteristics [Song et al., 2011; Song et al., 2015], haptic data [Navarro et al., 2012], visuomotor simulation [Schenck et al., 2012; Schenck et al., 2016], repeated real-world experimentation [Montesano et al., 2007;
Stoytchev, 2008], and knowledge base representations [Zhu et al., 2014].
In 2001 [Laird and van Lent, 2001] identified text-based adventure games as a step toward general problem solving. The same year at AAAI, Mark DePristo and Robert Zubek unveiled a hybrid system for text-based game play [Arkin, 1998], which operated on hand-crafted logic trees combined with a secondary sensory system used for goal selection. The handcrafted logic worked well, but goal selection broke down and became cluttered due to the scale of the environment. Perhaps most notably, in 2015 [Narasimhan et al., 2015] designed an agent which passed the text output of the game through an LSTM [Hochreiter and Schmidhuber, 1997] to find a state representation, then used a DQN [Mnih et al., 2015] to select a Q-valued action. This approach appeared to work well within a small discrete environment with reliable state action pairs, but as the complexity and alphabet of the environment grew, the clarity of Q-values broke down and left them with a negative overall reward. Our work, in contrast, is able to find meaningful state action pairs even in complex environments with many possible actions.
# 3 Wikipedia as a Common Sense Knowledge Base
Google "knowledge base", and you'll get a list of hand-crafted systems, both commercial and academic, with strict constraints on encoding methods. These highly-structured, often node-based solutions are successful at a wide variety of tasks including topic gisting [Liu and Singh, 2004], affordance detection [Zhu et al., 2014] and general reasoning [Russ et al., 2011]. Traditional knowledge bases are human-interpretable, closely tied to high-level human cognitive functions, and able to encode complex relationships compactly and effectively.
It may seem strange, then, to treat Wikipedia as a knowledge base. When compared with curated solutions like ConceptNet [Liu and Singh, 2004], Cyc [Matuszek et al., 2006], and WordNet [Miller, 1995], its contents are largely unstructured, polluted by irrelevant data, and prone to user error. When used as a training corpus for the word2vec algorithm, however, Wikipedia becomes more tractable. The word vectors create a compact representation of the knowledge base and, as observed by [Bolukbasi et al., 2016a] and [Bolukbasi et al., 2016b], can even encode relationships about which a human author is not consciously cognizant. Perhaps most notably, Wikipedia and other online corpora are constantly updated in response to new developments and new human insight; hence, they do not require explicit maintenance.
However: in order to leverage the semantic structure implicitly encoded within Wikipedia, we must be able to interpret the resulting word vectors. Significant semantic relationships are not readily apparent from the raw word vectors or from their PCA reduction. In order to extract useful information, the database must be queried through a mathematical process. For example, in Figure 1 a dot product is used to project gendered terms onto the space defined by vector["king"] − vector["queen"] and vector["woman"] − vector["man"]. In such a projection, the mathematical relationship between the words is readily apparent. Masculine
Figure 1: Word vectors projected into the space defined by vector["king"] − vector["queen"] and vector["woman"] − vector["man"]. In this projection, masculine and feminine terms are linearly separable.
and feminine terms become linearly separable, making it easy to distinguish instances of each group.
These relationships can be leveraged to detect affordances, and thus reduce the agent's search space. In its most general interpretation, the adjective affordant describes the set of actions which are physically possible under given conditions. In the following subsections, however, we use it in the more restricted sense of actions which seem reasonable. For example, it is physically possible to eat a pencil, but it does not "make sense" to do so.
# 3.1 Verb/Noun affordances
So how do you teach an algorithm what "makes sense"? We address this challenge through an example-based query. First we provide a canonical set of verb/noun pairs which illustrate the relationship we desire to extract from the knowledge base. Then we query the database using the analogy format presented by [Mikolov et al., 2013a]. Using their terminology, the analogy sing:song::[?]:[x] encodes the following question: If the affordant verb for "song" is "sing", then what is the affordant verb for [x]?
In theory, a single canonical example is sufficient to perform a query. However, experience has shown that results are better when multiple canonical values are averaged.
More formally, let W be the set of all English-language word vectors in our agent's vocabulary. Further, let N = {n_1, ..., n_j} ⊂ W be the set of all nouns in W and let V = {v_1, ..., v_k} ⊂ W be the set of all verbs in W.
Let C = {(v_1, n_1), ..., (v_m, n_m)} represent a set of canonical verb/noun pairs used by our algorithm. We use C to define an affordance vector a = (1/m) Σ_i (v_i − n_i), which can be thought of as the distance and direction within the embedding space which encodes affordant behavior.
In our experiments we used the following verb/noun pairs as our canonical set:
| Our algorithm | Co-occurrence | ConceptNet |
| --- | --- | --- |
| vanquish, duel, unsheath, summon, wield, overpower, cloak, impale, battle, behead | die, have, cut, make, fight, kill, move, use, destroy, be | kill, parry, strike, slash, look cool, cut, harm, fence, thrust, injure |
Figure 2: Verb associations for the noun "sword" using three different methods: (1) Affordance detection using word vectors extracted from Wikipedia, as described in this section, (2) Strict co-occurrence counts using a Wikipedia corpus and a co-occurrence window of 9 words, (3) Results generated using ConceptNet's CapableOf relationship.
["sing song", "drink water", "read book", "eat food", "wear coat", "drive car", "ride horse", "give gift", "attack enemy", "say word", "open door", "climb tree", "heal wound", "cure disease", "paint picture"]
We describe a verb/noun pair (v, n) as affordant to the extent that n + a ≈ v. Therefore, a typical knowledge base query would return the closest n verbs {v_1, ..., v_n} to the point n + a.
For example, using the canonical set listed above and a set of pre-trained word vectors, a query using n = vector["sword"] returns the following:
["vanquish", "duel", "unsheathe", "wield", "summon", "behead", "battle", "impale", "overpower", "cloak"]
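A sketch of this query in Python appears below; `vec` (a word-to-vector lookup) and `verb_vocab` are assumed to come from word2vec embeddings trained as described above, the canonical list is abbreviated, and ranking by Euclidean distance is one reasonable choice (cosine similarity is a common alternative).

```python
# Sketch of the affordance query: average the canonical (verb - noun)
# offsets, add the offset to the query noun, and return nearby verbs.
import numpy as np

CANONICAL = [("sing", "song"), ("drink", "water"), ("read", "book"),
             ("eat", "food"), ("ride", "horse")]  # abbreviated set

def affordant_verbs(noun, vec, verb_vocab, top_n=10):
    a = np.mean([vec[v] - vec[n] for v, n in CANONICAL], axis=0)
    target = vec[noun] + a
    ranked = sorted(verb_vocab,
                    key=lambda v: np.linalg.norm(vec[v] - target))
    return ranked[:top_n]
```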
Intuitively, this query process produces verbs which answer the question, "What should you do with an [x]?". For example, when word vectors are trained on a Wikipedia corpus with part-of-speech tagging, the five most affordant verbs to the noun "horse" are {"gallop", "ride", "race", "horse", "outrun"}, and the top five results for "king" are {"dethrone", "disobey", "depose", "reign", "abdicate"}.
The resulting lists are surprisingly logical, especially given the unstructured nature of the Wikipedia corpus from which the vector embeddings were extracted. Subjective examination suggests that affordances extracted using Wikipedia are at least as relevant as those produced by more traditional methods (see Figure 2).
It is worth noting that our algorithm is not resilient to polysemy, and behaves unpredictably when multiple interpretations exist for a given word. For example, the verb "eat" is highly affordant with respect to most food items, but the twelve most salient results for "apple" are {"apple", "package", "program", "release", "sync", "buy", "outsell", "download", "install", "reinstall", "uninstall", "reboot"}. In this case, "Apple, the software company" is more strongly represented in the corpus than "apple, the fruit".
# 3.2

Finding a verb that matches a given noun is useful. But an autonomous agent is often confronted with more than one object at a time. How should it determine which object to manipulate, or whether any of the objects are manipulable? Pencils,
Figure 3: Word vectors projected into the space defined by vector["forest"] − vector["tree"] and vector["mountain"] − vector["pebble"]. Small, manipulable objects appear in the lower-left corner of the graph. Large, abstract, or background objects appear in the upper right. An object's manipulability can be roughly estimated by measuring its location along either of the defining axes.
pillows, and coffee mugs are easy to grasp and lift, but the same cannot be said of shadows, boulders, or holograms.
To identify affordant nouns - i.e. nouns that can be manipulated in a meaningful way - we again utilize analogies based on canonical examples. In this section, we describe a noun as affordant to the extent that it can be pushed, pulled, grasped, transported, or transformed. After all, it would not make much sense to lift a sunset or unlock a cliff.
We begin by defining canonical affordance vectors a_x = n_x1 − n_x0 and a_y = n_y1 − n_y0 for each axis of the affordant vector space. Then, for each object o_i under consideration, we compute a pair of projections p_i,x = o_i · a_x and p_i,y = o_i · a_y.
The results of such a projection can be seen in Figure 3. This query is distinct from those described in Section 3.1 because, instead of using analogies to test the relationships between nouns and verbs, we are instead locating a noun on the spectrum defined by two other words.
In our experiments, we used a single canonical vector, vector["forest"] - vector["tree"], to distinguish between nouns of different classes. Potentially affordant nouns were projected onto this line of manipulability, with the word whose projection lay closest to "tree" being selected for further experimentation.
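A sketch of this projection follows; `vec` is the same hypothetical embedding lookup, and the sign convention (a smaller projection onto vector["forest"] − vector["tree"] meaning closer to the "tree" end) is an assumption about the axis orientation.

```python
# Sketch: project candidate nouns onto the manipulability axis and
# keep the one nearest the 'tree' (small, graspable) end.
import numpy as np

def most_manipulable(nouns, vec):
    axis = vec["forest"] - vec["tree"]
    return min(nouns, key=lambda n: float(np.dot(vec[n], axis)))
```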
Critical to this approach is the insight that canonical word vectors are most effective when they are thought of as exemplars rather than as descriptors. For example, vector["forest"] − vector["tree"] and vector["building"] − vector["brick"] function reasonably well as projections for identifying manipulable items. vector["big"] − vector["small"], on the other hand, is utterly ineffective.
Algorithm 1 Noun Selection With Affordance Detection
1: state = game response to last command
2: manipulable nouns ← {}
3: for each word w ∈ state do
4:   if w is a noun then
5:     project w onto the manipulability axis (Section 3.2)
6:     if the projection marks w as manipulable then
7:       add w to manipulable nouns
8:     end if
9: end for
10: noun = a randomly selected noun from manipulable nouns
Algorithm 2 Verb Selection With Analogy Reduction
1: navigation verbs = ["north", "south", "east", "west", "northeast", "southeast", "southwest", "northwest", "up", "down", "enter"]
2: manipulation verbs = a list of the 1000 most common verbs
3: essential manipulation verbs = ["get", "drop", "push", "pull", "open", "close"]
4: affordant verbs = verbs returned by word2vec that match noun
5: affordant verbs = affordant verbs ∩ manipulation verbs
6: final verbs = navigation verbs ∪ affordant verbs ∪ essential manipulation verbs
7: verb = a randomly selected verb from final verbs
# 4 Test Environment: A World Made of Words

In this paper, we test our ideas in the challenging world of text-based adventure gaming. Text-based adventure games offer an unrestricted, free-form interface: the player is presented with a block of text describing a situation, and must respond with a written phrase. Typical actions include commands such as: "examine wallet", "eat apple", or "light campfire with matches". The game engine parses this response and produces a new block of text. The resulting interactions, although syntactically simple, provide a fertile research environment for natural language processing and human/computer interaction. Game players must identify objects that are manipulable and apply appropriate actions to those objects in order to make progress.
In these games, the learning agent faces a frustrating dichotomy: its action set must be large enough to accommodate any situation it encounters, and yet each additional action increases the size of its search space. A brute force approach to such scenarios is frequently futile, and yet factorization, function approximation, and other search space reduction techniques bring the risk of data loss. We desire an agent that is able to clearly perceive all its options, and yet applies only that subset which is likely to produce results.
In other words, we want an agent that explores the game world the same way a human does: by trying only those actions that "make sense". In the following sections, we show that affordance-based action selection provides a meaningful first step towards this goal.
# 4.1 Learning algorithm

Our agent utilizes a variant of Q-learning [Watkins and Dayan, 1992], a reinforcement learning algorithm which attempts to maximize expected discounted reward. Q-values are updated according to the equation
ΔQ(s, a) = α(R(s, a) + γ max_{a′} Q(s′, a′) − Q(s, a))   (1)

where Q(s, a) is the expected reward for performing action a in observed state s, α is the learning rate, γ is the discount
Figure 4: Sample text from the adventure game Zork. Player responses follow a single angle bracket.
factor, and s′ is the new state observation after performing action a. Because our test environments are typically deterministic with a high percentage of consumable rewards, we modify this algorithm slightly, setting α = 1 and constraining Q-value updates such that
Q′(s, a) = max(Q(s, a), Q(s, a) + ΔQ(s, a))   (2)
This adaptation encourages the agent to retain behaviors that have produced a reward at least once, even if the reward fails to manifest on subsequent attempts. The goal is to prevent the agent from "unlearning" behaviors that are no longer effective during the current training epoch, but which will be essential in order to score points during the next round of play.
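A sketch of this update rule follows; `available_actions` is a hypothetical helper enumerating the verb/object pairs for a state, and the discount value is illustrative since it is not stated here.

```python
# Sketch of Equations (1)-(2) with alpha = 1: Q-values may only
# increase, so once-rewarded behaviors are retained.
def q_update(Q, s, a, r, s_next, available_actions, gamma=0.9):
    best_next = max((Q.get((s_next, b), 0.0) for b in available_actions(s_next)),
                    default=0.0)
    delta = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = max(Q.get((s, a), 0.0), Q.get((s, a), 0.0) + delta)
```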
The agent's state representation is encoded as a hash of the text provided by the game engine. Actions are comprised of verb/object pairs:
a = v + " " + o,  v ∈ V, o ∈ O   (3)
where V is the set of all English-language verbs and O is the set of all English-language nouns. To enable the agent to distinguish between state transitions and merely informational feedback, the agent executes a "look" command every second iteration and assumes that the resulting game text represents its new state. Some games append a summary of actions taken and points earned in response to each "look" command. To prevent this from obfuscating the state space, we stripped all numerals from the game text prior to hashing.
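A sketch of this state encoding:

```python
# Sketch: hash the game text after stripping numerals, so score
# counters appended by the game do not fragment the state space.
import hashlib
import re

def state_hash(game_text):
    stripped = re.sub(r"[0-9]", "", game_text)
    return hashlib.md5(stripped.encode("utf-8")).hexdigest()
```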
Given that the English language contains at least 20,000 verbs and 100,000 nouns in active use, a naive application of Q-learning is intractable. Some form of action-space reduction must be used. For our baseline comparison, we use an agent with a vocabulary consisting of the 1000 most common verbs in Wikipedia, an 11-word navigation list and a 6-word essential manipulation list as depicted in Algorithm 2. The navigation list contains words which, by convention, are used to navigate through text-based games. The essential manipulation list contains words which, again by convention, are generally applicable to all in-game objects.
The baseline agent does not use a fixed noun vocabulary. Instead, it extracts nouns from the game text using part-of-speech tags. To facilitate game interactions, the baseline agent augments its noun list using adjectives that precede them. For example, if the game text consisted of "You see a red pill and a blue pill", then the agent's noun list for that
Figure 5: Learning trajectories for sixteen Z-machine games. Agents played each game 1000 times, with 1000 game steps during each trial. No agent received any reward on the remaining 32 games. 10 data runs were averaged to create this plot.
state would be ["pill", "red pill", "blue pill"]. (And its next action is hopefully "swallow red pill").
In Sections 5.1 and 5.2 the baseline agent is contrasted with an agent using affordance extraction to reduce its manipulation list from 1000 verbs to a mere 30 verbs for each state, and to reduce its object list to a maximum of 15 nouns per state. We compare our approach to other search space reduction techniques and show that the a priori knowledge provided by affordance extraction enables the agent to achieve results which cannot be paralleled through brute force methods. All agents used epsilon-greedy exploration with a decaying epsilon.
The purpose of our research was to test the value of affordance-based search space reduction. Therefore, we did not add augmentations to address some of the more challenging aspects of text-based adventure games. Specifically, the agent maintained no representation of items carried in inventory or of the game score achieved thus far. The agent was also not given the ability to construct prepositional commands such as "put book on shelf" or "slay dragon with sword".
# 5 Results

We tested our agent on a suite of 50 text-based adventure games compatible with Infocom's Z-machine. These games represent a wide variety of situations, ranging from business scenarios like "Detective" to complex fictional worlds like "Zork: The Underground Empire". Significantly, the games provide little or no information about the agent's goals, or actions that might provide reward.

During training, the agent interacted with the game engine for 1000 epochs, with 1000 training steps in each epoch. On each game step, the agent received a positive reward corresponding to the change in game score. At the end of each epoch the game was restarted and the game score reset, but the agent retained its learned Q-values.

Our affordance-based search space reduction algorithms enabled the agent to score points on 16/50 games, with a peak performance (expressed as a percentage of maximum game score) of 23.40% for verb space reduction, 4.33% for object space reduction, and 31.45% when both methods were combined. The baseline agent (see Sec. 4.1) scored points on 12/50 games, with a peak performance of 4.45%. (Peak performance is defined as the maximum score achieved over all epochs, a metric that expresses the agent's ability to comb through the search space and discover areas of high reward.)

Two games experienced termination errors and were excluded from our subsequent analysis; however, our reduction methods outperformed the baseline in both peak performance and average reward in the discarded partial results.

Figures 5 and 7 show the performance of our reduction techniques when compared to the baseline. Affordance-based search space reduction improved overall performance on 12/16 games, and decreased performance on only 1 game.
Examination of the 32 games in which no agent scored points (and which are correspondingly not depicted in Figures 5 and 7) revealed three prevalent failure modes: (1) The game required prepositional commands such as "look at machine" or "give dagger to wizard", (2) The game provided points only after an unusually complex sequence of events, (3) The game required the user to infer the proper term for manipulable objects. (For example, the game might describe "something shiny" at the bottom of a lake, but required the agent to "get shiny object".) Our test framework was not designed to address these issues, and hence did not score points on those games. A fourth failure mode (4) might be the absence of a game-critical verb within the 1000-word manipulation list. However, this did not occur in our coarse examination of games that failed.
Affordant selection Random selection decorate glass open window add table generate quantity ring window weld glass travel passage climb staircase jump table
Figure 6: Sample exploration actions produced by a Q-learner with and without affordance detection. The random agent used nouns extracted from game text and a verb list comprising the 200 most common verbs in Wikipedia.
# 5.1 Alternate reduction methods

We compared our affordance-based reduction technique with four other approaches that seemed intuitively applicable to the test domain. Results are shown in Figure 7.
Intrinsic rewards: This approach guides the agent's exploration of the search space by allotting a small reward each time a new state is attained. We call these awards intrinsic because they are tied to the agent's assessment of its progress rather than to external events.
Random reduction: When applying search space reductions one must always ask: "Did improvements result from my specific choice of reduced space, or would any reduction be equally effective?" We address this question by randomly selecting 30 manipulation verbs to use during each epoch.
ConceptNet reduction: In this approach we used ConceptNet's CapableOf relation to obtain a list of verbs relevant to the current object. We then reduced the agent's manipulation list to include only words that were also in ConceptNet's word list (effectively taking the intersection of the two lists).

Co-occurrence reduction: In this method, we populated a co-occurrence dictionary using the 1000 most common verbs and 30,000 most common nouns in Wikipedia. The dictionary tracked the number of times each verb/noun pair occurred within a 9-word radius. During game play, the agent's manipulation list was reduced to include only words which exceeded a low threshold (co-occurrences > 3).
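A sketch of the co-occurrence baseline; the original's exact tokenization and windowing are not specified, so the literal 9-word radius below is one plausible reading.

```python
# Sketch: count verb/noun co-occurrences within a window, then keep
# only verbs whose count with the current noun exceeds the threshold.
from collections import Counter

def build_cooccurrence(tokens, verbs, nouns, radius=9):
    counts = Counter()
    for i, w in enumerate(tokens):
        if w in verbs:
            for u in tokens[max(0, i - radius): i + radius + 1]:
                if u in nouns:
                    counts[(w, u)] += 1
    return counts

def reduced_verbs(noun, verbs, counts, threshold=3):
    return [v for v in verbs if counts[(v, noun)] > threshold]
```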
Figure 7 shows the performance of these four algorithms, along with a baseline learner using a 1000-word manipulation list. Affordance-based verb selection improved performance in most games, but the other reduction techniques fell prey to a classic danger: they pruned precisely those actions which were essential to obtain reward.
# 5.2 Fixed-length vocabularies vs. Free-form learning
An interesting question arises from our research. What if, rather than beginning with a 1000-word vocabulary, the agent was free to search the entire English-language verb space?
A traditional learning agent could not do this: the space of possible verbs is too large. However, the Wikipedia knowledge base opens new opportunities. Using the action
Figure 7: Five verb space reduction techniques compared over 100 exploration epochs. Average of 5 data runs. Results were normalized for each game based on the maximum reward achieved by any agent.
selection mechanism described in Section 4.1, we allowed the agent to construct its own manipulation list for each state (see Section 3.1). The top 15 responses were unioned with the agent's navigation and essential manipulation lists, with actions selected randomly from that set.
A sampling of the agent's behavior is displayed in Figure 6, along with comparable action selections from the baseline agent described in Section 4.1. The free-form learner is able to produce actions that seem, not only reasonable, but also rather inventive when considered in the context of the game environment. We believe that further research in this direction may enable the development of one-shot learning for text-based adventure games.
# 6 Conclusion

The common sense knowledge implicitly encoded within Wikipedia opens new opportunities for autonomous agents. In this paper we have shown that previously intractable search spaces can be efficiently navigated when word embeddings are used to identify context-dependent affordances. In the domain of text-based adventure games, this approach is superior to several other intuitive methods.

Our initial experiments have been restricted to text-based environments, but the underlying principles apply to any domain in which mappings can be formed between words and objects. Steady advances in object recognition and semantic segmentation, combined with improved precision in robotic systems, suggest that our methods are applicable to systems including self-driving cars, domestic robots, and UAVs.

# 7 Acknowledgements

Our experiments were run using Autoplay, a learning environment for interactive fiction (https://github.com/danielricks/autoplay). We thank Nvidia, the Center for Unmanned Aircraft Systems, and Analog Devices, Inc. for their generous support.
| {
"id": "1611.00274"
} |
1703.01780 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 |
# Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen The Curious AI Company and Aalto University antti.tarvainen@aalto.fi Harri Valpola The Curious AI Company harri@cai.fi
# Abstract
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.
# Introduction
Deep learning has seen tremendous success in areas such as image and speech recognition. In order to learn useful abstractions, deep learning models require a large number of parameters, thus making them prone to over-fitting (Figure 1a). Moreover, adding high-quality labels to training data manually is often expensive. Therefore, it is desirable to use regularization methods that exploit unlabeled data effectively to reduce over-fitting in semi-supervised learning.

When a percept is changed slightly, a human typically still considers it to be the same object. Correspondingly, a classification model should favor functions that give consistent output for similar data points. One approach for achieving this is to add noise to the input of the model. To enable the model to learn more abstract invariances, the noise may be added to intermediate representations, an insight that has motivated many regularization techniques, such as Dropout [28]. Rather than minimizing the classification cost at the zero-dimensional data points of the input space, the regularized model minimizes the cost on a manifold around each data point, thus pushing decision boundaries away from the labeled data points (Figure 1b).

Since the classification cost is undefined for unlabeled examples, the noise regularization by itself does not aid in semi-supervised learning. To overcome this, the Γ model [21] evaluates each data point with and without noise, and then applies a consistency cost between the two predictions. In this case, the model assumes a dual role as a teacher and a student. As a student, it learns as before; as a teacher, it generates targets, which are then used by itself as a student for learning. Since the model itself generates targets, they may very well be incorrect. If too much weight is given to the generated targets, the cost of inconsistency outweighs that of misclassification, preventing the learning of new
Figure 1: A sketch of a binary classification task with two labeled examples (large blue dots) and one unlabeled example, demonstrating how the choice of the unlabeled target (black circle) affects the fitted function (gray curve). (a) A model with no regularization is free to fit any function that predicts the labeled training examples well. (b) A model trained with noisy labeled data (small dots) learns to give consistent predictions around labeled data points. (c) Consistency to noise around unlabeled examples provides additional smoothing. For the clarity of illustration, the teacher model (gray curve) is first fitted to the labeled examples, and then left unchanged during the training of the student model. Also for clarity, we will omit the small dots in figures d and e. (d) Noise on the teacher model reduces the bias of the targets without additional training. The expected direction of stochastic gradient descent is towards the mean (large blue circle) of individual noisy targets (small blue circles). (e) An ensemble of models gives an even better expected target. Both Temporal Ensembling and the Mean Teacher method use this approach.

information. In effect, the model suffers from confirmation bias (Figure 1c), a hazard that can be mitigated by improving the quality of targets.

There are at least two ways to improve the target quality. One approach is to choose the perturbation of the representations carefully instead of barely applying additive or multiplicative noise. Another approach is to choose the teacher model carefully instead of barely replicating the student model. Concurrently to our research, Miyato et al. [16] have taken the first approach and shown that Virtual Adversarial Training can yield impressive results. We take the second approach and will show that it too provides significant benefits. To our understanding, these two approaches are compatible, and their combination may produce even better outcomes. However, the analysis of their combined effects is outside the scope of this paper.

Our goal, then, is to form a better teacher model from the student model without additional training. As the first step, consider that the softmax output of a model does not usually provide accurate predictions outside training data. This can be partly alleviated by adding noise to the model at inference time [4], and consequently a noisy teacher can yield more accurate targets (Figure 1d). This approach was used in Pseudo-Ensemble Agreement [2] and has lately been shown to work well on semi-supervised image classification [13, 23]. Laine & Aila [13] named the method the Π model; we will use this name for it and their version of it as the basis of our experiments.

The Π model can be further improved by Temporal Ensembling [13], which maintains an exponential moving average (EMA) prediction for each of the training examples. At each training step, all the EMA predictions of the examples in that minibatch are updated based on the new predictions. Consequently, the EMA prediction of each example is formed by an ensemble of the model's current version and those earlier versions that evaluated the same example. This ensembling improves the quality of the predictions, and using them as the teacher predictions improves results. However, since each target is updated only once per epoch, the learned information is incorporated into the training process at a slow pace. The larger the dataset, the longer the span of the updates, and in the case of on-line learning, it is unclear how Temporal Ensembling can be used at all. (One could evaluate all the targets periodically more than once per epoch, but keeping the evaluation span constant would require O(n²) evaluations per epoch where n is the number of training examples.)
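As a sketch, the per-example targets of Temporal Ensembling can be maintained as follows, with the accumulation coefficient and startup bias correction in the spirit of Laine & Aila [13]; the names are illustrative.

```python
import numpy as np

def update_ensemble_targets(Z, idx, predictions, epoch, alpha=0.6):
    """Update the accumulated predictions Z for the examples in `idx`
    and return bias-corrected training targets for them."""
    Z[idx] = alpha * Z[idx] + (1.0 - alpha) * predictions
    return Z[idx] / (1.0 - alpha ** (epoch + 1))  # startup bias correction
```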
# 2 Mean Teacher
To overcome the limitations of Temporal Ensembling, we propose averaging model weights instead of predictions. Since the teacher model is an average of consecutive student models, we call this the Mean Teacher method (Figure 2). Averaging model weights over training steps tends to produce a
Figure 2: The Mean Teacher method. The figure depicts a training batch with a single labeled example. Both the student and the teacher model evaluate the input applying noise (η, η′) within their computation. The softmax output of the student model is compared with the one-hot label using classification cost and with the teacher output using consistency cost. After the weights of the student model have been updated with gradient descent, the teacher model weights are updated as an exponential moving average of the student weights. Both model outputs can be used for prediction, but at the end of the training the teacher prediction is more likely to be correct. A training step with an unlabeled example would be similar, except no classification cost would be applied.

more accurate model than using the final weights directly [19]. We can take advantage of this during training to construct better targets. Instead of sharing the weights with the student model, the teacher model uses the EMA weights of the student model. Now it can aggregate information after every step instead of every epoch. In addition, since the weight averages improve all layer outputs, not just the top output, the target model has better intermediate representations. These aspects lead to two practical advantages over Temporal Ensembling: First, the more accurate target labels lead to a faster feedback loop between the student and the teacher models, resulting in better test accuracy. Second, the approach scales to large datasets and on-line learning.

More formally, we define the consistency cost J as the expected distance between the prediction of the student model (with weights θ and noise η) and the prediction of the teacher model (with weights θ′ and noise η′):

J(\theta) = \mathbb{E}_{x,\eta',\eta}\left[ \left\lVert f(x, \theta', \eta') - f(x, \theta, \eta) \right\rVert^2 \right]

The difference between the Π model, Temporal Ensembling, and Mean Teacher is how the teacher predictions are generated. Whereas the Π model uses θ′ = θ, and Temporal Ensembling approximates f(x, θ′, η′) with a weighted average of successive predictions, we define θ′_t at training step t as the EMA of successive θ weights:

\theta'_t = \alpha \theta'_{t-1} + (1 - \alpha)\,\theta_t

where α is a smoothing coefficient hyperparameter. An additional difference between the three algorithms is that the Π model applies training to θ′ whereas Temporal Ensembling and Mean Teacher treat it as a constant with regards to optimization. We can approximate the consistency cost function J by sampling noise η, η′ at each training step with stochastic gradient descent. Following Laine & Aila [13], we use mean squared error (MSE) as the consistency cost in most of our experiments.
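A minimal PyTorch sketch of one such training step is given below; `augment` stands in for the input noise η, η′, and all names are illustrative rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def train_step(student, teacher, optimizer, augment,
               x, y, labeled_mask, consistency_weight, alpha=0.999):
    logits = student(augment(x))                 # student noise eta
    with torch.no_grad():
        teacher_logits = teacher(augment(x))     # teacher noise eta'

    classification = F.cross_entropy(logits[labeled_mask], y[labeled_mask])
    consistency = F.mse_loss(F.softmax(logits, dim=1),
                             F.softmax(teacher_logits, dim=1))
    loss = classification + consistency_weight * consistency

    optimizer.zero_grad()
    loss.backward()                              # gradients reach the student only
    optimizer.step()

    # theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)
```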
Table 1: Error rate percentage on SVHN over 10 runs (4 runs when using all labels). We use exponential moving average weights in the evaluation of all our models. All the methods use a similar 13-layer ConvNet architecture. See Table 5 in the Appendix for results without input augmentation.
| | 250 labels (73257 images) | 500 labels (73257 images) | 1000 labels (73257 images) | 73257 labels (73257 images) |
|---|---|---|---|---|
| GAN [25] | | 18.44 ± 4.8 | 8.11 ± 1.3 | |
| Π model [13] | | 6.65 ± 0.53 | 4.82 ± 0.17 | 2.54 ± 0.04 |
| Temporal Ensembling [13] | | 5.12 ± 0.13 | 4.42 ± 0.16 | 2.74 ± 0.06 |
| VAT+EntMin [16] | | | 3.86 | |
| Supervised-only | 27.77 ± 3.18 | 16.88 ± 1.30 | 12.32 ± 0.95 | 2.75 ± 0.10 |
| Π model | 9.69 ± 0.92 | 6.83 ± 0.66 | 4.95 ± 0.26 | 2.50 ± 0.07 |
| Mean Teacher | 4.35 ± 0.50 | 4.18 ± 0.27 | 3.95 ± 0.19 | 2.50 ± 0.05 |
Table 2: Error rate percentage on CIFAR-10 over 10 runs (4 runs when using all labels).
| | 1000 labels (50000 images) | 2000 labels (50000 images) | 4000 labels (50000 images) | 50000 labels (50000 images) |
|---|---|---|---|---|
| GAN [25] | | | 18.63 ± 2.32 | |
| Π model [13] | | | 12.36 ± 0.31 | 5.56 ± 0.10 |
| Temporal Ensembling [13] | | | 12.16 ± 0.31 | 5.60 ± 0.10 |
| VAT+EntMin [16] | | | 10.55 | |
| Supervised-only | 46.43 ± 1.21 | 33.94 ± 0.73 | 20.66 ± 0.57 | 5.82 ± 0.15 |
| Π model | 27.36 ± 1.20 | 18.02 ± 0.60 | 13.20 ± 0.27 | 6.06 ± 0.11 |
| Mean Teacher | 21.55 ± 1.48 | 15.73 ± 0.31 | 12.31 ± 0.28 | 5.94 ± 0.15 |
# 3 Experiments
To test our hypotheses, we first replicated the Π model [13] in TensorFlow [1] as our baseline. We then modified the baseline model to use weight-averaged consistency targets. The model architecture is a 13-layer convolutional neural network (ConvNet) with three types of noise: random translations and horizontal flips of the input images, Gaussian noise on the input layer, and dropout applied within the network. We use mean squared error as the consistency cost and ramp up its weight from 0 to its final value during the first 80 epochs. The details of the model and the training procedure are described in Appendix B.1.
# 3.1 Comparison to other methods on SVHN and CIFAR-10
We ran experiments using the Street View House Numbers (SVHN) and CIFAR-10 benchmarks [17]. Both datasets contain 32x32 pixel RGB images belonging to ten different classes. In SVHN, each example is a close-up of a house number, and the class represents the identity of the digit at the center of the image. In CIFAR-10, each example is a natural image belonging to a class such as horses, cats, cars and airplanes. SVHN contains 73257 training samples and 26032 test samples. CIFAR-10 consists of 50000 training samples and 10000 test samples.

Tables 1 and 2 compare the results against recent state-of-the-art methods. All the methods in the comparison use a similar 13-layer ConvNet architecture. Mean Teacher improves test accuracy over the Π model and Temporal Ensembling on semi-supervised SVHN tasks. Mean Teacher also improves results on CIFAR-10 over our baseline Π model.

The recently published version of Virtual Adversarial Training by Miyato et al. [16] performs even better than Mean Teacher on the 1000-label SVHN and the 4000-label CIFAR-10. As discussed in the introduction, VAT and Mean Teacher are complementary approaches. Their combination may yield better accuracy than either of them alone, but that investigation is beyond the scope of this paper.
Table 3: Error percentage over 10 runs on SVHN with extra unlabeled training data.
Πmodel (ours) Mean Teacher 500 labels 73257 images 6.83 ± 0.66 4.18 ± 0.27 4.18 ± 0.27 4.18 ± 0.27 500 labels 173257 images 4.49 ± 0.27 3.02 ± 0.16 3.02 ± 0.16 3.02 ± 0.16 500 labels 573257 images 3.26 ± 0.14 2.46 ± 0.06 2.46 ± 0.06 2.46 ± 0.06
[Figure 3 panels, left to right: 73257 images and labels; 73257 images and 500 labels; 573257 images and 500 labels. Curves show classification cost and classification error for the Π model and Mean Teacher.]
Figure 3: Smoothed classification cost (top) and classification error (bottom) of Mean Teacher and our baseline Π model on SVHN over the first 100000 training steps. In the upper row, the training classification costs are measured using only labeled data.
# 3.2 SVHN with extra unlabeled data
Above, we suggested that Mean Teacher scales well to large datasets and on-line learning. In addition, the SVHN and CIFAR-10 results indicate that it uses unlabeled examples efficiently. Therefore, we wanted to test whether we have reached the limits of our approach.

Besides the primary training data, SVHN also includes an extra dataset of 531131 examples. We picked 500 samples from the primary training set as our labeled training examples. We used the rest of the primary training set together with the extra training set as unlabeled examples. We ran experiments with Mean Teacher and our baseline Π model, and used either 0, 100000 or 500000 extra examples. Table 3 shows the results.
# 3.3 Analysis of the training curves
The training curves on Figure 3 help us understand the effects of using Mean Teacher. As expected, the EMA-weighted models (blue and dark gray curves in the bottom row) give more accurate predictions than the bare student models (orange and light gray) after an initial period.
Using the EMA-weighted model as the teacher improves results in the semi-supervised settings. There appears to be a virtuous feedback cycle of the teacher (blue curve) improving the student (orange) via the consistency cost, and the student improving the teacher via exponential moving averaging. If this feedback cycle is detached, the learning is slower, and the model starts to overfit earlier (dark gray and light gray).

Mean Teacher helps when labels are scarce. When using 500 labels (middle column) Mean Teacher learns faster, and continues training after the Π model stops improving. On the other hand, in the all-labeled case (left column), Mean Teacher and the Π model behave virtually identically.
[Figure 4: six panels (a)-(f) showing validation error as each hyperparameter is varied.]
Figure 4: Validation error on 250-label SVHN over four runs per hyperparameter setting and their means. In each experiment, we varied one hyperparameter, and used the evaluation run hyperparameters of Table 1 for the rest. The hyperparameter settings used in the evaluation runs are marked with the bolded font weight. See the text for details.
Mean Teacher uses unlabeled training data more efficiently than the Π model, as seen in the middle column. On the other hand, with 500k extra unlabeled examples (right column), the Π model keeps improving for longer. Mean Teacher learns faster, and eventually converges to a better result, but the sheer amount of data appears to offset the Π model's worse predictions.
# 3.4 Ablation experiments
To assess the importance of various aspects of the model, we ran experiments on SVHN with 250 labels, varying one or a few hyperparameters at a time while keeping the others fixed.

Removal of noise (Figures 4(a) and 4(b)). In the introduction and Figure 1, we presented the hypothesis that the Π model produces better predictions by adding noise to the model on both sides. But after the addition of Mean Teacher, is noise still needed? Yes. We can see that either input augmentation or dropout is necessary for passable performance. On the other hand, input noise does not help when augmentation is in use. Dropout on the teacher side provides only a marginal benefit over just having it on the student side, at least when input augmentation is in use.

Sensitivity to EMA decay and consistency weight (Figures 4(c) and 4(d)). The essential hyperparameters of the Mean Teacher algorithm are the consistency cost weight and the EMA decay α. How sensitive is the algorithm to their values? We can see that in each case the good values span roughly an order of magnitude and outside these ranges the performance degrades quickly. Note that EMA decay α = 0 makes the model a variation of the Π model, although a somewhat inefficient one, because the gradients are propagated through only the student path. Note also that in the evaluation runs we used EMA decay α = 0.99 during the ramp-up phase, and α = 0.999 for the rest of the training. We chose this strategy because the student improves quickly early in the training, and thus the teacher should forget the old, inaccurate student weights quickly. Later the student improvement slows, and the teacher benefits from a longer memory.

Decoupling classification and consistency (Figure 4(e)). The consistency to teacher predictions may not necessarily be a good proxy for the classification task, especially early in the training. So far our model has strongly coupled these two tasks by using the same output for both. How would decoupling the tasks change the performance of the algorithm? To investigate, we changed the model to have two top layers and produce two outputs. We then trained one of the outputs for classification and the other for consistency. We also added a mean squared error cost between the output logits, and then varied the weight of this cost, allowing us to control the strength of the coupling. Looking at the results (reported using the EMA version of the classification output), we can see that the strongly coupled version performs well and the too loosely coupled versions do not. On the other hand, a moderate decoupling seems to have the benefit of making the consistency ramp-up redundant.
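A sketch of this dual-output variant is below, with a hypothetical shared `trunk` module; the coupling weight is the quantity swept in Figure 4(e).

```python
import torch.nn as nn
import torch.nn.functional as F

class DualOutput(nn.Module):
    """Separate heads for classification and consistency, coupled only
    through a weighted MSE between their logits."""
    def __init__(self, trunk, num_features, num_classes):
        super().__init__()
        self.trunk = trunk
        self.class_head = nn.Linear(num_features, num_classes)  # classification cost
        self.cons_head = nn.Linear(num_features, num_classes)   # consistency cost

    def forward(self, x):
        h = self.trunk(x)
        return self.class_head(h), self.cons_head(h)

def coupling_cost(class_logits, cons_logits, weight):
    return weight * F.mse_loss(class_logits, cons_logits)
```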
Table 4: Error rate percentage of ResNet Mean Teacher compared to the state of the art. We report the test results from 10 runs on CIFAR-10 and validation results from 2 runs on ImageNet.
| | CIFAR-10 (4000 labels) | ImageNet 2012 (10% of the labels) |
|---|---|---|
| State of the art | 10.55 [16] | 35.24 ± 0.90 [20] |
| ConvNet Mean Teacher | 12.31 ± 0.28 | |
| ResNet Mean Teacher | 6.28 ± 0.15 | 9.11 ± 0.12 |
| State of the art using all labels | 2.86 [5] | 3.79 [10] |
Changing from MSE to KL-divergence (Figure 4(f)). Following Laine & Aila [13], we use mean squared error (MSE) as our consistency cost function, but KL-divergence would seem a more natural choice. Which one works better? We ran experiments with instances of a cost function family ranging from MSE (τ = 0 in the figure) to KL-divergence (τ = 1), and found out that in this setting MSE performs better than the other cost functions. See Appendix C for the details of the cost function family and for our intuition about why MSE performs so well.
# 3.5 Mean Teacher with residual networks on CIFAR-10 and ImageNet
In the experiments above, we used a traditional 13-layer convolutional architecture (ConvNet), which has the benefit of making comparisons to earlier work easy. In order to explore the effect of the model architecture, we ran experiments using a 12-block (26-layer) Residual Network [8] (ResNet) with Shake-Shake regularization [5] on CIFAR-10. The details of the model and the training procedure are described in Appendix B.2. As shown in Table 4, the results improve remarkably with the better network architecture.

To test whether the method scales to more natural images, we ran experiments on the ImageNet 2012 dataset [22] using 10% of the labels. We used a 50-block (152-layer) ResNeXt architecture [33], and saw a clear improvement over the state of the art. As the test set is not publicly available, we measured the results using the validation set.
# 4 Related work
Noise regularization of neural networks was proposed by Sietsma & Dow [26]. More recently, several types of perturbations have been shown to regularize intermediate representations effectively in deep learning. Adversarial Training [6] changes the input slightly to give predictions that are as different as possible from the original predictions. Dropout [28] zeroes random dimensions of layer outputs. Dropconnect [31] generalizes Dropout by zeroing individual weights instead of activations. Stochastic Depth [11] drops entire layers of residual networks, and Swapout [27] generalizes Dropout and Stochastic Depth. Shake-shake regularization [5] duplicates residual paths and samples a linear combination of their outputs independently during forward and backward passes.
Several semi-supervised methods are based on training the model predictions to be consistent to perturbation. The Denoising Source Separation framework (DSS) [29] uses denoising of latent variables to learn their likelihood estimate. The Γ variant of Ladder Network [21] implements DSS with a deep learning model for classification tasks. It produces noisy student predictions and clean teacher predictions, and applies a denoising layer to predict teacher predictions from the student predictions. The Π model [13] improves the Γ model by removing the explicit denoising layer and applying noise also to the teacher predictions. Similar methods had been proposed already earlier for linear models [30] and deep learning [2]. Virtual Adversarial Training [16] is similar to the Π model but uses adversarial perturbation instead of independent noise.
The idea of a teacher model training a student is related to model compression [3] and distillation [9]. The knowledge of a complicated model can be transferred to a simpler model by training the simpler model with the softmax outputs of the complicated model. The softmax outputs contain more information about the task than the one-hot outputs, and the requirement of representing this
knowledge regularizes the simpler model. Besides its use in model compression, distillation can be used to harden trained models against adversarial attacks [18]. The difference between distillation and consistency regularization is that distillation is performed after training whereas consistency regularization is performed at training time.

Consistency regularization can be seen as a form of label propagation [34]. Training samples that resemble each other are more likely to belong to the same class. Label propagation takes advantage of this assumption by pushing label information from each example to examples that are near it according to some metric. Label propagation can also be applied to deep learning models [32]. However, ordinary label propagation requires a predefined distance metric in the input space. In contrast, consistency targets employ a learned distance metric implied by the abstract representations of the model. As the model learns new features, the distance metric changes to accommodate these features. Therefore, consistency targets guide learning in two ways. On the one hand they spread the labels according to the current distance metric, and on the other hand, they help the network learn a better distance metric.
# 5 Conclusion
Temporal Ensembling, Virtual Adversarial Training and other forms of consistency regularization have recently shown their strength in semi-supervised learning. In this paper, we propose Mean Teacher, a method that averages model weights to form a target-generating teacher model. Unlike Temporal Ensembling, Mean Teacher works with large datasets and on-line learning. Our experiments suggest that it improves the speed of learning and the classification accuracy of the trained network. In addition, it scales well to state-of-the-art architectures and large image sizes.
The success of consistency regularization depends on the quality of teacher-generated targets. If the targets can be improved, they should be. Mean Teacher and Virtual Adversarial Training represent two ways of exploiting this principle. Their combination may yield even better targets. There are probably additional methods to be uncovered that improve targets and trained models even further.
# Acknowledgements
We thank Samuli Laine and Timo Aila for fruitful discussions about their work, Phil Bachman, Colin Raffel, and Thomas Robert for noticing errors in the previous versions of this paper and everyone at The Curious AI Company for their help, encouragement, and ideas.
# References
[1] Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015.
[2] Bachman, Philip, Alsharif, Ouais, and Precup, Doina. Learning with Pseudo-Ensembles. arXiv:1412.4864 [cs, stat], December 2014. arXiv: 1412.4864.
[3] Buciluǎ, Cristian, Caruana, Rich, and Niculescu-Mizil, Alexandru. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535–541. ACM, 2006.

[4] Gal, Yarin and Ghahramani, Zoubin. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1050–1059, 2016.
[5] Gastaldi, Xavier. Shake-Shake regularization. arXiv:1705.07485 [cs], May 2017. arXiv: 1705.07485.
[6] Goodfellow, Ian J., Shlens, Jonathon, and Szegedy, Christian. Explaining and Harnessing Adversarial Examples. December 2014. arXiv: 1412.6572.
[7] Guo, Chuan, Pleiss, Geoff, Sun, Yu, and Weinberger, Kilian Q. On Calibration of Modern Neural Networks. arXiv:1706.04599 [cs], June 2017. arXiv: 1706.04599.
[8] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs], December 2015. arXiv: 1512.03385.
[9] Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the Knowledge in a Neural Network. arXiv:1503.02531 [cs, stat], March 2015. arXiv: 1503.02531.
[10] Hu, Jie, Shen, Li, and Sun, Gang. Squeeze-and-Excitation Networks. arXiv:1709.01507 [cs], September 2017. arXiv: 1709.01507.
[11] Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep Networks with Stochastic Depth. arXiv:1603.09382 [cs], March 2016. arXiv: 1603.09382.
[12] Kingma, Diederik and Ba, Jimmy. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], December 2014. arXiv: 1412.6980.
[13] Laine, Samuli and Aila, Timo. Temporal Ensembling for Semi-Supervised Learning. arXiv:1610.02242 [cs], October 2016. arXiv: 1610.02242.
[14] Loshchilov, Ilya and Hutter, Frank. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv:1608.03983 [cs, math], August 2016. arXiv: 1608.03983.
[15] Maas, Andrew L., Hannun, Awni Y., and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.

[16] Miyato, Takeru, Maeda, Shin-ichi, Koyama, Masanori, and Ishii, Shin. Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning. arXiv:1704.03976 [cs, stat], April 2017. arXiv: 1704.03976.
[17] Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[18] Papernot, Nicolas, McDaniel, Patrick, Wu, Xi, Jha, Somesh, and Swami, Ananthram. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. arXiv:1511.04508 [cs, stat], November 2015. arXiv: 1511.04508.
[19] Polyak, B. T. and Juditsky, A. B. Acceleration of Stochastic Approximation by Averaging. SIAM J. Control Optim., 30(4):838–855, July 1992. ISSN 0363-0129. doi: 10.1137/0330046.
[20] Pu, Yunchen, Gan, Zhe, Henao, Ricardo, Yuan, Xin, Li, Chunyuan, Stevens, Andrew, and Carin, Lawrence. Variational Autoencoder for Deep Learning of Images, Labels and Captions. arXiv:1609.08976 [cs, stat], September 2016. arXiv: 1609.08976.
[21] Rasmus, Antti, Berglund, Mathias, Honkala, Mikko, Valpola, Harri, and Raiko, Tapani. Semi-supervised Learning with Ladder Networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28, pp. 3546–3554. Curran Associates, Inc., 2015.

[22] Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs], September 2014. arXiv: 1409.0575.

[23] Sajjadi, Mehdi, Javanmardi, Mehran, and Tasdizen, Tolga. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 1163–1171. Curran Associates, Inc., 2016.
[24] Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–901, 2016.

[25] Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pp. 2226–2234, 2016.

[26] Sietsma, Jocelyn and Dow, Robert JF. Creating artificial neural networks that generalize. Neural networks, 4(1):67–79, 1991.
[27] Singh, Saurabh, Hoiem, Derek, and Forsyth, David. Swapout: Learning an ensemble of deep architectures. arXiv:1605.06465 [cs], May 2016. arXiv: 1605.06465.
[28] Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014. ISSN 1532-4435.

[29] Särelä, Jaakko and Valpola, Harri. Denoising Source Separation. Journal of Machine Learning Research, 6(Mar):233–272, 2005. ISSN 1533-7928.
[30] Wager, Stefan, Wang, Sida, and Liang, Percy. Dropout Training as Adaptive Regularization. arXiv:1307.1493 [cs, stat], July 2013. arXiv: 1307.1493.
[31] Wan, Li, Zeiler, Matthew, Zhang, Sixin, Le Cun, Yann, and Fergus, Rob. Regularization of Neural Networks using DropConnect. pp. 1058–1066, 2013.

[32] Weston, Jason, Ratle, Frédéric, Mobahi, Hossein, and Collobert, Ronan. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012.
[33] Xie, Saining, Girshick, Ross, Dollár, Piotr, Tu, Zhuowen, and He, Kaiming. Aggregated Residual Transformations for Deep Neural Networks. arXiv:1611.05431 [cs], November 2016. arXiv: 1611.05431.
[34] Zhu, Xiaojin and Ghahramani, Zoubin. Learning from labeled and unlabeled data with label propagation. 2002.
# Appendix
# A Results without input augmentation
See Table 5 for the results without input augmentation.

Table 5: Error rate percentage on SVHN and CIFAR-10 over 10 runs, including the results without input augmentation. We use exponential moving average weights in the evaluation of all our models. All the comparison methods use a 13-layer ConvNet architecture similar to ours and augmentation similar to ours, except GAN, which does not use augmentation.

| SVHN | 250 labels | 500 labels | 1000 labels | all labels^a |
|---|---|---|---|---|
| GAN [25] | | 18.44 ± 4.8 | 8.11 ± 1.3 | |
| Π model [13] | | 6.65 ± 0.53 | 4.82 ± 0.17 | 2.54 ± 0.04 |
| Temporal Ensembling [13] | | 5.12 ± 0.13 | 4.42 ± 0.16 | 2.74 ± 0.06 |
| VAT+EntMin [16] | | | 3.86 | |
| Supervised-only^e | 27.77 ± 3.18 | 16.88 ± 1.30 | 12.32 ± 0.95 | 2.75 ± 0.10 |
| Π model | 9.69 ± 0.92 | 6.83 ± 0.66 | 4.95 ± 0.26 | 2.50 ± 0.07 |
| Mean Teacher | 4.35 ± 0.50 | 4.18 ± 0.27 | 3.95 ± 0.19 | 2.50 ± 0.05 |
| Without augmentation: | | | | |
| Supervised-only^e | 36.26 ± 3.83 | 19.68 ± 1.03 | 14.15 ± 0.87 | 3.04 ± 0.04 |
| Π model | 10.36 ± 0.94 | 7.01 ± 0.29 | 5.73 ± 0.16 | 2.75 ± 0.08 |
| Mean Teacher | 5.85 ± 0.62 | 5.45 ± 0.14 | 5.21 ± 0.21 | 2.77 ± 0.09 |

| CIFAR-10 | 1000 labels | 2000 labels | 4000 labels | all labels^a |
|---|---|---|---|---|
| GAN [25] | | | 18.63 ± 2.32 | |
| Π model [13] | | | 12.36 ± 0.31 | 5.56 ± 0.10 |
| Temporal Ensembling [13] | | | 12.16 ± 0.31 | 5.60 ± 0.10 |
| VAT+EntMin [16] | | | 10.55 | |
| Supervised-only^e | 46.43 ± 1.21 | 33.94 ± 0.73 | 20.66 ± 0.57 | 5.82 ± 0.15 |
| Π model | 27.36 ± 1.20 | 18.02 ± 0.60 | 13.20 ± 0.27 | 6.06 ± 0.11 |
| Mean Teacher | 21.55 ± 1.48 | 15.73 ± 0.31 | 12.31 ± 0.28 | 5.94 ± 0.15 |
| Mean Teacher ResNet | 10.08 ± 0.41 | | 6.28 ± 0.15 | |
| Without augmentation: | | | | |
| Supervised-only^e | 48.38 ± 1.07 | 36.07 ± 0.90 | 24.47 ± 0.50 | 7.43 ± 0.06 |
| Π model | 32.18 ± 1.33 | 23.92 ± 1.07 | 17.08 ± 0.32 | 7.00 ± 0.20 |
| Mean Teacher | 30.62 ± 1.13 | 23.14 ± 0.46 | 17.74 ± 0.30 | 7.21 ± 0.24 |

^a 4 runs. ^e Only labeled examples and only classification cost.
# B Experimental setup
Source code for the experiments is available at https://github.com/CuriousAI/mean-teacher.
# B.1 Convolutional network models
We replicated the Π model of Laine & Aila [13] in TensorFlow [1], and added support for Mean Teacher training. We modified the model slightly to match the requirements of the experiments, as described in subsections B.1.1 and B.1.2. The difference between the original Π model described by Laine & Aila [13] and our baseline Π model thus depends on the experiment. The difference between
Table 6: The convolutional network architecture we used in the experiments.
| Layer | Details |
|---|---|
| Input | |
| Translation | |
| Horizontal flip^a | Randomly p = 0.5 |
| Gaussian noise | σ = 0.15 |
| Convolutional | 128 filters, 3 × 3, same padding |
| Convolutional | 128 filters, 3 × 3, same padding |
| Convolutional | 128 filters, 3 × 3, same padding |
| Pooling | Maxpool 2 × 2 |
| Dropout | p = 0.5 |
| Convolutional | 256 filters, 3 × 3, same padding |
| Convolutional | 256 filters, 3 × 3, same padding |
| Convolutional | 256 filters, 3 × 3, same padding |
| Pooling | Maxpool 2 × 2 |
| Dropout | p = 0.5 |
| Convolutional | 512 filters, 3 × 3, valid padding |
| Convolutional | 256 filters, 1 × 1, same padding |
| Convolutional | 128 filters, 1 × 1, same padding |
| Pooling | Average pool (6 × 6 → 1 × 1 pixels) |
| Softmax | Fully connected 128 → 10 |
a Not applied on SVHN experiments
our baseline Π model and our Mean Teacher model is whether the teacher weights are identical to the student weights or an EMA of the student weights. In addition, the Π models (both the original and ours) backpropagate gradients to both sides of the model whereas Mean Teacher applies them only to the student side.

Table 6 describes the architecture of the convolutional network. We applied mean-only batch normalization and weight normalization [24] on convolutional and softmax layers. We used Leaky ReLU [15] with α = 0.1 as the nonlinearity on each of the convolutional layers.

We used cross-entropy between the student softmax output and the one-hot label as the classification cost, and the mean square error between the student and teacher softmax outputs as the consistency cost. The total cost was the weighted sum of these costs, where the weight of the classification cost was the expected number of labeled examples per minibatch, subject to the ramp-ups described below.

We trained the network with minibatches of size 100. We used the Adam Optimizer [12] for training with learning rate 0.003 and parameters β1 = 0.9, β2 = 0.999, and ε = 10^-8. In our baseline Π model we applied gradients through both the teacher and student sides of the network. In the Mean Teacher model, the teacher model parameters were updated after each training step using an EMA with α = 0.999. These hyperparameters were subject to the ramp-ups and ramp-downs described below.
We applied a ramp-up period of 40000 training steps at the beginning of training. The consistency cost coefficient and the learning rate were ramped up from 0 to their maximum values, using the sigmoid-shaped function e^{-5(1-x)^2}, where x ∈ [0, 1].
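For concreteness, a minimal sketch of this ramp-up factor:

```python
import numpy as np

def sigmoid_rampup(step, rampup_steps=40000):
    """e^{-5 (1 - x)^2} ramp, with x the training progress clipped to [0, 1]."""
    x = np.clip(step / rampup_steps, 0.0, 1.0)
    return float(np.exp(-5.0 * (1.0 - x) ** 2))
```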
We used different training settings in different experiments. In the CIFAR-10 experiment, we matched the settings of Laine & Aila [13] as closely as possible. In the SVHN experiments, we diverged from Laine & Aila [13] to accommodate the sparsity of labeled data. Table 7 summarizes the differences between our experiments.
# B.1.1 ConvNet on CIFAR-10
We normalized the input images with ZCA based on training set statistics.
For sampling minibatches, the labeled and unlabeled examples were treated equally, and thus the number of labeled examples varied from minibatch to minibatch.
We applied a ramp-down for the last 25000 training steps. The learning rate coefficient was ramped down to 0 from its maximum value. Adam β1 was ramped down to 0.5 from its maximum value. The ramp-downs were performed using the sigmoid-shaped function 1 - e^{-12.5x^2}, where x ∈ [0, 1]. These ramp-downs did not improve the results, but were used to stay as close as possible to the settings of Laine & Aila [13].
# B.1.2 ConvNet on SVHN
We normalized the input images to have zero mean and unit variance.
When doing semi-supervised training, we used 1 labeled example and 99 unlabeled examples in each mini-batch. This was important to speed up training when using extra unlabeled data. After all labeled examples had been used, they were shuffled and reused. Similarly, after all unlabeled examples had been used, they were shuffled and reused.
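A sketch of this two-stream batching, with illustrative names:

```python
import random

def shuffled_stream(examples):
    """Yield examples forever, reshuffling after each pass."""
    while True:
        random.shuffle(examples)
        yield from examples

def minibatches(labeled, unlabeled, n_labeled=1, batch_size=100):
    lab = shuffled_stream(list(labeled))
    unlab = shuffled_stream(list(unlabeled))
    while True:
        batch = [next(lab) for _ in range(n_labeled)]
        batch += [next(unlab) for _ in range(batch_size - n_labeled)]
        yield batch
```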
We applied different values for Adam β2 and the EMA decay rate during the ramp-up period and the rest of the training. Both of the values were 0.99 during the first 40000 steps, and 0.999 afterwards. This helped the 250-label case converge reliably.
We trained the network for 180000 steps when not using extra unlabeled examples, for 400000 steps when using 100k extra unlabeled examples, and for 600000 steps when using 500k extra unlabeled examples.
# B.1.3 The baseline ConvNet models
For training the supervised-only and Π model baselines we used the same hyperparameters as for training the Mean Teacher, except we stopped training earlier to prevent over-fitting. For supervised-only runs we did not include any unlabeled examples and did not apply the consistency cost.
We trained the supervised-only model on CIFAR-10 for 7500 steps when using 1000 images, for 15000 steps when using 2000 images, for 30000 steps when using 4000 images and for 150000 steps when using all images. We trained it on SVHN for 40000 steps when using 250, 500 or 1000 labels, and for 180000 steps when using all labels.
We trained the Π model on CIFAR-10 for 60000 steps when using 1000 labels, for 100000 steps when using 2000 labels, and for 180000 steps when using 4000 labels or all labels. We trained it on SVHN for 100000 steps when using 250 labels, and for 180000 steps when using 500, 1000, or all labels.
# B.2 Residual network models
We implemented our residual network experiments in PyTorch1. We used different architectures for our CIFAR-10 and ImageNet experiments.
# B.2.1 ResNet on CIFAR-10
For CIFAR-10, we replicated the 26-2x96d Shake-Shake regularized architecture described in [5], consisting of 4+4+4 residual blocks.

We trained the network on 4 GPUs using minibatches of 512 images, 124 of which were labeled. We sampled the images in the same way as described in the SVHN experiments above. We augmented the input images with 4x4 random translations (reflecting the pixels at borders when necessary) and random horizontal flips. (Note that following [5] we used a larger translation size than in our earlier experiments.) We normalized the images to have channel-wise zero mean and unit variance over the training data.

We trained the network using stochastic gradient descent with initial learning rate 0.2 and Nesterov momentum 0.9. We trained for 180 epochs (when training with 1000 labels) or 300 epochs (when training with 4000 labels), decaying the learning rate with cosine annealing [14] so that it would have reached zero after 210 epochs (when 1000 labels) or 350 epochs (when 4000 labels).
# 1https://github.com/pytorch/pytorch
Table 7: Differences in training settings between the ConvNet experiments
| Aspect | semi-supervised SVHN | all-labeled SVHN | CIFAR-10 |
|---|---|---|---|
| image pre-processing | zero mean, unit variance | zero mean, unit variance | ZCA |
| image augmentation | translation | translation | translation + horizontal flip |
| number of labeled examples per minibatch | 1 | 100 | varying |
| training steps | 180000-600000 | 180000 | 150000 |
| Adam β2 during and after ramp-up | 0.99, 0.999 | 0.99, 0.999 | 0.999, 0.999 |
| EMA decay rate during and after ramp-up | 0.99, 0.999 | 0.99, 0.999 | 0.999, 0.999 |
| Ramp-downs | No | No | Yes |
We define epoch as one pass through all the unlabeled examples; each labeled example was included many times in one such epoch.
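As a sketch, the truncated cosine schedule looks like this, where `steps_to_zero` corresponds to the 210- or 350-epoch horizon while training stops earlier:

```python
import math

def cosine_annealed_lr(step, steps_to_zero, base_lr=0.2):
    """Half-period cosine decay toward zero; training halts before zero is reached."""
    x = min(step / steps_to_zero, 1.0)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * x))
```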
We used a total cost function consisting of the classification cost and three other costs: We used the dual output trick described in subsection 3.4 and Figure 4(e) with an MSE cost between logits with coefficient 0.01. This simplified other hyperparameter choices and improved the results. We used an MSE consistency cost with a coefficient ramping up from 0 to 100.0 during the first 5 epochs, using the same sigmoid ramp-up shape as in the experiments above. We also used an L2 weight decay with coefficient 2e-4. We used EMA decay value 0.97 (when 1000 labels) or 0.99 (when 4000 labels).
# B.2.2 ResNet on ImageNet
On our ImageNet evaluation runs, we used a 152-layer ResNeXt architecture [33] consisting of 3+8+36+3 residual blocks, with 32 groups of 4 channels on the first block.

We trained the network on 10 GPUs using minibatches of 400 images, 200 of which were labeled. We sampled the images in the same way as described in the SVHN experiments above. Following [10], we randomly augmented images using a 10 degree rotation, a crop with aspect ratio between 3/4 and 4/3 resized to 224x224 pixels, a random horizontal flip and a color jitter. We then normalized images to have channel-wise zero mean and unit variance over training data.

We trained the network using stochastic gradient descent with maximum learning rate 0.25 and Nesterov momentum 0.9. We ramped up the learning rate linearly during the first two epochs from 0.1 to 0.25. We trained for 60 epochs, decaying the learning rate with cosine annealing so that it would have reached zero after 75 epochs.

We used a total cost function consisting of the classification cost and three other costs: We used the dual output trick described in subsection 3.4 and Figure 4(e) with an MSE cost between logits with coefficient 0.01. We used a KL-divergence consistency cost with a coefficient ramping up from 0 to 10.0 during the first 5 epochs, using the same sigmoid ramp-up shape as in the experiments above. We also used an L2 weight decay with coefficient 5e-5. We used EMA decay value 0.9997.
Figure 5: Copy of Figure 4(f) in the main text. Validation error on 250-label SVHN over four runs and their mean, when varying the consistency cost shape hyperparameter τ between mean squared error (τ = 0) and KL-divergence (τ = 1).
# B.3 Use of training, validation and test data
In the development phase of our work with CIFAR-10 and SVHN datasets, we separated 10% of training data into a validation set. We removed randomly most of the labels from the remaining training data, retaining an equal number of labels from each class. We used a different set of labels for each of the evaluation runs. We retained labels in the validation set to enable exploration of the results. In the final evaluation phase we used the entire training set, including the validation set but with labels removed.

On a real-world use case we would not possess a large fully-labeled validation set. However, this setup is useful in a research setting, since it enables a more thorough analysis of the results. To the best of our knowledge, this is the common practice when carrying out research on semi-supervised learning. By retaining the hyperparameters from previous work where possible we decreased the chance of over-fitting our results to validation labels.

In the ImageNet experiments we removed randomly most of the labels from the training set, retaining an equal number of labels from each class. For validation we used the given validation set without modifications. We used a different set of training labels for each of the evaluation runs and evaluated the results against the validation set.
# C Varying between mean squared error and KL-divergence
As mentioned in subsection 3.4, we ran an experiment varying the consistency cost function between MSE and KL-divergence (reproduced in Figure 5). The exact consistency function we used was
C_\tau(p, q) = Z_\tau D_{KL}(p_\tau \,\|\, q_\tau), \quad \text{where} \quad Z_\tau = \frac{2}{\tau^2 N}, \quad p_\tau = \tau p + \frac{1-\tau}{N}, \quad q_\tau = \tau q + \frac{1-\tau}{N},

τ ∈ (0, 1] and N is the number of classes. Taking the Taylor expansion we get

D_{KL}(p_\tau \,\|\, q_\tau) = \sum_i \frac{\tau^2 N}{2} (p_i - q_i)^2 + O(N^2 \tau^3)

where the zeroth- and first-order terms vanish. Consequently,

C_\tau(p, q) \to \sum_i (p_i - q_i)^2 \quad \text{when } \tau \to 0, \qquad C_\tau(p, q) = \frac{2}{N} D_{KL}(p \,\|\, q) \quad \text{when } \tau = 1.
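A small NumPy sketch of this family, using the definition above; the epsilon guard is our own addition for numerical safety, and p, q are softmax outputs over N classes.

```python
import numpy as np

def c_tau(p, q, tau, eps=1e-8):
    """Consistency cost interpolating between MSE-like (tau -> 0)
    and KL-divergence-like (tau = 1) behaviour."""
    n = p.shape[-1]
    p_t = tau * p + (1.0 - tau) / n
    q_t = tau * q + (1.0 - tau) / n
    kl = np.sum(p_t * np.log((p_t + eps) / (q_t + eps)), axis=-1)
    return (2.0 / (tau ** 2 * n)) * kl
```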
The results in Figure 5 show that MSE performs better than KL-divergence or C_τ with any intermediate τ. We also tried other consistency cost weights with KL-divergence and did not reach the accuracy of MSE.
15
The exact reason why MSE performs better than KL-divergence remains unclear, but the form of C_τ may help explain it. Modern neural network architectures tend to produce accurate but overly confident predictions [7]. We can assume that the true labels are accurate, but we should discount the confidence of the teacher predictions. We can do that by having τ = 1 for the classification cost and τ < 1 for the consistency cost. Then p_τ and q_τ discount the confidence of the approximations while Z_τ keeps gradients large enough to provide a useful training signal. However, we did not perform experiments to validate this explanation.
| {
"id": "1706.04599"
} |
1703.01041 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 |
# Large-Scale Evolution of Image Classiï¬ers
# Esteban Real 1 Sherry Moore 1 Andrew Selle 1 Saurabh Saxena 1 Yutaka Leon Suematsu 2 Jie Tan 1 Quoc V. Le 1 Alexey Kurakin 1
Abstract Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6% (95.6% for ensemble) and 77.0%, respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.
# 1. Introduction
Neural networks can successfully perform difficult tasks where large amounts of training data are available (He et al., 2015; Weyand et al., 2016; Silver et al., 2016; Wu et al., 2016). Discovering neural network architectures, however, remains a laborious task. Even within the specific problem of image classification, the state of the art was attained through many years of focused investigation by hundreds of researchers (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); Szegedy et al. (2015); He et al. (2016); Huang et al. (2016a), among many others).
It is therefore not surprising that in recent years, techniques to automatically discover these architectures have been gaining popularity (Bergstra & Bengio, 2012; Snoek et al., 2012; Han et al., 2015; Baker et al., 2016; Zoph & Le, 2016). One of the earliest such "neuro-discovery" methods was neuro-evolution (Miller et al., 1989; Stanley & Miikkulainen, 2002; Stanley, 2007; Bayer et al., 2009; Stanley et al., 2009; Breuel & Shafait, 2010; Pugh & Stanley, 2013; Kim & Rigazio, 2015; Zaremba, 2015; Fernando et al., 2016; Morse & Stanley, 2016). Despite the promising results, the deep learning community generally perceives evolutionary algorithms to be incapable of matching the accuracies of hand-designed models (Verbancsics & Harguess, 2013; Baker et al., 2016; Zoph & Le, 2016). In this paper, we show that it is possible to evolve such competitive models today, given enough computational power.
We used slightly-modified known evolutionary algorithms and scaled up the computation to unprecedented levels, as far as we know. This, together with a set of novel and intuitive mutation operators, allowed us to reach competitive accuracies on the CIFAR-10 dataset. This dataset was chosen because it requires large networks to reach high accuracies, thus presenting a computational challenge. We also took a small first step toward generalization and evolved networks on the CIFAR-100 dataset. In transitioning from CIFAR-10 to CIFAR-100, we did not modify any aspect or parameter of our algorithm. Our typical neuro-evolution outcome on CIFAR-10 had a test accuracy with µ = 94.1%, σ = 0.4% @ 9×10^19 FLOPs, and our top model (by validation accuracy) had a test accuracy of 94.6% @ 4×10^20 FLOPs. Ensembling the validation-top 2 models from each population reaches a test accuracy of 95.6%, at no additional training cost. On CIFAR-100, our single experiment resulted in a test accuracy of 77.0% @ 2×10^20 FLOPs. As far as we know, these are the most accurate results obtained on these datasets by automated discovery methods that start from trivial initial conditions.
1Google Brain, Mountain View, California, USA 2Google Research, Mountain View, California, USA. Correspondence to: Esteban Real <ereal@google.com>.
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
Throughout this study, we placed special emphasis on the simplicity of the algorithm. In particular, it is a "one-shot" technique, producing a fully trained neural network requiring no post-processing. It also has few impactful meta-parameters (i.e. parameters not optimized by the algorithm). Starting out with poor-performing models with
Table 1. Comparison with single-model hand-designed architectures. The "C10+" and "C100+" columns indicate the test accuracy on the data-augmented CIFAR-10 and CIFAR-100 datasets, respectively. The "Reachable?" column denotes whether the given hand-designed model lies within our search space. An entry of "–" indicates that no value was reported. The † indicates a result reported by Huang et al. (2016b) instead of the original author. Much of this table was based on that presented in Huang et al. (2016a).
| Study | Params. | C10+ | C100+ | Reachable? |
|---|---|---|---|---|
| Maxout (Goodfellow et al., 2013) | – | 90.7% | 61.4% | No |
| Network in Network (Lin et al., 2013) | – | 91.2% | – | No |
| All-CNN (Springenberg et al., 2014) | 1.3 M | 92.8% | 66.3% | Yes |
| Deeply Supervised (Lee et al., 2015) | – | 92.0% | 65.4% | No |
| Highway (Srivastava et al., 2015) | 2.3 M | 92.3% | 67.6% | No |
| ResNet (He et al., 2016) | 1.7 M | 93.4% | 72.8%† | Yes |
| Evolution (ours) | 5.4 M / 40.4 M | 94.6% | 77.0% | N/A |
| Wide ResNet 28-10 (Zagoruyko & Komodakis, 2016) | 36.5 M | 96.0% | 80.0% | Yes |
| Wide ResNet 40-10+d/o (Zagoruyko & Komodakis, 2016) | 50.7 M | 96.2% | 81.7% | No |
| DenseNet (Huang et al., 2016a) | 25.6 M | 96.7% | 82.8% | No |
no convolutions, the algorithm must evolve complex convolutional neural networks while navigating a fairly unrestricted search space: no fixed depth, arbitrary skip connections, and numerical parameters that have few restrictions on the values they can take. We also paid close attention to result reporting. Namely, we present the variability in our results in addition to the top value, we account for researcher degrees of freedom (Simmons et al., 2011), we study the dependence on the meta-parameters, and we disclose the amount of computation necessary to reach the main results. We are hopeful that our explicit discussion of computation cost could spark more study of efficient model search and training. Studying model performance normalized by computational investment allows consideration of economic concepts like opportunity cost.
# 2. Related Work
Neuro-evolution dates back many years (Miller et al., 1989), originally being used only to evolve the weights of a fixed architecture. Stanley & Miikkulainen (2002) showed that it was advantageous to simultaneously evolve the architecture using the NEAT algorithm. NEAT has three kinds of mutations: (i) modify a weight, (ii) add a connection between existing nodes, or (iii) insert a node while splitting an existing connection. It also has a mechanism for recombining two models into one and a strategy to promote diversity known as fitness sharing (Goldberg et al., 1987). Evolutionary algorithms represent the models using an encoding that is convenient for their purpose, analogous to nature's DNA. NEAT uses a direct encoding: every node and every connection is stored in the DNA. The alternative paradigm, indirect encoding, has been the subject of much neuro-evolution research (Gruau, 1993; Stanley et al., 2009; Pugh & Stanley, 2013; Kim & Rigazio, 2015; Fernando et al., 2016). For example, the CPPN (Stanley, 2007; Stanley et al., 2009) allows for the evolution of repeating features at different scales. Also, Kim & Rigazio (2015) use an indirect encoding to improve the convolution filters in an initially highly-optimized fixed architecture.

Research on weight evolution is still ongoing (Morse & Stanley, 2016) but the broader machine learning community defaults to back-propagation for optimizing neural network weights (Rumelhart et al., 1988). Back-propagation and evolution can be combined as in Stanley et al. (2009), where only the structure is evolved. Their algorithm follows an alternation of architectural mutations and weight back-propagation. Similarly, Breuel & Shafait (2010) use this approach for hyper-parameter search. Fernando et al. (2016) also use back-propagation, allowing the trained weights to be inherited through the structural modifications.

The above studies create neural networks that are small in comparison to the typical modern architectures used for image classification (He et al., 2016; Huang et al., 2016a). Their focus is on the encoding or the efficiency of the evolutionary process, but not on the scale. When it comes to images, some neuro-evolution results reach the computational scale required to succeed on the MNIST dataset (LeCun et al., 1998). Yet, modern classifiers are often tested on realistic images, such as those in the CIFAR datasets (Krizhevsky & Hinton, 2009), which are much more challenging. These datasets require large models to achieve high accuracy.

Non-evolutionary neuro-discovery methods have been more successful at tackling realistic image data. Snoek et al. (2012) used Bayesian optimization to tune 9 hyper-parameters for a fixed-depth architecture, reaching
Table 2. Comparison with automatically discovered architectures. The "C10+" and "C100+" columns contain the test accuracy on the data-augmented CIFAR-10 and CIFAR-100 datasets, respectively. An entry of "–" indicates that the information was not reported or is not known to us. For Zoph & Le (2016), we quote the result with the most similar search space to ours, as well as their best result. Please refer to Table 1 for hand-designed results, including the state of the art. "Discrete params." means that the parameters can be picked from a handful of values only (e.g. strides ∈ {1, 2, 4}).
| Study | Starting point | Constraints | Post-processing | Params. | C10+ | C100+ |
|---|---|---|---|---|---|---|
| Bayesian (Snoek et al., 2012) | 3 layers | fixed architecture, no skips | none | – | 90.5% | – |
| Q-learning (Baker et al., 2016) | – | discrete params., max. num. layers, no skips | tune, retrain | 11.2 M | 93.1% | 72.9% |
| RL (Zoph & Le, 2016) | 20 layers, 50% skips | discrete params., exactly 20 layers | small grid search, retrain | 2.5 M | 94.0% | – |
| RL (Zoph & Le, 2016) | 39 layers, 2 pool layers at 13 and 26, 50% skips | discrete params., exactly 39 layers, 2 pool layers at 13 and 26 | add more filters, small grid search, retrain | 37.0 M | 96.4% | – |
| Evolution (ours) | single layer, zero convs. | power-of-2 strides | none | 5.4 M / 40.4 M (ensemb.) | 94.6% / 95.6% (ensemb.) | 77.0% |
a new state of the art at the time. Zoph & Le (2016) used reinforcement learning on a deeper fixed-length architecture. In their approach, a neural network (the "discoverer") constructs a convolutional neural network (the "discovered") one layer at a time. In addition to tuning layer parameters, they add and remove skip connections. This, together with some manual post-processing, gets them very close to the (current) state of the art. (Additionally, they surpassed the state of the art on a sequence-to-sequence problem.) Baker et al. (2016) use Q-learning to also discover a network one layer at a time, but in their approach, the number of layers is decided by the discoverer. This is a desirable feature, as it would allow a system to construct shallow or deep solutions, as may be the requirements of the dataset at hand. Different datasets would not require specially tuning the algorithm. Comparisons among these methods are difficult because they explore very different search spaces and have very different initial conditions (Table 2).
Tangentially, there has also been neuro-evolution work on LSTM structure (Bayer et al., 2009; Zaremba, 2015), but this is beyond the scope of this paper. Also related to this work is that of Saxena & Verbeek (2016), who embed convolutions with different parameters into a species of "super-network" with many parallel paths. Their algorithm then selects and ensembles paths in the super-network. Finally, canonical approaches to hyper-parameter search are grid search (used in Zagoruyko & Komodakis (2016), for example) and random search, the latter being the better of the two (Bergstra & Bengio, 2012).
Our approach builds on previous work, with some important differences. We explore large model-architecture search spaces starting with basic initial conditions to avoid priming the system with information about known good strategies for the specific dataset at hand. Our encoding is different from the neuro-evolution methods mentioned above: we use a simplified graph as our DNA, which is transformed to a full neural network graph for training and evaluation (Section 3). Some of the mutations acting on this DNA are reminiscent of NEAT. However, instead of single nodes, one mutation can insert whole layers, i.e. tens to hundreds of nodes at a time. We also allow for these layers to be removed, so that the evolutionary process can simplify an architecture in addition to complexifying it. Layer parameters are also mutable, but we do not prescribe a small set of possible values to choose from, to allow for a larger search space. We do not use fitness sharing. We report additional results using recombination, but for the most part, we used mutation only. On the other hand, we do use back-propagation to optimize the weights, which can be inherited across mutations. Together with a learning rate mutation, this allows the exploration of the space of learning rate schedules, yielding fully trained models at the end of the evolutionary process (Section 3). Tables 1 and 2 compare our approach with hand-designed architectures and with other neuro-discovery techniques, respectively.
# 3. Methods
# 3.1. Evolutionary Algorithm
To automatically search for high-performing neural network architectures, we evolve a population of models. Each model (or individual) is a trained architecture. The model's accuracy on a separate validation dataset is a measure of the individual's quality or fitness. During each evolutionary step, a computer (a worker) chooses two individuals at random from this population and compares their fitnesses. The worst of the pair is immediately removed from the population: it is killed. The best of the pair is selected to be a parent, that is, to undergo reproduction. By this we mean that the worker creates a copy of the parent and modifies this copy by applying a mutation, as described below. We will refer to this modified copy as the child. After the worker creates the child, it trains this child, evaluates it on the validation set, and puts it back into the population. The child then becomes alive, i.e. free to act as a parent. Our scheme, therefore, uses repeated pairwise competitions of random individuals, which makes it an example of tournament selection (Goldberg & Deb, 1991). Using pairwise comparisons instead of whole population operations prevents workers from idling when they finish early. Code and more detail about the methods described below can be found in Supplementary Section S1.

Using this strategy to search large spaces of complex image models requires considerable computation. To achieve scale, we developed a massively-parallel, lock-free infrastructure. Many workers operate asynchronously on different computers. They do not communicate directly with each other. Instead, they use a shared file-system, where the population is stored. The file-system contains directories that represent the individuals. Operations on these individuals, such as the killing of one, are represented as atomic renames on the directory². Occasionally, a worker may concurrently modify the individual another worker is operating on. In this case, the affected worker simply gives up and tries again. The population size is 1000 individuals, unless otherwise stated. The number of workers is always 1/4 of the population size. To allow for long run-times with a limited amount of space, dead individuals' directories are frequently garbage-collected.

²The use of the file-name string to contain key information about the individual was inspired by Breuel & Shafait (2010), and it speeds up disk access enormously. In our case, the file name contains the state of the individual (alive, dead, training, etc.).

# 3.2. Encoding and Mutations

Individual architectures are encoded as a graph that we refer to as the DNA. In this graph, the vertices represent rank-3 tensors or activations. As is standard for a convolutional network, two of the dimensions of the tensor represent the spatial coordinates of the image and the third is a number of channels. Activation functions are applied at the vertices and can be either (i) batch-normalization (Ioffe & Szegedy, 2015) with rectified linear units (ReLUs) or (ii) plain linear units. The graph's edges represent identity connections or convolutions and contain the mutable numerical parameters defining the convolution's properties. When multiple edges are incident on a vertex, their spatial scales or numbers of channels may not coincide. However, the vertex must have a single size and number of channels for its activations. The inconsistent inputs must be resolved. Resolution is done by choosing one of the incoming edges as the primary one. We pick this primary edge to be the one that is not a skip connection. The activations coming from the non-primary edges are reshaped through zeroth-order interpolation in the case of the size and through truncation/padding in the case of the number of channels, as in He et al. (2016). In addition to the graph, the learning-rate value is also stored in the DNA.

A child is similar but not identical to the parent because of the action of a mutation. In each reproduction event, the worker picks a mutation at random from a predetermined set. The set contains the following mutations:

• ALTER-LEARNING-RATE (sampling details below).
• IDENTITY (effectively means "keep training").
• RESET-WEIGHTS (sampled as in He et al. (2015), for example).
• INSERT-CONVOLUTION (inserts a convolution at a random location in the "convolutional backbone", as in Figure 1. The inserted convolution has 3×3 filters, strides of 1 or 2 at random, number of channels same as input. May apply batch-normalization and ReLU activation or none at random).
• REMOVE-CONVOLUTION.
• ALTER-STRIDE (only powers of 2 are allowed).
• ALTER-NUMBER-OF-CHANNELS (of random conv.).
• FILTER-SIZE (horizontal or vertical at random, on random convolution, odd values only).
• INSERT-ONE-TO-ONE (inserts a one-to-one/identity connection, analogous to insert-convolution mutation).
• ADD-SKIP (identity between random layers).
• REMOVE-SKIP (removes random skip).

These specific mutations were chosen for their similarity to the actions that a human designer may take when improving an architecture. This may clear the way for hybrid evolutionary–hand-design methods in the future. The probabilities for the mutations were not tuned in any way.

A mutation that acts on a numerical parameter chooses the new value at random around the existing value. All sampling is from uniform distributions. For example, a mutation acting on a convolution with 10 output channels will
result in a convolution having between 5 and 20 output channels (that is, half to twice the original value). All values within the range are possible. As a result, the models are not constrained to a number of filters that is known to work well. The same is true for all other parameters, yielding a "dense" search space. In the case of the strides, this applies to the log-base-2 of the value, to allow for activation shapes to match more easily³. In principle, there is also no upper limit to any of the parameters. All model depths are attainable, for example. Up to hardware constraints, the search space is unbounded. The dense and unbounded nature of the parameters result in the exploration of a truly large set of possible architectures.

³For integer DNA parameters, we actually store and mutate a floating-point value. This allows multiple small mutations to have a cumulative effect in spite of integer round-off.
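A minimal sketch of the half-to-twice rule described above (the helper name is hypothetical):

import random

def mutate_num_channels(value):
    """Half-to-twice rule for numerical parameters: the new value is
    drawn uniformly between half and twice the existing value, so a
    convolution with 10 output channels yields between 5 and 20.
    Integer parameters are stored as floats so small mutations can
    accumulate (footnote 3); strides apply the same rule to the
    log-base-2 of the value."""
    return random.uniform(0.5 * value, 2.0 * value)

print(round(mutate_num_channels(10.0)))  # e.g. 7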
# 3.3. Initial Conditions
Every evolution experiment begins with a population of simple individuals, all with a learning rate of 0.1. They are all very bad performers. Each initial individual constitutes just a single-layer model with no convolutions. This conscious choice of poor initial conditions forces evolution to make the discoveries by itself. The experimenter contributes mostly through the choice of mutations that demarcate a search space. Altogether, the use of poor initial conditions and a large search space limits the experimenter's impact. In other words, it prevents the experimenter from "rigging" the experiment to succeed.
# 3.4. Training and Validation

Training and validation is done on the CIFAR-10 dataset. This dataset consists of 50,000 training examples and 10,000 test examples, all of which are 32 x 32 color images labeled with 1 of 10 common object classes (Krizhevsky & Hinton, 2009). 5,000 of the training examples are held out in a validation set. The remaining 45,000 examples constitute our actual training set. The training set is augmented as in He et al. (2016). The CIFAR-100 dataset has the same number of dimensions, colors and examples as CIFAR-10, but uses 100 classes, making it much more challenging.

Training is done with TensorFlow (Abadi et al., 2016), using SGD with a momentum of 0.9 (Sutskever et al., 2013), a batch size of 50, and a weight decay of 0.0001. Each training runs for 25,600 steps, a value chosen to be brief enough so that each individual could be trained in a few seconds to a few hours, depending on model size. The loss function is the cross-entropy. Once training is complete, a single evaluation on the validation set provides the accuracy to use as the individual's fitness. Ensembling was done by majority voting during the testing evaluation. The models used in the ensemble were selected by validation accuracy.
# 3.5. Computation cost

To estimate computation costs, we identified the basic TensorFlow (TF) operations used by our model training and validation, like convolutions, generic matrix multiplications, etc. For each of these TF operations, we estimated the theoretical number of floating-point operations (FLOPs) required. This resulted in a map from TF operation to FLOPs, which is valid for all our experiments.

For each individual within an evolution experiment, we compute the total FLOPs incurred by the TF operations in its architecture over one batch of examples, both during its training (Ft FLOPs) and during its validation (Fv FLOPs). Then we assign to the individual the cost FtNt + FvNv, where Nt and Nv are the number of training and validation batches, respectively. The cost of the experiment is then the sum of the costs of all its individuals.

We intend our FLOPs measurement as a coarse estimate only. We do not take into account input/output, data preprocessing, TF graph building or memory-copying operations. Some of these unaccounted operations take place once per training run or once per step and some have a component that is constant in the model size (such as disk-access latency or input data cropping). We therefore expect the estimate to be more useful for large architectures (for example, those with many convolutions).
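In code, the per-individual and per-experiment cost described above reduces to (a sketch; the per-operation FLOPs map itself is not reproduced here):

def individual_cost(f_train, f_valid, n_train, n_valid):
    """Cost of one individual: F_t * N_t + F_v * N_v, as defined above."""
    return f_train * n_train + f_valid * n_valid

def experiment_cost(individuals):
    """Total experiment cost: the sum over all individuals that ever
    lived. `individuals` is assumed to be an iterable of
    (F_t, F_v, N_t, N_v) tuples."""
    return sum(individual_cost(ft, fv, nt, nv)
               for ft, fv, nt, nv in individuals)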
# 3.6. Weight Inheritance

We need architectures that are trained to completion within an evolution experiment. If this does not happen, we are forced to retrain the best model at the end, possibly having to explore its hyper-parameters. Such extra exploration tends to depend on the details of the model being retrained. On the other hand, 25,600 steps are not enough to fully train each individual. Training a large model to completion is prohibitively slow for evolution. To resolve this dilemma, we allow the children to inherit the parents' weights whenever possible. Namely, if a layer has matching shapes, the weights are preserved. Consequently, some mutations preserve all the weights (like the identity or learning-rate mutations), some preserve none (the weight-resetting mutation), and most preserve some but not all. An example of the latter is the filter-size mutation: only the filters of the convolution being mutated will be discarded.
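A minimal sketch of the shape-matching inheritance rule (the data structures are assumed; Supplementary Section S1 shows the actual TensorFlow variable-name mechanism):

def inherit_weights(parent_weights, child_shapes):
    """Copy parent weights into the child wherever IDs and shapes match.

    `parent_weights` maps layer IDs to arrays; `child_shapes` maps the
    child's layer IDs to expected shapes. Layers without a match (for
    example, the filters of a mutated convolution) are left for random
    initialization.
    """
    inherited, reinitialized = {}, []
    for layer_id, shape in child_shapes.items():
        weights = parent_weights.get(layer_id)
        if weights is not None and weights.shape == tuple(shape):
            inherited[layer_id] = weights
        else:
            reinitialized.append(layer_id)
    return inherited, reinitialized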
# 3.7. Reporting Methodology
To avoid over-fitting, neither the evolutionary algorithm nor the neural network training ever see the testing set. Each time we refer to "the best model", we mean the model with the highest validation accuracy. However, we always report the test accuracy. This applies not only to the choice of the best individual within an experiment, but also to the choice
of the best experiment. Moreover, we only include experiments that we managed to reproduce, unless explicitly noted. Any statistical analysis was fully decided upon before seeing the results of the experiment reported, to avoid tailoring our analysis to our experimental data (Simmons et al., 2011).

# 4. Experiments and Results

We want to answer the following questions:

• Can a simple one-shot evolutionary process start from trivial initial conditions and yield fully trained models that rival hand-designed architectures?

• What are the variability in outcomes, the parallelizability, and the computation cost of the method?

• Can an algorithm designed iterating on CIFAR-10 be applied, without any changes at all, to CIFAR-100 and still produce competitive models?

We used the algorithm in Section 3 to perform several experiments. Each experiment evolves a population in a few days, typified by the example in Figure 1. The figure also contains examples of the architectures discovered, which turn out to be surprisingly simple. Evolution attempts skip connections but frequently rejects them.

To get a sense of the variability in outcomes, we repeated the experiment 5 times. Across all 5 experiment runs, the best model by validation accuracy has a testing accuracy of 94.6%. Not all experiments reach the same accuracy, but they get close (µ = 94.1%, σ = 0.4). Fine differences in the experiment outcome may be somewhat distinguishable by validation accuracy (correlation coefficient = 0.894). The total amount of computation across all 5 experiments was 4×10^20 FLOPs (or 9×10^19 FLOPs on average per experiment). Each experiment was distributed over 250 parallel workers (Section 3.1). Figure 2 shows the progress of the experiments in detail.

As a control, we disabled the selection mechanism, thereby reproducing and killing random individuals. This is the form of random search that is most compatible with our infrastructure. The probability distributions for the parameters are implicitly determined by the mutations. This control only achieves an accuracy of 87.3% in the same amount of run time on the same hardware (Figure 2). The total amount of computation was 2×10^17 FLOPs. The low FLOP count is a consequence of random search generating many small, inadequate models that train quickly but consume roughly constant amounts of setup time (not included in the FLOP count). We attempted to minimize this overhead by avoiding unnecessary disk access operations, to no avail: too much overhead remains spent on a combination of neural network setup, data augmentation, and training step initialization.

We also ran a partial control where the weight-inheritance mechanism is disabled. This run also results in a lower accuracy (92.2%) in the same amount of time (Figure 2), using 9×10^19 FLOPs. This shows that weight inheritance is important in the process.

Finally, we applied our neuro-evolution algorithm, without any changes and with the same meta-parameters, to CIFAR-100. Our only experiment reached an accuracy of 77.0%, using 2×10^20 FLOPs. We did not attempt other datasets. Table 1 shows that both the CIFAR-10 and CIFAR-100 results are competitive with modern hand-designed networks.

# 5. Analysis

Meta-parameters. We observe that populations evolve until they plateau at some local optimum (Figure 2). The fitness (i.e. validation accuracy) value at this optimum varies between experiments (Figure 2, inset). Since not all experiments reach the highest possible value, some populations are getting "trapped" at inferior local optima. This entrapment is affected by two important meta-parameters (i.e. parameters that are not optimized by the algorithm). These are the population size and the number of training steps per individual. Below we discuss them and consider their relationship to local optima.

Effect of population size. Larger populations explore the space of models more thoroughly, and this helps reach better optima (Figure 3, left). Note, in particular, that a population of size 2 can get trapped at very low fitness values. Some intuition about this can be gained by considering the fate of a super-fit individual, i.e. an individual such that any one architectural mutation reduces its fitness (even though a sequence of many mutations may improve it). In the case of a population of size 2, if the super-fit individual wins once, it will win every time. After the first win, it will produce a child that is one mutation away. By definition of super-fit, therefore, this child is inferior⁴. Consequently, in the next round of tournament selection, the super-fit individual competes against its child and wins again. This cycle repeats forever and the population is trapped. Even if a sequence of two mutations would allow for an "escape" from the local optimum, such a sequence can never take place. This is only a rough argument to heuristically suggest why a population of size 2 is easily trapped. More generally, Figure 3 (left) empirically demonstrates a benefit from an increase in population size. Theoretical analyses of this dependence are quite complex and assume very specific models of population dynamics; often larger populations are better at handling local optima, at least beyond a size threshold (Weinreich & Chao (2005) and references
⁴Except after identity or learning rate mutations, but these produce a child with the same architecture as the parent.
[Figure 1: scatter of test accuracy (%) versus wall time (hours, 0.9 to 256.2) for all individuals, annotated with four example architecture diagrams; see caption below.]
Figure 1. Progress of an evolution experiment. Each dot represents an individual in the population. Blue dots (darker, top-right) are alive. The rest have been killed. The four diagrams show examples of discovered architectures. These correspond to the best individual (rightmost) and three of its ancestors. The best individual was selected by its validation accuracy. Evolution sometimes stacks convolutions without any nonlinearity in between ("C", white background), which are mathematically equivalent to a single linear operation. Unlike typical hand-designed architectures, some convolutions are followed by more than one nonlinear function ("C+BN+R+BN+R+...", orange background).
therein).

Effect of number of training steps. The other meta-parameter is the number T of training steps for each individual. Accuracy increases with T (Figure 3, right). Larger T means an individual needs to undergo fewer identity mutations to reach a given level of training.

Escaping local optima. While we might increase population size or number of steps to prevent a trapped population from forming, we can also free an already trapped population. For example, increasing the mutation rate or resetting all the weights of a population (Figure 4) work well but are quite costly (more details in Supplementary Section S3).

Recombination. None of the results presented so far used recombination. However, we explored three forms of recombination in additional experiments. Following Tuson & Ross (1998), we attempted to evolve the mutation probability distribution too. On top of this, we employed a recombination strategy by which a child could inherit structure from one parent and mutation probabilities from another. The goal was to allow individuals that progressed well due to good mutation choices to quickly propagate such choices to others. In a separate experiment, we attempted recombining the trained weights from two parents in the hope that each parent may have learned different concepts from the training data. In a third experiment, we recombined structures so that the child fused the architectures of both parents side-by-side, generating wide models fast. While none of these approaches improved our recombination-free results, further study seems warranted.
# 6. Conclusion
In this paper we have shown that (i) neuro-evolution is capable of constructing large, accurate networks for two challenging and popular image classification benchmarks; (ii) neuro-evolution can do this starting from trivial initial conditions while searching a very large space; (iii) the process, once started, needs no experimenter participation; and (iv) the process yields fully trained models. Completely training models required weight inheritance (Section 3.6). In contrast to reinforcement learning, evolution provides a natural framework for weight inheritance: mutations can be constructed to guarantee a large degree of similarity
[Figure 2: test accuracy (%) versus wall-clock time (hours) for evolution, evolution without weight inheritance, and random search; see caption below.]
Figure 2. Repeatability of results and controls. In this plot, the vertical axis at wall-time t is defined as the test accuracy of the individual with the highest validation accuracy that became alive at or before t. The inset magnifies a portion of the main graph. The curves show the progress of various experiments, as follows. The top line (solid, blue) shows the mean test accuracy across 5 large-scale evolution experiments. The shaded area around this top line has a width of ±2σ (clearer in inset). The next line down (dashed, orange, main graph and inset) represents a single experiment in which weight-inheritance was disabled, so every individual has to train from random weights. The lowest curve (dotted-dashed) is a random-search control. All experiments occupied the same amount and type of hardware. A small amount of noise in the generalization from the validation to the test set explains why the lines are not monotonically increasing. Note the narrow width of the ±2σ area (main graph and inset), which shows that the high accuracies obtained in evolution experiments are repeatable.
between the original and mutated models, as we did. Evolution also has fewer tunable meta-parameters with a fairly predictable effect on the variance of the results, which can be made small.
While we did not focus on reducing computation costs, we hope that future algorithmic and hardware improvement will allow more economical implementation. In that case, evolution would become an appealing approach to neuro-discovery for reasons beyond the scope of this paper. For example, it "hits the ground running", improving on arbitrary initial models as soon as the experiment begins. The mutations used can implement recent advances in the field and can be introduced without having to restart an experiment. Furthermore, recombination can merge improvements developed by different individuals, even if they come from other populations. Moreover, it may be possible to combine neuro-evolution with other automatic architecture discovery methods.
[Figure 3: test accuracy versus population size (left) and versus number of training steps per individual (right); see caption below.]
Figure 3. Dependence on meta-parameters. In both graphs, each circle represents the result of a full evolution experiment. Both vertical axes show the test accuracy for the individual with the highest validation accuracy at the end of the experiment. All populations evolved for the same total wall-clock time. There are 5 data points at each horizontal axis value. LEFT: effect of population size. To economize resources, in these experiments the number of individual training steps is only 2560. Note how the accuracy increases with population size. RIGHT: effect of number of training steps per individual. Note how the accuracy increases with more steps.
[Figure 4: accuracy (%) versus wall time (hours) for two experiments escaping local optima, one via an increased mutation rate; see caption below.]
Figure 4. Escaping local optima in two experiments. We used smaller populations and fewer training steps per individual (2560) to make it more likely for a population to get trapped and to reduce resource usage. Each dot represents an individual. The vertical axis is the accuracy. TOP: example of a population of size 100 escaping a local optimum by using a period of increased mutation rate in the middle (Section 5). BOTTOM: example of a population of size 50 escaping a local optimum by means of three consecutive weight resetting events (Section 5). Details in Supplementary Section S3.
# Acknowledgements
We wish to thank Vincent Vanhoucke, Megan Kacholia, Rajat Monga, and especially Jeff Dean for their support and valuable input; Geoffrey Hinton, Samy Bengio, Thomas Breuel, Mark DePristo, Vishy Tirumalashetty, Martin Abadi, Noam Shazeer, Yoram Singer, Dumitru Erhan, Pierre Sermanet, Xiaoqiang Zheng, Shan Carter and Vijay Vasudevan for helpful discussions; Thomas Breuel, Xin Pan and Andy Davis for coding contributions; and the larger Google Brain team for help with TensorFlow and training vision models.
# References
Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron C, and Bengio, Yoshua. Maxout networks. International Conference on Machine Learning, 28:1319–1327, 2013.

Gruau, Frederic. Genetic synthesis of modular neural networks. In Proceedings of the 5th International Conference on Genetic Algorithms, pp. 318–325. Morgan Kaufmann Publishers Inc., 1993.

Han, Song, Pool, Jeff, Tran, John, and Dally, William. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015.

Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.

Baker, Bowen, Gupta, Otkrist, Naik, Nikhil, and Raskar, Ramesh. Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167, 2016.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Bayer, Justin, Wierstra, Daan, Togelius, Julian, and Schmidhuber, Jürgen. Evolving memory cell structures for sequence learning. In International Conference on Artificial Neural Networks, pp. 755–764. Springer, 2009.

Huang, Gao, Liu, Zhuang, Weinberger, Kilian Q, and van der Maaten, Laurens. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.

Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012.

Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian Q. Deep networks with stochastic depth. In European Conference on Computer Vision, pp. 646–661. Springer, 2016b.

Breuel, Thomas and Shafait, Faisal. Automlp: Simple, effective, fully automated learning rate and size adjustment. In The Learning Workshop. Utah, 2010.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Fernando, Chrisantha, Banarse, Dylan, Reynolds, Malcolm, Besse, Frederic, Pfau, David, Jaderberg, Max, Lanctot, Marc, and Wierstra, Daan. Convolution by evolution: Differentiable pattern producing networks. In Proceedings of the 2016 on Genetic and Evolutionary Computation Conference, pp. 109–116. ACM, 2016.

Kim, Minyoung and Rigazio, Luca. Deep clustered convolutional kernels. arXiv preprint arXiv:1503.01824, 2015.

Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. 2009.

Goldberg, David E and Deb, Kalyanmoy. A comparative analysis of selection schemes used in genetic algorithms. Foundations of genetic algorithms, 1:69–93, 1991.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Goldberg, David E, Richardson, Jon, et al. Genetic algorithms with sharing for multimodal function optimization. In Genetic algorithms and their applications: Proceedings of the Second International Conference on Genetic Algorithms, pp. 41–49. Hillsdale, NJ: Lawrence Erlbaum, 1987.

LeCun, Yann, Cortes, Corinna, and Burges, Christopher JC. The mnist database of handwritten digits, 1998.

Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick W, Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. In AISTATS, volume 2, pp. 5, 2015.
Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Stanley, Kenneth O. Compositional pattern producing networks: A novel abstraction of development. Genetic programming and evolvable machines, 8(2):131–162, 2007.

Miller, Geoffrey F, Todd, Peter M, and Hegde, Shailesh U. Designing neural networks using genetic algorithms. In Proceedings of the third international conference on Genetic algorithms, pp. 379–384. Morgan Kaufmann Publishers Inc., 1989.

Stanley, Kenneth O and Miikkulainen, Risto. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99–127, 2002.

Morse, Gregory and Stanley, Kenneth O. Simple evolutionary optimization can rival stochastic gradient descent in neural networks. In Proceedings of the 2016 on Genetic and Evolutionary Computation Conference, pp. 477–484. ACM, 2016.

Pugh, Justin K and Stanley, Kenneth O. Evolving multimodal controllers with hyperneat. In Proceedings of the 15th annual conference on Genetic and evolutionary computation, pp. 735–742. ACM, 2013.

Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988.

Stanley, Kenneth O, D'Ambrosio, David B, and Gauci, Jason. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185–212, 2009.

Sutskever, Ilya, Martens, James, Dahl, George E, and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. ICML (3), 28:1139–1147, 2013.

Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.

Saxena, Shreyas and Verbeek, Jakob. Convolutional neural fabrics. In Advances In Neural Information Processing Systems, pp. 4053–4061, 2016.

Tuson, Andrew and Ross, Peter. Adapting operator settings in genetic algorithms. Evolutionary computation, 6(2):161–184, 1998.

Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

Simmons, Joseph P, Nelson, Leif D, and Simonsohn, Uri. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11):1359–1366, 2011.

Verbancsics, Phillip and Harguess, Josh. Generative neuroevolution for deep learning. arXiv preprint arXiv:1312.5355, 2013.

Weinreich, Daniel M and Chao, Lin. Rapid evolutionary escape by large populations from local fitness peaks is likely in nature. Evolution, 59(6):1175–1182, 2005.

Weyand, Tobias, Kostrikov, Ilya, and Philbin, James. Planet-photo geolocation with convolutional neural networks. In European Conference on Computer Vision, pp. 37–55. Springer, 2016.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pp. 2951–2959, 2012.

Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V., Norouzi, Mohammad, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Zagoruyko, Sergey and Komodakis, Nikos. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.

Springenberg, Jost Tobias, Dosovitskiy, Alexey, Brox, Thomas, and Riedmiller, Martin. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. arXiv preprint arXiv:1505.00387, 2015.

Zaremba, Wojciech. An empirical exploration of recurrent network architectures. 2015.

Zoph, Barret and Le, Quoc V. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
# Large-Scale Evolution of Image Classiï¬ers
# Supplementary Material
# S1. Methods Details
This section contains additional implementation details, roughly following the order in Section 3. Short code snippets illustrate the ideas. The code is not intended to run on its own and it has been highly edited for clarity.
In our implementation, each worker runs an outer loop that is responsible for selecting a pair of random individuals from the population. The individual with the highest fitness usually becomes a parent and the one with the lowest fitness is usually killed (Section 3.1). Occasionally, either of these two actions is not carried out in order to keep the population size close to a set-point:
def evolve_population(self): # Iterate indefinitely. while True: # Select two random individuals from the population. valid_individuals = [] for individual in self.load_individuals(): # Only loads the IDs and states. if individual.state in [TRAINING, ALIVE]: valid_individuals.append(individual) individual_pair = random.sample(valid_individuals, 2) for individual in individual_pair: # Sync changes from other workers from file-system. Loads everything else. individual.update_if_necessary() # Ensure the individual is fully trained. if individual.state == TRAINING: self._train(individual) # Select by fitness (accuracy). individual_pair.sort(key=lambda i: i.fitness, reverse=True) better_individual = individual_pair[0] worse_individual = individual_pair[1] # If the population is not too small, kill the worst of the pair. if self._population_size() >= self._population_size_setpoint: self._kill_individual(worse_individual) # If the population is not too large, reproduce the best of the pair. if self._population_size() < self._population_size_setpoint: self._reproduce_and_train_individual(better_individual) Much of the code is wrapped in try-except blocks to handle various kinds of errors. These have been removed from the code snippets for clarity. For example, the method above would be wrapped like this:
def evolve_population(self):
    while True:
        try:
            # Select two random individuals from the population.
            ...
        except exceptions.PopulationTooSmallException:
            self._create_new_individual()
            continue
except exceptions.ConcurrencyException: # Another worker did something that interfered with the action of this worker. # Abandon the current task and keep going. continue
The encoding for an individual is represented by a serializable DNA class instance containing all information except for the trained weights (Section 3.2). For all results in this paper, this encoding is a directed, acyclic graph where edges represent convolutions and vertices represent nonlinearities. This is a sketch of the DNA class:
class DNA(object):
    def __init__(self, dna_proto):
        """Initializes the `DNA` instance from a protocol buffer.

        The `dna_proto` is a protocol buffer used to restore the DNA state
        from disk. Together with the corresponding `to_proto` method, they
        allow for a serialization-deserialization mechanism.
        """
        # Allows evolving the learning rate, i.e. exploring the space of
        # learning rate schedules.
        self.learning_rate = dna_proto.learning_rate

        self._vertices = {}  # String vertex ID to `Vertex` instance.
        for vertex_id in dna_proto.vertices:
            self._vertices[vertex_id] = Vertex(
                vertex_proto=dna_proto.vertices[vertex_id])

        self._edges = {}  # String edge ID to `Edge` instance.
        for edge_id in dna_proto.edges:
            self._edges[edge_id] = Edge(edge_proto=dna_proto.edges[edge_id])
...
    def to_proto(self):
        """Returns this instance in protocol buffer form."""
        dna_proto = dna_pb2.DnaProto(learning_rate=self.learning_rate)

        for vertex_id, vertex in self._vertices.iteritems():
            dna_proto.vertices[vertex_id].CopyFrom(vertex.to_proto())

        for edge_id, edge in self._edges.iteritems():
            dna_proto.edges[edge_id].CopyFrom(edge.to_proto())

        ...

        return dna_proto
    def add_edge(self, dna, from_vertex_id, to_vertex_id, edge_type, edge_id):
        """Adds an edge to the DNA graph, ensuring internal consistency."""
        # `EdgeProto` defines defaults for other attributes.
        edge = Edge(EdgeProto(
            from_vertex=from_vertex_id, to_vertex=to_vertex_id, type=edge_type))
        self._edges[edge_id] = edge
        self._vertices[from_vertex_id].edges_out.add(edge_id)
        self._vertices[to_vertex_id].edges_in.add(edge_id)
        return edge
    # Other methods like `add_edge` to manipulate the graph structure.
    ...
The DNA holds Vertex and Edge instances. The Vertex class looks like this:
class Vertex(object):
def __init__(self, vertex_proto):
# Relationship to the rest of the graph.
self.edges_in = set(vertex_proto.edges_in) self.edges_out = set(vertex_proto.edges_out) # Incoming edge IDs. # Outgoing edge IDs.
        # The type of activations.
        if vertex_proto.HasField('linear'):
            self.type = LINEAR  # Linear activations.
        elif vertex_proto.HasField('bn_relu'):
            self.type = BN_RELU  # ReLU activations with batch-normalization.
        else:
            raise NotImplementedError()

        # Some parts of the graph can be prevented from being acted upon by
        # mutations. The following boolean flags control this.
        self.inputs_mutable = vertex_proto.inputs_mutable
        self.outputs_mutable = vertex_proto.outputs_mutable
        self.properties_mutable = vertex_proto.properties_mutable

        # Each vertex represents a 2^s x 2^s x d block of nodes. s and d are
        # positive integers computed dynamically from the in-edges. s stands
        # for "scale" so that 2^s x 2^s is the spatial size of the
        # activations. d stands for "depth", the number of channels.

    def to_proto(self):
        ...

The Edge class looks like this:

class Edge(object):

    def __init__(self, edge_proto):
        # Relationship to the rest of the graph.
        self.from_vertex = edge_proto.from_vertex  # Source vertex ID.
        self.to_vertex = edge_proto.to_vertex  # Destination vertex ID.

        if edge_proto.HasField('conv'):
            # In this case, the edge represents a convolution.
            self.type = CONV
            # Controls the depth (i.e. number of channels) in the output,
            # relative to the input. For example if there is only one input
            # edge with a depth of 16 channels and `self._depth_factor` is 2,
            # then this convolution will result in an output depth of 32
            # channels. Multiple-inputs with conflicting depth must undergo
            # depth resolution first.
            self.depth_factor = edge_proto.conv.depth_factor
# Control the shape of the convolution filters (i.e. transfer function). # This parameterization ensures that the filter width and height are odd # numbers: filter_width = 2 * filter_half_width + 1. self.filter_half_width = edge_proto.conv.filter_half_width self.filter_half_height = edge_proto.conv.filter_half_height
            # Controls the strides of the convolution. It will be
            # 2^stride_scale. Note that conflicting input scales must undergo
            # scale resolution. This controls the spatial scale of the output
            # activations relative to the spatial scale of the input
            # activations.
            self.stride_scale = edge_proto.conv.stride_scale
        elif edge_proto.HasField('identity'):
            self.type = IDENTITY
        else:
            raise NotImplementedError()
# In case depth or scale resolution is necessary due to conflicts in inputs, # These integer parameters determine which of the inputs takes precedence in # deciding the resolved depth or scale. self.depth_precedence = edge_proto.depth_precedence
self.scale_precedence = edge_proto.scale_precedence
    def to_proto(self):
...
Mutations act on DNA instances. The set of mutations restricts the space explored somewhat (Section 3.2). The following are some example mutations. The AlterLearningRateMutation simply randomly modiï¬es the attribute in the DNA:
class AlterLearningRateMutation(Mutation):
    """Mutation that modifies the learning rate."""

    def mutate(self, dna):
        mutated_dna = copy.deepcopy(dna)

        # Mutate the learning rate by a random factor between 0.5 and 2.0,
        # uniformly distributed in log scale.
        factor = 2**random.uniform(-1.0, 1.0)
        mutated_dna.learning_rate = dna.learning_rate * factor

        return mutated_dna
Many mutations modify the structure. Mutations to insert and excise vertex-edge pairs build up a main convolutional column, while mutations to add and remove edges can handle the skip connections. For example, the AddEdgeMutation can add a skip connection between random vertices.
class AddEdgeMutation(Mutation):
  """Adds a single edge to the graph."""

  def mutate(self, dna):
    # Try the candidates in random order until one has the right connectivity.
    for from_vertex_id, to_vertex_id in self._vertex_pair_candidates(dna):
      mutated_dna = copy.deepcopy(dna)
      if (self._mutate_structure(mutated_dna, from_vertex_id, to_vertex_id)):
        return mutated_dna
    raise exceptions.MutationException()  # Try another mutation.

  def _vertex_pair_candidates(self, dna):
    """Yields connectable vertex pairs."""
    from_vertex_ids = _find_allowed_vertices(dna, self._to_regex, ...)
    if not from_vertex_ids:
      raise exceptions.MutationException()  # Try another mutation.
    random.shuffle(from_vertex_ids)
    to_vertex_ids = _find_allowed_vertices(dna, self._from_regex, ...)
    if not to_vertex_ids:
      raise exceptions.MutationException()  # Try another mutation.
    random.shuffle(to_vertex_ids)
    for to_vertex_id in to_vertex_ids:
      # Avoid back-connections.
      disallowed_from_vertex_ids, _ = topology.propagated_set(to_vertex_id)
      for from_vertex_id in from_vertex_ids:
        if from_vertex_id in disallowed_from_vertex_ids:
          continue
        # This pair does not generate a cycle, so we yield it.
        yield from_vertex_id, to_vertex_id

  def _mutate_structure(self, dna, from_vertex_id, to_vertex_id):
    """Adds the edge to the DNA instance."""
    edge_id = _random_id()
    edge_type = random.choice(self._edge_types)
    if dna.has_edge(from_vertex_id, to_vertex_id):
      return False
    else:
      new_edge = dna.add_edge(from_vertex_id, to_vertex_id, edge_type, edge_id)
      ...
    return True
For clarity, we omitted the details of a vertex ID targeting mechanism based on regular expressions, which is used to constrain where the additional edges are placed. This mechanism ensured the skip connections only joined points in the "main convolutional backbone" of the convnet. The precedence range is used to give the main backbone precedence over the skip connections when resolving scale and depth conflicts in the presence of multiple incoming edges to a vertex. Also omitted are details about the attributes of the edge to add.
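As a rough illustration only (the helper below is hypothetical; its real signature and the DNA's internal layout are not given in the text), such a targeting mechanism could filter candidate vertices by matching their IDs against a regular expression:

import re

def _find_allowed_vertices(dna, vertex_id_regex):
  """Returns the IDs of vertices whose IDs match the given regular expression.

  Hypothetical sketch: if vertex IDs encode their role (e.g. 'backbone_3' or
  'skip_7'), a regex can restrict a mutation to the intended part of the
  graph, such as the main convolutional backbone.
  """
  pattern = re.compile(vertex_id_regex)
  return [vertex_id for vertex_id in dna.vertices  # Assumes an iterable of IDs.
          if pattern.match(vertex_id)]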
To evaluate an individual's fitness, its DNA is unfolded into a TensorFlow model by the Model class. This describes how each Vertex and Edge should be interpreted. For example:
class Model(object):
  ...

  def _compute_vertex_nonlinearity(self, tensor, vertex):
    """Applies the necessary vertex operations depending on the vertex type."""
    if vertex.type == LINEAR:
      pass
    elif vertex.type == BN_RELU:
      tensor = slim.batch_norm(
          inputs=tensor, decay=0.9, center=True, scale=True,
          epsilon=self._batch_norm_epsilon, activation_fn=None,
          updates_collections=None, is_training=self.is_training,
          scope='batch_norm')
      tensor = tf.maximum(tensor, vertex.leakiness * tensor, name='relu')
    else:
      raise NotImplementedError()
    return tensor

  def _compute_edge_connection(self, tensor, edge, init_scale):
    """Applies the necessary edge connection ops depending on the edge type."""
    scale, depth = self._get_scale_and_depth(tensor)
    if edge.type == CONV:
      scale_out = scale
      depth_out = edge.depth_out(depth)
      stride = 2**edge.stride_scale
      # 'init_scale' is used to normalize the initial weights in the case of
      # multiple incoming edges.
      weights_initializer = slim.variance_scaling_initializer(
          factor=2.0 * init_scale**2, uniform=False)
      weights_regularizer = slim.l2_regularizer(
          weight=self._dna.weight_decay_rate)
      tensor = slim.conv2d(
          inputs=tensor, num_outputs=depth_out,
          kernel_size=[edge.filter_width(), edge.filter_height()],
          stride=stride, weights_initializer=weights_initializer,
          weights_regularizer=weights_regularizer, biases_initializer=None,
          activation_fn=None, scope='conv')
    elif edge.type == IDENTITY:
      pass
    else:
      raise NotImplementedError()
    return tensor
The training and evaluation (Section 3.4) is done in a fairly standard way, similar to that in the tensorflow.org tutorials for image models. The individual's fitness is the accuracy on a held-out validation dataset, as described in the main text.
Parents are able to pass some of their learned weights to their children (Section 3.6). When a child is constructed from a parent, it inherits IDs for the different sets of trainable weights (convolution filters, batch norm shifts, etc.). These IDs are embedded in the TensorFlow variable names. When the child's weights are initialized, those that have a matching ID in the parent are inherited, provided they have the same shape:
graph = tf.Graph()
with graph.as_default():
  # Build the neural network using the 'Model' class and the 'DNA' instance.
  ...

tf.Session.reset(self._master)
with tf.Session(self._master, graph=graph) as sess:
  # Initialize all variables.
  ...

  # Make sure we can inherit batch-norm variables properly.
  # The TF-slim batch-norm variables must be handled separately here because
  # some of them are not trainable (the moving averages).
  batch_norm_extras = [x for x in tf.all_variables() if (
      x.name.find('moving_var') != -1 or
      x.name.find('moving_mean') != -1)]

  # These are the variables that we will attempt to inherit from the parent.
  vars_to_restore = tf.trainable_variables() + batch_norm_extras

  # Copy as many of the weights as possible.
  if mutated_weights:
    assignments = []
    for var in vars_to_restore:
      stripped_name = var.name.split(':')[0]
      if stripped_name in mutated_weights:
        shape_mutated = mutated_weights[stripped_name].shape
        shape_needed = var.get_shape()
        if shape_mutated == shape_needed:
          assignments.append(var.assign(mutated_weights[stripped_name]))
    sess.run(assignments)
# S2. FLOPs estimation
This section describes how we estimate the number of floating point operations (FLOPs) required for an entire evolution experiment. To obtain the total FLOPs, we sum the FLOPs for each individual ever constructed. An individual's FLOPs are the sum of its training and validation FLOPs. Namely, the individual FLOPs are given by $F_t N_t + F_v N_v$, where $F_t$ is the FLOPs in one training step, $N_t$ is the number of training steps, $F_v$ is the FLOPs required to evaluate one validation batch of examples and $N_v$ is the number of validation batches.
The number of training steps and the number of validation batches are known in advance and are constant throughout the experiment. $F_t$ was obtained analytically as the sum of the FLOPs required to compute each operation executed during training (that is, each node in the TensorFlow graph). $F_v$ was found analogously.
Below is the code snippet that computes FLOPs for the training of one individual, for example.
import tensorflow as tf
tfprof_logger = tf.contrib.tfprof.python.tools.tfprof.tfprof_logger

def compute_flops():
  """Compute flops for one iteration of training."""
  graph = tf.Graph()
  with graph.as_default():
    # Build model
    ...

  # Run one iteration of training and collect run metadata.
  # This metadata will be used to determine the nodes which were
  # actually executed as well as their argument shapes.
  run_metadata = tf.RunMetadata()
  with tf.Session(graph=graph) as sess:
    feed_dict = {...}
    _ = sess.run(
        [train_op], feed_dict=feed_dict, run_metadata=run_metadata,
        options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE))

  # Compute analytical FLOPs for all nodes in the graph.
  logged_ops = tfprof_logger._get_logged_ops(graph, run_meta=run_metadata)

  # Determine which nodes were executed during one training step
  # by looking at elapsed execution time of each node.
  elapsed_us_for_ops = {}
  for dev_stat in run_metadata.step_stats.dev_stats:
    for node_stat in dev_stat.node_stats:
      name = node_stat.node_name
      elapsed_us = node_stat.op_end_rel_micros - node_stat.op_start_rel_micros
      elapsed_us_for_ops[name] = elapsed_us

  # Compute FLOPs of executed nodes.
  total_flops = 0
  for op in graph.get_operations():
    name = op.name
    if elapsed_us_for_ops.get(name, 0) > 0 and name in logged_ops:
      total_flops += logged_ops[name].float_ops

  return total_flops
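Given $F_t$ and $F_v$ for each individual, the experiment total described above reduces to a simple sum. The sketch below is only illustrative; the attribute names on `individual` are assumptions, not the actual implementation:

def experiment_flops(individuals, num_train_steps, num_valid_batches):
  """Total FLOPs for an evolution experiment: sum of F_t*N_t + F_v*N_v.

  Assumed attributes: 'train_flops' and 'valid_flops' stand for F_t and F_v,
  each obtained analytically as in compute_flops() above.
  """
  total_flops = 0
  for individual in individuals:
    total_flops += (individual.train_flops * num_train_steps +
                    individual.valid_flops * num_valid_batches)
  return total_flops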
Note that we also need to declare how to compute FLOPs for each operation type present (that is, for each node type in the TensorFlow graph). We did this for the following operation types (and their gradients, where applicable):
• unary math operations: square, square root, log, negation, element-wise inverse, softmax, L2 norm;

• binary element-wise operations: addition, subtraction, multiplication, division, minimum, maximum, power, squared difference, comparison operations;

• reduction operations: mean, sum, argmax, argmin;

• convolution, average pooling, max pooling;

• matrix multiplication.
For example, for the element-wise addition operation type:
from tensorflow.python.framework import graph_util from tensorflow.python.framework import ops
@ops.RegisterStatistics("Add", "flops") def _add_flops(graph, node): """Compute flops for the Add operation.""" out_shape = graph_util.tensor_shape_from_node_def_name(graph, node.name) out_shape.assert_is_fully_defined() return ops.OpStats("flops", out_shape.num_elements())
# S3. Escaping Local Optima Details
# S3.1. Local optima and mutation rate
Entrapment at a local optimum may mean a general lack of exploration in our search algorithm. To encourage more exploration, we increased the mutation rate (Section 5). In more detail, we carried out experiments in which we first waited until the populations converged. Some reached higher fitnesses and others got trapped at poor local optima. At this point, we modified the algorithm slightly: instead of performing 1 mutation at each reproduction event, we performed 5 mutations. We evolved with this increased mutation rate for a while and finally we switched back to the original single-mutation version. During the 5-mutation stage, some populations escape the local optimum, as in Figure 4 (top), and none
get worse. Across populations, however, the escape was not frequent enough (8 out of 10) and took too long for us to propose this as an efficient technique to escape optima. An interesting direction for future work would be to study more elegant methods to manage the exploration vs. exploitation trade-off in large-scale neuro-evolution.
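A minimal sketch of the temporarily increased mutation rate, assuming the Mutation classes shown in Section S1 (the function name and signature here are hypothetical):

import random

def reproduce(parent_dna, mutations, num_mutations=1):
  """Produces a child DNA by applying several random mutations in sequence.

  During the escape phase described above, num_mutations is set to 5;
  otherwise it is 1. Each Mutation.mutate() returns a mutated copy, so the
  parent is never modified in place.
  """
  child_dna = parent_dna
  for _ in range(num_mutations):
    mutation = random.choice(mutations)
    child_dna = mutation.mutate(child_dna)
  return child_dna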
# S3.2. Local optima and weight resetting
The identity mutation offers a mechanism for populations to get trapped in local optima. Some individuals may get trained more than their peers just because they happen to have undergone more identity mutations. It may, therefore, occur that a poor architecture may become more accurate than potentially better architectures that still need more training. In the extreme case, the well-trained poor architecture may become a super-fit individual and take over the population. Suspecting this scenario, we performed experiments in which we simultaneously reset all the weights in a population that had plateaued (Section 5). The simultaneous reset should put all the individuals on the same footing, so individuals that had accidentally trained more no longer have the unfair advantage. Indeed, the results matched our expectation. The populations suffer a temporary degradation in fitness immediately after the reset, as the individuals need to retrain. Later, however, the populations end up reaching higher optima (for example, Figure 4, bottom). Across 10 experiments, we find that three successive resets tend to cause improvement (p < 0.001). We mention this effect merely as evidence of this particular drawback of weight inheritance. In our main results, we circumvented the problem by using longer training times and larger populations. Future work may explore more efficient solutions.
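A hedged sketch of the simultaneous reset (the attribute names are assumptions; the real population bookkeeping is not shown in the text):

def reset_population_weights(population):
  """Simultaneously discards all learned weights in a plateaued population.

  Clearing the stored weights forces every individual to retrain from a
  fresh random initialization, removing the advantage of individuals that
  happened to accumulate more training through identity mutations.
  """
  for individual in population:
    individual.weights = None   # Re-initialized randomly at next training.
    individual.fitness = None   # Stale accuracy; must be recomputed.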
"id": "1502.03167"
} |
1703.00441 | Learning to Optimize Neural Nets | Learning to Optimize is a recently proposed framework for learning
optimization algorithms using reinforcement learning. In this paper, we explore
learning an optimization algorithm for training shallow neural nets. Such
high-dimensional stochastic optimization problems present interesting
challenges for existing reinforcement learning algorithms. We develop an
extension that is suited to learning optimization algorithms in this setting
and demonstrate that the learned optimization algorithm consistently
outperforms other known optimization algorithms even on unseen tasks and is
robust to changes in stochasticity of gradients and the neural net
architecture. More specifically, we show that an optimization algorithm trained
with the proposed method on the problem of training a neural net on MNIST
generalizes to the problems of training neural nets on the Toronto Faces
Dataset, CIFAR-10 and CIFAR-100. | http://arxiv.org/pdf/1703.00441 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 10 pages, 15 figures | null | cs.LG | 20170301 | 20171130 |
# Learning to Optimize Neural Nets
# Ke Li 1 Jitendra Malik 1
# Abstract
Learning to Optimize (Li & Malik, 2016) is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.
# 1. Introduction
optimization algorithm. Given this state of affairs, perhaps it is time for us to start practicing what we preach and learn how to learn.
Recently, Li & Malik (2016) and Andrychowicz et al. (2016) introduced two different frameworks for learning optimization algorithms. Whereas Andrychowicz et al. (2016) focuses on learning an optimization algorithm for training models on a particular task, Li & Malik (2016) sets a more ambitious objective of learning an optimiza- tion algorithm for training models that is task-independent. We study the latter paradigm in this paper and develop a method for learning an optimization algorithm for high- like the dimensional stochastic optimization problems, problem of training shallow neural nets.
Under the âLearning to Optimizeâ framework proposed by Li & Malik (2016), the problem of learning an optimization algorithm is formulated as a reinforcement learning prob- lem. We consider the general structure of an unconstrained continuous optimization algorithm, as shown in Algorithm 1. In each iteration, the algorithm takes a step âx and uses it to update the current iterate x(i). In hand-engineered op- timization algorithms, âx is computed using some ï¬xed formula Ï that depends on the objective function, the cur- rent iterate and past iterates. Often, it is simply a function of the current and past gradients.
Machine learning is centred on the philosophy that learn- ing patterns automatically from data is generally better than meticulously crafting rules by hand. This data-driven ap- proach has delivered: today, machine learning techniques can be found in a wide range of application areas, both in AI and beyond. Yet, there is one domain that has conspicu- ously been left untouched by machine learning: the design of tools that power machine learning itself.
One of the most widely used tools in machine learning is optimization algorithms. We have grown accustomed to seeing an optimization algorithm as a black box that takes in a model that we design and the data that we collect and outputs the optimal model parameters. The optimization al- gorithm itself largely stays static: its design is reserved for human experts, who must toil through many rounds of the- oretical analysis and empirical validation to devise a better
1University of California, Berkeley, CA 94720, United States. Correspondence to: Ke Li <ke.li@eecs.berkeley.edu>.
Algorithm 1 General structure of optimization algorithms
Require: Objective function f
x^(0) ← random point in the domain of f
for i = 1, 2, . . . do
  ∆x ← π(f, {x^(0), . . . , x^(i−1)})
  if stopping condition is met then
    return x^(i−1)
  end if
  x^(i) ← x^(i−1) + ∆x
end for
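A minimal Python rendering of Algorithm 1 may help make the abstraction concrete; here `pi` is any callable implementing the update formula, and the stopping condition is left to the caller (this sketch is not from the paper):

def optimize(f, pi, x0, max_iters=1000, stop=None):
  """Generic optimizer skeleton following Algorithm 1.

  `pi` maps the objective and the history of iterates to a step delta_x;
  different choices of `pi` yield different optimization algorithms.
  """
  history = [x0]
  for _ in range(max_iters):
    delta_x = pi(f, history)
    if stop is not None and stop(history):
      return history[-1]
    history.append(history[-1] + delta_x)
  return history[-1]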
Different choices of $\pi$ yield different optimization algorithms and so each optimization algorithm is essentially characterized by its update formula $\pi$. Hence, by learning $\pi$, we can learn an optimization algorithm. Li & Malik (2016) observed that an optimization algorithm can be viewed as a Markov decision process (MDP), where the state includes the current iterate, the action is the step
vector $\Delta x$ and the policy is the update formula $\pi$. Hence, the problem of learning $\pi$ simply reduces to a policy search problem.
In this paper, we build on the method proposed in (Li & Malik, 2016) and develop an extension that is suited to learning optimization algorithms for high-dimensional stochastic problems. We use it to learn an optimization algorithm for training shallow neural nets and show that it outperforms popular hand-engineered optimization algo- rithms like ADAM (Kingma & Ba, 2014), AdaGrad (Duchi et al., 2011) and RMSprop (Tieleman & Hinton, 2012) and an optimization algorithm learned using the supervised learning method proposed in (Andrychowicz et al., 2016). Furthermore, we demonstrate that our optimization algo- rithm learned from the experience of training on MNIST generalizes to training on other datasets that have very dis- similar statistics, like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.
# 2. Related Work
# 2.2. Learning Which Model to Learn
Methods in this category (Brazdil et al., 2008) aim to learn which base-level learner achieves the best performance on a task. The meta-knowledge captures correlations between different tasks and the performance of different base-level learners on those tasks. One challenge under this setting is to decide on a parameterization of the space of base-level learners that is both rich enough to be capable of repre- senting disparate base-level learners and compact enough to permit tractable search over this space. Brazdil et al. (2003) proposes a nonparametric representation and stores examples of different base-level learners in a database, whereas Schmidhuber (2004) proposes representing base- level learners as general-purpose programs. The former has limited representation power, while the latter makes search and learning in the space of base-level learners intractable. Hochreiter et al. (2001) views the (online) training proce- dure of any base-learner as a black box function that maps a sequence of training examples to a sequence of predictions and models it as a recurrent neural net. Under this formu- lation, meta-training reduces to training the recurrent net, and the base-level learner is encoded in the memory state of the recurrent net.
The line of work on learning optimization algorithms is fairly recent. Li & Malik (2016) and Andrychowicz et al. (2016) were the ï¬rst to propose learning general opti- mization algorithms. Li & Malik (2016) explored learn- ing task-independent optimization algorithms and used re- inforcement learning to learn the optimization algorithm, while Andrychowicz et al. (2016) investigated learning task-dependent optimization algorithms and used super- vised learning.
In the special case where objective functions that the opti- mization algorithm is trained on are loss functions for train- ing other models, these methods can be used for âlearning to learnâ or âmeta-learningâ. While these terms have ap- peared from time to time in the literature (Baxter et al., 1995; Vilalta & Drissi, 2002; Brazdil et al., 2008; Thrun & Pratt, 2012), they have been used by different authors to refer to disparate methods with different purposes. These methods all share the objective of learning some form of meta-knowledge about learning, but differ in the type of meta-knowledge they aim to learn. We can divide the vari- ous methods into the following three categories.
Hyperparameter optimization can be seen as another ex- ample of methods in this category. The space of base-level learners to search over is parameterized by a predeï¬ned set of hyperparameters. Unlike the methods above, multiple trials with different hyperparameter settings on the same task are permitted, and so generalization across tasks is not required. The discovered hyperparameters are generally speciï¬c to the task at hand and hyperparameter optimiza- tion must be rerun for new tasks. Various kinds of methods have been proposed, such those based on Bayesian opti- mization (Hutter et al., 2011; Bergstra et al., 2011; Snoek et al., 2012; Swersky et al., 2013; Feurer et al., 2015), random search (Bergstra & Bengio, 2012) and gradient- based optimization (Bengio, 2000; Domke, 2012; Maclau- rin et al., 2015).
# 2.3. Learning How to Learn
# 2.1. Learning What to Learn
Methods in this category (Thrun & Pratt, 2012) aim to learn what parameter values of the base-level learner are useful across a family of related tasks. The meta-knowledge cap- tures commonalities shared by tasks in the family, which enables learning on a new task from the family to be done more quickly. Most early methods fall into this category; this line of work has blossomed into an area that has later become known as transfer learning and multi-task learning.
Methods in this category aim to learn a good algorithm for training a base-level learner. Unlike methods in the pre- vious categories, the goal is not to learn about the out- come of learning, but rather the process of learning. The meta-knowledge captures commonalities in the behaviours of learning algorithms that achieve good performance. The base-level learner and the task are given by the user, so the learned algorithm must generalize across base-level learn- ers and tasks. Since learning in most cases is equivalent to optimizing some objective function, learning a learning algorithm often reduces to learning an optimization algo- rithm. This problem was explored in (Li & Malik, 2016)
and (Andrychowicz et al., 2016). Closely related is (Ben- gio et al., 1991), which learns a Hebb-like synaptic learn- ing rule that does not depend on the objective function, which does not allow for generalization to different objec- tive functions.
Various work has explored learning how to adjust the hyperparameters of hand-engineered optimization algo- rithms, like the step size (Hansen, 2016; Daniel et al., 2016; Fu et al., 2016) or the damping factor in the Levenberg- Marquardt algorithm (Ruvolo et al., 2009). Related to this line of work is stochastic meta-descent (Bray et al., 2004), which derives a rule for adjusting the step size analytically. A different line of work (Gregor & LeCun, 2010; Sprech- mann et al., 2013) parameterizes intermediate operands of special-purpose solvers for a class of optimization prob- lems that arise in sparse coding and learns them using su- pervised learning.
# 3. Learning to Optimize

# 3.1. Setting
In the "Learning to Optimize" framework, we are given a set of training objective functions $f_1, \ldots, f_n$ drawn from some distribution $\mathcal{F}$. An optimization algorithm $A$ takes an objective function $f$ and an initial iterate $x^{(0)}$ as input and produces a sequence of iterates $x^{(1)}, \ldots, x^{(T)}$, where $x^{(T)}$ is the solution found by the optimizer. We are also given a distribution $\mathcal{D}$ that generates the initial iterate $x^{(0)}$ and a meta-loss $\mathcal{L}$, which takes an objective function $f$ and a sequence of iterates $x^{(1)}, \ldots, x^{(T)}$ produced by an optimization algorithm as input and outputs a scalar that measures the quality of the iterates. The goal is to learn an optimization algorithm $A^*$ such that $\mathbb{E}_{f \sim \mathcal{F},\, x^{(0)} \sim \mathcal{D}}\left[\mathcal{L}(f, A^*(f, x^{(0)}))\right]$ is minimized. The meta-loss is chosen to penalize optimization algorithms that exhibit behaviours we find undesirable, like slow convergence or excessive oscillations. Assuming we would like to learn an algorithm that minimizes the objective function it is given, a good choice of meta-loss would then simply be $\sum_{t=1}^{T} f(x^{(t)})$, which can be interpreted as the area under the curve of objective values over time.

The objective functions $f_1, \ldots, f_n$ may correspond to loss functions for training base-level learners, in which case the algorithm that learns the optimization algorithm can be viewed as a meta-learner. In this setting, each objective function is the loss function for training a particular base-learner on a particular task, and so the set of training objective functions can be loss functions for training a base-learner or a family of base-learners on different tasks. At test time, the learned optimization algorithm is evaluated on unseen objective functions, which correspond to loss functions for training base-learners on new tasks, which may be completely unrelated to tasks used for training the optimization algorithm. Therefore, the learned optimization algorithm must not learn anything about the tasks used for training. Instead, the goal is to learn an optimization algorithm that can exploit the geometric structure of the error surface induced by the base-learners. For example, if the base-level model is a neural net with ReLU activation units, the optimization algorithm should hopefully learn to leverage the piecewise linearity of the model. Hence, there is a clear division of responsibilities between the meta-learner and base-learners. The knowledge learned at the meta-level should be pertinent for all tasks, whereas the knowledge learned at the base-level should be task-specific. The meta-learner should therefore generalize across tasks, whereas the base-learner should generalize across instances.
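As a concrete illustration (a sketch, not the paper's code), the area-under-the-curve meta-loss is simply the sum of objective values along the trajectory of iterates:

def meta_loss(f, iterates):
  """Area-under-the-curve meta-loss: the sum of objective values over time.

  Penalizes slow descent, since every iterate's objective value, not just
  the final one, contributes to the total.
  """
  return sum(f(x) for x in iterates)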
# 3.2. RL Preliminaries

The goal of reinforcement learning is to learn to interact with an environment in a way that minimizes cumulative costs that are expected to be incurred over time. The environment is formalized as a partially observable Markov decision process (POMDP)¹, which is defined by the tuple $(\mathcal{S}, \mathcal{O}, \mathcal{A}, p_i, p, p_o, c, T)$, where $\mathcal{S} \subseteq \mathbb{R}^{D}$ is the set of states, $\mathcal{O} \subseteq \mathbb{R}^{D'}$ is the set of observations, $\mathcal{A} \subseteq \mathbb{R}^{d}$ is the set of actions, $p_i(s_0)$ is the probability density over initial states $s_0$, $p(s_{t+1} \mid s_t, a_t)$ is the probability density over the subsequent state $s_{t+1}$ given the current state $s_t$ and action $a_t$, $p_o(o_t \mid s_t)$ is the probability density over the current observation $o_t$ given the current state $s_t$, $c : \mathcal{S} \to \mathbb{R}$ is a function that assigns a cost to each state and $T$ is the time horizon. Often, the probability densities $p$ and $p_o$ are unknown and not given to the learning algorithm.
A policy $\pi(a_t \mid o_t, t)$ is a conditional probability density over actions $a_t$ given the current observation $o_t$ and time step $t$. When a policy is independent of $t$, it is known as a stationary policy. The goal of the reinforcement learning algorithm is to learn a policy $\pi^*$ that minimizes the total expected cost over time. More precisely,
$$\pi^* = \operatorname*{arg\,min}_{\pi} \; \mathbb{E}_{s_0, a_0, s_1, \ldots, s_T}\left[\sum_{t=0}^{T} c(s_t)\right],$$
where the expectation is taken with respect to the joint dis- tribution over the sequence of states and actions, often re- ferred to as a trajectory, which has the density
$$p_i(s_0)\, p_o(o_0 \mid s_0) \prod_{t=0}^{T-1} \pi(a_t \mid o_t, t)\, p(s_{t+1} \mid s_t, a_t)\, p_o(o_{t+1} \mid s_{t+1}).$$
¹What is described is an undiscounted finite-horizon POMDP with continuous state, observation and action spaces.
To make learning tractable, $\pi$ is often constrained to lie in a parameterized family. A common assumption is that $\pi(a_t \mid o_t, t) = \mathcal{N}(\mu^{\pi}(o_t), \Sigma^{\pi}(o_t))$, where $\mathcal{N}(\mu, \Sigma)$ denotes the density of a Gaussian with mean $\mu$ and covariance $\Sigma$. The functions $\mu^{\pi}(\cdot)$ and possibly $\Sigma^{\pi}(\cdot)$ are modelled using function approximators, whose parameters are learned.
optimization is challenging. In each iteration, it performs policy optimization on $\psi$, and uses the resulting policy as supervision to train $\pi$.

More precisely, GPS solves the following constrained optimization problem:
$$\min_{\theta, \eta} \; \mathbb{E}_{\psi}\left[\sum_{t=0}^{T} c(s_t)\right] \quad \text{s.t.} \quad \psi(a_t \mid s_t, t; \eta) = \pi(a_t \mid s_t; \theta) \quad \forall a_t, s_t, t$$
# 3.3. Formulation
In our setting, the state $s_t$ consists of the current iterate $x^{(t)}$ and features $\Phi(\cdot)$ that depend on the history of iterates $x^{(1)}, \ldots, x^{(t)}$, (noisy) gradients $\nabla \hat{f}(x^{(1)}), \ldots, \nabla \hat{f}(x^{(t)})$ and (noisy) objective values $\hat{f}(x^{(1)}), \ldots, \hat{f}(x^{(t)})$. The action $a_t$ is the step $\Delta x$ that will be used to update the iterate. The observation $o_t$ excludes $x^{(t)}$ and consists of features $\Psi(\cdot)$ that depend on the iterates, gradients and objective values from recent iterations, and the previous memory state of the learned optimization algorithm, which takes the form of a recurrent neural net. This memory state can be viewed as a statistic of the previous observations that is learned jointly with the policy.
where $\eta$ and $\theta$ denote the parameters of $\psi$ and $\pi$ respectively, $\mathbb{E}_{\rho}[\cdot]$ denotes the expectation taken with respect to the trajectory induced by a policy $\rho$, and $\pi(a_t \mid s_t; \theta) = \int_{o_t} \pi(a_t \mid o_t; \theta)\, p_o(o_t \mid s_t)\, do_t$².
Since there are an infinite number of equality constraints, the problem is relaxed by enforcing equality on the mean actions taken by $\psi$ and $\pi$ at every time step³. So, the problem becomes:
$$\min_{\theta, \eta} \; \mathbb{E}_{\psi}\left[\sum_{t=0}^{T} c(s_t)\right] \quad \text{s.t.} \quad \mathbb{E}_{\psi}\left[a_t\right] = \mathbb{E}_{\psi}\left[\mathbb{E}_{\pi}\left[a_t \mid s_t\right]\right] \quad \forall t$$
Under this formulation, the initial probability density $p_i$ captures how the initial iterate, gradient and objective value tend to be distributed. The transition probability density $p$ captures how the gradient and objective value are likely to change given the step that is taken currently; in other words, it encodes the local geometry of the training objective functions. Assuming the goal is to learn an optimization algorithm that minimizes the objective function, the cost $c$ of a state $s_t = (x^{(t)}, \Phi(\cdot))^{T}$ is simply the true objective value $f(x^{(t)})$.
This problem is solved using Bregman ADMM (Wang & Banerjee, 2014), which performs the following updates in each iteration:
$$\eta \leftarrow \operatorname*{arg\,min}_{\eta} \; \sum_{t=0}^{T} \mathbb{E}_{\psi}\left[c(s_t) - \lambda_t^T a_t\right] + \nu_t D_t(\eta, \theta)$$
$$\theta \leftarrow \operatorname*{arg\,min}_{\theta} \; \sum_{t=0}^{T} \lambda_t^T\, \mathbb{E}_{\psi}\left[\mathbb{E}_{\pi}\left[a_t \mid s_t\right]\right] + \nu_t D_t(\theta, \eta)$$
$$\lambda_t \leftarrow \lambda_t + \alpha \nu_t \left(\mathbb{E}_{\psi}\left[\mathbb{E}_{\pi}\left[a_t \mid s_t\right]\right] - \mathbb{E}_{\psi}\left[a_t\right]\right) \quad \forall t,$$
where $D_t(\theta, \eta) = \mathbb{E}_{\psi}\left[D_{KL}\left(\pi(a_t \mid s_t; \theta) \,\|\, \psi(a_t \mid s_t, t; \eta)\right)\right]$ and $D_t(\eta, \theta) = \mathbb{E}_{\psi}\left[D_{KL}\left(\psi(a_t \mid s_t, t; \eta) \,\|\, \pi(a_t \mid s_t; \theta)\right)\right]$.
Any particular policy $\pi(a_t \mid o_t, t)$, which generates $a_t = \Delta x$ at every time step, corresponds to a particular (noisy) update formula $\pi$, and therefore a particular (noisy) optimization algorithm. Therefore, learning an optimization algorithm simply reduces to searching for the optimal policy.
GPS assumes that $\psi(a_t \mid s_t, t; \eta) = \mathcal{N}(K_t s_t + k_t, G_t)$, where $\eta := (K_t, k_t, G_t)_{t=1}^{T}$, and that $\pi(a_t \mid o_t; \theta) = \mathcal{N}(\mu_{\omega}^{\pi}(o_t), \Sigma^{\pi})$, where $\theta := (\omega, \Sigma^{\pi})$ and $\mu_{\omega}^{\pi}(\cdot)$ can be an arbitrary function that is typically modelled using a nonlinear function approximator like a neural net.
The mean of the policy is modelled as a recurrent neural net fragment that corresponds to a single time step, which takes the observation features $\Psi(\cdot)$ and the previous memory state as input and outputs the step to take.
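Conceptually, one step of such a policy might look like the sketch below, which substitutes a vanilla RNN cell for the LSTM used later in the paper; the parameter names are assumptions for illustration only:

import numpy as np

def policy_step(obs_features, memory_state, params):
  """One time step of the learned optimizer's policy (conceptual sketch).

  A single recurrent cell consumes the observation features and the previous
  memory state, and outputs both the step to take (the mean action) and the
  updated memory state.
  """
  hidden = np.tanh(params['W_h'] @ memory_state +
                   params['W_x'] @ obs_features + params['b_h'])
  step = params['W_out'] @ hidden + params['b_out']
  return step, hidden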
# 3.4. Guided Policy Search
The reinforcement learning method we use is guided policy search (GPS) (Levine et al., 2015), which is a policy search method designed for searching over large classes of expressive non-linear policies in continuous state and action spaces. It maintains two policies, $\psi$ and $\pi$, where the former lies in a time-varying linear policy class in which the optimal policy can be found in closed form, and the latter lies in a stationary non-linear policy class in which policy
At each iteration, the algorithm constructs a model of the transition probability density $\hat{p}(s_{t+1} \mid s_t, a_t, t; \zeta) = \mathcal{N}(A_t s_t + B_t a_t + c_t, F_t)$, where $\zeta := (A_t, B_t, c_t, F_t)_{t=1}^{T}$ is fitted to samples of $s_t$ drawn from the trajectory induced by $\psi$, which essentially amounts to a local linearization of the true transition probability $p(s_{t+1} \mid s_t, a_t, t)$. We will use $\mathbb{E}_{\hat{\psi}}[\cdot]$ to denote expectation taken with respect to the trajectory induced by $\psi$ under
²In practice, the explicit form of the observation probability $p_o$ is usually not known or the integral may be intractable to compute. So, a linear Gaussian model is fitted to samples of $s_t$ and $a_t$ and used in place of the true $\pi(a_t \mid s_t; \theta)$ where necessary.
3Though the Bregman divergence penalty is applied to the original probability distributions over at.
the modelled transition probability $\hat{p}$. Additionally, the algorithm fits local quadratic approximations to $c(s_t)$ around samples of $s_t$ drawn from the trajectory induced by $\psi$, so that $c(s_t) \approx \hat{c}(s_t) := \frac{1}{2} s_t^T C_t s_t + h_t^T s_t + d_t$ for $s_t$'s that are near the samples.
spaces. For example, in the case of GPS, because the run- ning time of LQG is cubic in dimensionality of the state space, performing policy search even in the simple class of linear-Gaussian policies would be prohibitively expen- sive when the dimensionality of the optimization problem is high.
With these assumptions, the subproblem that needs to be solved to update $\eta = (K_t, k_t, G_t)_{t=1}^{T}$ is:

$$\min_{\eta} \; \sum_{t=0}^{T} \mathbb{E}_{\hat{\psi}}\left[\hat{c}(s_t) - \lambda_t^T a_t\right] + \nu_t D_t(\eta, \theta) \quad \text{s.t.} \quad \sum_{t=0}^{T} \mathbb{E}_{\hat{\psi}}\left[D_{KL}\left(\psi(a_t \mid s_t, t; \eta) \,\|\, \psi(a_t \mid s_t, t; \eta')\right)\right] \leq \epsilon,$$

where $\eta'$ denotes the old $\eta$ from the previous iteration. Because $\hat{p}$ and $\hat{c}$ are only valid locally around the trajectory induced by $\psi$, the constraint is added to limit the amount by which $\eta$ is updated. It turns out that the unconstrained problem can be solved in closed form using a dynamic programming algorithm known as the linear-quadratic-Gaussian (LQG) regulator in time linear in the time horizon $T$ and cubic in the dimensionality of the state space $D$. The constrained problem is solved using dual gradient descent, which uses LQG as a subroutine to solve for the primal variables in each iteration and increments the dual variable on the constraint until it is satisfied.
Updating $\theta$ is straightforward, since expectations taken with respect to the trajectory induced by $\pi$ are always conditioned on $s_t$ and all outer expectations over $s_t$ are taken with respect to the trajectory induced by $\psi$. Therefore, $\pi$ is essentially decoupled from the transition probability $p(s_{t+1} \mid s_t, a_t, t)$ and so its parameters can be updated without affecting the distribution of $s_t$'s. The subproblem that needs to be solved to update $\theta$ therefore amounts to a standard supervised learning problem.
Since $\psi(a_t \mid s_t, t; \eta)$ and $\pi(a_t \mid s_t; \theta)$ are Gaussian, $D_t(\theta, \eta)$ can be computed analytically. More concretely, if we assume $\Sigma^{\pi}$ to be fixed for simplicity, the subproblem that is solved for updating $\theta = (\omega, \Sigma^{\pi})$ is:
$$\min_{\omega} \; \sum_{t=0}^{T} \mathbb{E}_{\psi}\Bigg[ -\lambda_t^T \mu_{\omega}^{\pi}(o_t) + \frac{\nu_t}{2}\left(\operatorname{tr}\left(G_t^{-1} \Sigma^{\pi}\right) - \log\left|\Sigma^{\pi}\right|\right) + \frac{\nu_t}{2}\left(\mu_{\omega}^{\pi}(o_t) - \mathbb{E}_{\psi}\left[a_t \mid s_t\right]\right)^T G_t^{-1} \left(\mu_{\omega}^{\pi}(o_t) - \mathbb{E}_{\psi}\left[a_t \mid s_t\right]\right) \Bigg]$$
Note that the last term is the squared Mahalanobis distance between the mean actions of $\pi$ and $\psi$ at time step $t$, which is intuitive as we would like to encourage $\pi$ to match $\psi$.
Fortunately, many high-dimensional optimization problems have underlying structure that can be exploited. For example, the parameters of neural nets are equivalent up to permutation among certain coordinates. More concretely, for fully connected neural nets, the dimensions of a hidden layer and the corresponding weights can be permuted arbitrarily without changing the function they compute. Because permuting the dimensions of two adjacent layers can permute the weight matrix arbitrarily, an optimization algorithm should be invariant to permutations of the rows and columns of a weight matrix. A reasonable prior to impose is that the algorithm should behave in the same manner on all coordinates that correspond to entries in the same matrix. That is, if the values of two coordinates in all current and past gradients and iterates are identical, then the step vector produced by the algorithm should have identical values in these two coordinates. We will refer to the set of coordinates on which permutation invariance is enforced as a coordinate group. For the purposes of learning an optimization algorithm for neural nets, a natural choice would be to make each coordinate group correspond to a weight matrix or a bias vector. Hence, the total number of coordinate groups is twice the number of layers, which is usually fairly small.
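For instance, the coordinate groups of a fully connected net could be enumerated as in the sketch below (illustrative only; the paper does not give this code):

def coordinate_groups(layer_shapes):
  """Assigns each parameter coordinate of a fully connected net to a group.

  One group per weight matrix and one per bias vector, so the total number
  of groups is twice the number of layers. All coordinates in a group share
  the learned policy's parameters, enforcing permutation invariance.
  """
  groups = {}
  offset = 0
  for layer, (n_in, n_out) in enumerate(layer_shapes):
    groups['W%d' % layer] = range(offset, offset + n_in * n_out)
    offset += n_in * n_out
    groups['b%d' % layer] = range(offset, offset + n_out)
    offset += n_out
  return groups

# Example: a 48-48-10 net has 4 coordinate groups (2 matrices, 2 biases).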
In the case of GPS, we impose this prior on both $\psi$ and $\pi$. For the purposes of updating $\eta$, we first impose a block-diagonal structure on the parameters $A_t$, $B_t$ and $F_t$ of the fitted transition probability density $\hat{p}(s_{t+1} \mid s_t, a_t, t; \zeta) = \mathcal{N}(A_t s_t + B_t a_t + c_t, F_t)$, so that for each coordinate in the optimization problem, the dimensions of $s_{t+1}$ that correspond to the coordinate only depend on the dimensions of $s_t$ and $a_t$ that correspond to the same coordinate. As a result, $\hat{p}(s_{t+1} \mid s_t, a_t, t; \zeta)$ decomposes into multiple independent probability densities $\hat{p}^{j}(s_{t+1}^{j} \mid s_t^{j}, a_t^{j}, t; \zeta^{j})$, one for each coordinate $j$. Similarly, we also impose a block-diagonal structure on $C_t$ for fitting $\hat{c}(s_t)$ and on the parameter matrix of the fitted model for $\pi(a_t \mid s_t; \theta)$. Under these assumptions, $K_t$ and $G_t$ are guaranteed to be block-diagonal as well. Hence, the Bregman divergence penalty term $D_t(\eta, \theta)$ decomposes into a sum of Bregman divergence terms, one for each coordinate.
# 3.5. Convolutional GPS
The problem of learning high-dimensional optimization al- gorithms presents challenges for reinforcement learning al- gorithms due to high dimensionality of the state and action
We then further constrain dual variables $\lambda_t$, sub-vectors of parameter vectors and sub-matrices of parameter matrices corresponding to each coordinate group to be identical across the group. Additionally, we replace the weight $\nu_t$ on $D_t(\eta, \theta)$ with an individual weight on each Bregman
Figure 1. Comparison of the various hand-engineered and learned algorithms on training neural nets with 48 input and hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
divergence term for each coordinate group. The problem then decomposes into multiple independent subproblems, one for each coordinate group. Because the dimensionality of the state subspace corresponding to each coordinate is constant, LQG can be executed on each subproblem much more efficiently.
Similarly, for $\pi$, we choose a $\mu_{\omega}^{\pi}(\cdot)$ that shares parameters across different coordinates in the same group. We also impose a block-diagonal structure on $\Sigma^{\pi}$ and constrain the appropriate sub-matrices to share their entries.
# 3.6. Features
We describe the features $\Phi(\cdot)$ and $\Psi(\cdot)$ at time step $t$, which define the state $s_t$ and observation $o_t$ respectively.
Because of the stochasticity of gradients and objective values, the state features $\Phi(\cdot)$ are defined in terms of summary statistics of the history of iterates $\{x^{(i)}\}_{i=0}^{t}$, gradients $\{\nabla\hat{f}(x^{(i)})\}_{i=0}^{t}$ and objective values $\{\hat{f}(x^{(i)})\}_{i=0}^{t}$. We define the following statistics, which we will refer to as the average recent iterate, gradient and objective value respectively (a short code sketch of this averaging appears at the end of this subsection):

$$\bar{x}^{(i)} = \frac{1}{\min(i+1, 3)} \sum_{j=\max(i-2, 0)}^{i} x^{(j)}$$
$$\overline{\nabla f}(x^{(i)}) = \frac{1}{\min(i+1, 3)} \sum_{j=\max(i-2, 0)}^{i} \nabla\hat{f}(x^{(j)})$$
$$\bar{f}(x^{(i)}) = \frac{1}{\min(i+1, 3)} \sum_{j=\max(i-2, 0)}^{i} \hat{f}(x^{(j)})$$

The state features $\Phi(\cdot)$ consist of the relative change in the average recent objective value, the average recent gradient normalized by the magnitude of a previous average recent gradient, and a previous change in average recent iterate relative to the current change in average recent iterate:

$$\left\{ \frac{\bar{f}(x^{(t-5i)}) - \bar{f}(x^{(t-5(i+1))})}{\bar{f}(x^{(t-5(i+1))})} \right\}_{i=0}^{24}$$
$$\left\{ \overline{\nabla f}(x^{(t-5i)}) \Big/ \left( \left| \overline{\nabla f}\left(x^{(\max(t-5(i+1),\, t \bmod 5))}\right) \right| + 1 \right) \right\}_{i=0}^{24}$$
$$\left\{ \left[ \frac{\bar{x}^{(\max(t-5(i+1),\, t \bmod 5 + 5))} - \bar{x}^{(\max(t-5(i+2),\, t \bmod 5))}}{\bar{x}^{(t-5i)} - \bar{x}^{(t-5(i+1))}} \right]_{\pm 0.1} \right\}_{i=0}^{24}$$

Note that all operations are applied element-wise. Also, whenever a feature becomes undefined (i.e.: when the time step index becomes negative), it is replaced with the all-zeros vector.

Unlike state features, which are only used when training the optimization algorithm, observation features $\Psi(\cdot)$ are used both during training and at test time. Consequently, we use noisier observation features that can be computed more efficiently and require less memory overhead. The observation features consist of the following:

$$\frac{\hat{f}(x^{(t)}) - \hat{f}(x^{(t-1)})}{\hat{f}(x^{(t-1)})}$$
$$\nabla\hat{f}(x^{(t)}) \Big/ \left( \left| \nabla\hat{f}\left(x^{(\max(t-1, 0))}\right) \right| + 1 \right)$$
$$\left[ \frac{x^{(\max(t-2, 1))} - x^{(\max(t-2, 0))}}{x^{(t)} - x^{(t-1)}} \right]_{\pm 0.1}$$
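The shared form of the three averaging statistics above can be computed as in this sketch (not the paper's code):

import numpy as np

def average_recent(history, i):
  """Mean of the last (up to) three entries of `history`, ending at index i.

  Depending on whether `history` stores iterates, gradients or objective
  values, this yields the average recent iterate, gradient or objective
  value respectively.
  """
  lo = max(i - 2, 0)
  return np.mean(history[lo:i + 1], axis=0)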
# 4. Experiments
For clarity, we will refer to training of the optimization algorithm as âmeta-trainingâ to differentiate it from base- level training, which will simply be referred to as âtrain- ingâ.
We meta-trained an optimization algorithm on a single objective function, which corresponds to the problem of training a two-layer neural net with 48 input units, 48 hidden units and 10 output units on a randomly projected and normalized version of the MNIST training set with dimensionality 48 and unit variance in each dimension. We modelled the optimization algorithm using a recurrent neural net
Figure 2. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
Figure 3. Comparison of the various hand-engineered and learned algorithms on training neural nets with 48 input and hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 10. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
with a single layer of 128 LSTM (Hochreiter & Schmid- huber, 1997) cells. We used a time horizon of 400 itera- tions and a mini-batch size of 64 for computing stochas- tic gradients and objective values. We evaluate the opti- mization algorithm on its ability to generalize to unseen objective functions, which correspond to the problems of training neural nets on different tasks/datasets. We evalu- ate the learned optimization algorithm on three datasets, the Toronto Faces Dataset (TFD), CIFAR-10 and CIFAR-100. These datasets are chosen for their very different character- istics from MNIST and each other: TFD contains 3300 grayscale images that have relatively little variation and has seven different categories, whereas CIFAR-100 con- tains 50,000 colour images that have varied appearance and has 100 different categories.
All algorithms are tuned on the training objective function. For hand-engineered algorithms, this entails choosing the best hyperparameters; for learned algorithms, this entails meta-training on the objective function. We compare to the seven hand-engineered algorithms: stochastic gradient de- scent, momentum, conjugate gradient, L-BFGS, ADAM, AdaGrad and RMSprop. In addition, we compare to an optimization algorithm meta-trained using the method de-
scribed in (Andrychowicz et al., 2016) on the same train- ing objective function (training two-layer neural net on ran- domly projected and normalized MNIST) under the same setting (a time horizon of 400 iterations and a mini-batch size of 64).
First, we examine the performance of various optimization algorithms on similar objective functions. The optimiza- tion problems under consideration are those for training neural nets that have the same number of input and hidden units (48 and 48) as those used during meta-training. The number of output units varies with the number of categories in each dataset. We use the same mini-batch size as that used during meta-training. As shown in Figure 1, the opti- mization algorithm meta-trained using our method (which we will refer to as Predicted Step Descent) consistently de- scends to the optimum the fastest across all datasets. On the other hand, other algorithms are not as consistent and the relative ranking of other algorithms varies by dataset. This suggests that Predicted Step Descent has learned to be robust to variations in the data distributions, despite be- ing trained on only one objective function, which is associ- ated with a very speciï¬c data distribution that character- izes MNIST. It is also interesting to note that while the
Figure 4. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 10. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
Figure 5. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 for 800 iterations with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
algorithm meta-trained using (Andrychowicz et al., 2016) (which we will refer to as L2LBGDBGD) performs well on CIFAR, it is unable to reach the optimum on TFD.
Next, we change the architecture of the neural nets and see if Predicted Step Descent generalizes to the new architec- ture. We increase the number of input units to 100 and the number of hidden units to 200, so that the number of pa- rameters is roughly increased by a factor of 8. As shown in Figure 2, Predicted Step Descent consistently outperforms other algorithms on each dataset, despite having not been trained to optimize neural nets of this architecture. Interest- ingly, while it exhibited a bit of oscillation initially on TFD and CIFAR-10, it quickly recovered and overtook other al- gorithms, which is reminiscent of the phenomenon reported in (Li & Malik, 2016) for low-dimensional optimization problems. This suggests that it has learned to detect when it is performing poorly and knows how to change tack ac- cordingly. L2LBGDBGD experienced difï¬culties on TFD and CIFAR-10 as well, but slowly diverged.
from 64 to 10 on both the original architecture with 48 in- put and hidden units and the enlarged architecture with 100 input units and 200 hidden units. As shown in Figure 3, on the original architecture, Predicted Step Descent still out- performs all other algorithms and is able to handle the in- creased stochasticity fairly well. In contrast, conjugate gra- dient and L2LBGDBGD had some difï¬culty handling the increased stochasticity on TFD and to a lesser extent, on CIFAR-10. In the former case, both diverged; in the latter case, both were progressing slowly towards the optimum.
On the enlarged architecture, Predicted Step Descent expe- rienced some signiï¬cant oscillations on TFD and CIFAR- 10, but still managed to achieve a much better objective value than all the other algorithms. Many hand-engineered algorithms also experienced much greater oscillations than previously, suggesting that the optimization problems are inherently harder. L2LBGDBGD diverged fairly quickly on these two datasets.
We now investigate how robust Predicted Step Descent is to stochasticity of the gradients. To this end, we take a look at its performance when we reduce the mini-batch size
Finally, we try doubling the number of iterations. As shown in Figure 5, despite being trained over a time horizon of 400 iterations, Predicted Step Descent behaves reasonably beyond the number of iterations it is trained for.
# 5. Conclusion
In this paper, we presented a new method for learning opti- mization algorithms for high-dimensional stochastic prob- lems. We applied the method to learning an optimization algorithm for training shallow neural nets. We showed that the algorithm learned using our method on the problem of training a neural net on MNIST generalizes to the prob- lems of training neural nets on unrelated tasks/datasets like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. We also demonstrated that the learned optimization algorithm is robust to changes in the stochasticity of gradients and the neural net architecture.
Brazdil, Pavel B., Soares, Carlos, and Da Costa, Joaquim Pinto. Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results. Machine Learning, 50(3):251–277, 2003.
Daniel, Christian, Taylor, Jonathan, and Nowozin, Sebas- tian. Learning step size controllers for robust neural net- work training. In Thirtieth AAAI Conference on Artiï¬cial Intelligence, 2016.
Domke, Justin. Generic methods for optimization-based modeling. In AISTATS, volume 22, pp. 318â326, 2012.
# References
Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.
Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121â2159, 2011.
Feurer, Matthias, Springenberg, Jost Tobias, and Hutter, Initializing bayesian hyperparameter optimiza- Frank. tion via meta-learning. In AAAI, pp. 1128â1135, 2015.
Baxter, Jonathan, Caruana, Rich, Mitchell, Tom, Pratt, Lorien Y, Silver, Daniel L, and Thrun, Sebastian. NIPS 1995 workshop on learning to learn: Knowledge con- solidation and transfer in inductive systems. https: //web.archive.org/web/20000618135816/ http://www.cs.cmu.edu/afs/cs.cmu.edu/ user/caruana/pub/transfer.html, 1995. Accessed: 2015-12-05.
Fu, Jie, Lin, Zichuan, Liu, Miao, Leonard, Nicholas, Feng, Jiashi, and Chua, Tat-Seng. Deep q-networks for acceler- ating the training of deep neural networks. arXiv preprint arXiv:1606.01467, 2016.
Gregor, Karol and LeCun, Yann. Learning fast approxima- tions of sparse coding. In Proceedings of the 27th Inter- national Conference on Machine Learning (ICML-10), pp. 399â406, 2010.
Bengio, Y, Bengio, S, and Cloutier, J. Learning a synaptic In Neural Networks, 1991., IJCNN-91- learning rule. Seattle International Joint Conference on, volume 2, pp. 969âvol. IEEE, 1991.
Hansen, Samantha. Using deep q-learning to con- arXiv preprint trol optimization hyperparameters. arXiv:1602.04062, 2016.
Bengio, Yoshua. Gradient-based optimization of hyperpa- rameters. Neural computation, 12(8):1889â1900, 2000.
Hochreiter, Sepp and Schmidhuber, J¨urgen. Long short- term memory. Neural computation, 9(8):1735â1780, 1997.
Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281â305, 2012.
Bergstra, James S, Bardenet, R´emi, Bengio, Yoshua, and K´egl, Bal´azs. Algorithms for hyper-parameter optimiza- tion. In Advances in Neural Information Processing Sys- tems, pp. 2546â2554, 2011.
Bray, M, Koller-Meier, E, Muller, P, Van Gool, L, and Schraudolph, NN. 3D hand tracking by rapid stochas- In Visual tic gradient descent using a skinning model. Media Production, 2004.(CVMP). 1st European Confer- ence on, pp. 59â68. IET, 2004.
Hochreiter, Sepp, Younger, A Steven, and Conwell, Pe- ter R. Learning to learn using gradient descent. In Inter- national Conference on Artiï¬cial Neural Networks, pp. 87â94. Springer, 2001.
Hutter, Frank, Hoos, Holger H, and Leyton-Brown, Kevin. Sequential model-based optimization for general algo- In Learning and Intelligent Opti- rithm conï¬guration. mization, pp. 507â523. Springer, 2011.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Brazdil, Pavel, Carrier, Christophe Giraud, Soares, Carlos, and Vilalta, Ricardo. Metalearning: applications to data mining. Springer Science & Business Media, 2008.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
Li, Ke and Malik, Jitendra. Learning to optimize. CoRR, abs/1606.01885, 2016.
Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan P. Gradient-based hyperparameter optimization through re- arXiv preprint arXiv:1502.03492, versible learning. 2015.
Ruvolo, Paul L, Fasel, Ian, and Movellan, Javier R. Op- timization on a budget: A reinforcement learning ap- proach. In Advances in Neural Information Processing Systems, pp. 1385â1392, 2009.
Schmidhuber, J¨urgen. Optimal ordered problem solver. Machine Learning, 54(3):211â254, 2004.
Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical bayesian optimization of machine learning al- gorithms. In Advances in neural information processing systems, pp. 2951â2959, 2012.
Sprechmann, Pablo, Litman, Roee, Yakar, Tal Ben, Bron- stein, Alexander M, and Sapiro, Guillermo. Supervised sparse analysis and synthesis operators. In Advances in Neural Information Processing Systems, pp. 908â916, 2013.
Swersky, Kevin, Snoek, Jasper, and Adams, Ryan P. Multi- task bayesian optimization. In Advances in neural infor- mation processing systems, pp. 2004â2012, 2013.
Thrun, Sebastian and Pratt, Lorien. Learning to learn. Springer Science & Business Media, 2012.
Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5- rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 2012.
Vilalta, Ricardo and Drissi, Youssef. A perspective view and survey of meta-learning. Artiï¬cial Intelligence Re- view, 18(2):77â95, 2002.
Wang, Huahua and Banerjee, Arindam. Bregman alternating direction method of multipliers. CoRR, abs/1306.3203, 2014.
"id": "1606.01467"
} |
1702.08734 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 |
# Billion-scale similarity search with GPUs
Jeff Johnson Facebook AI Research New York
Matthijs Douze Facebook AI Research Paris
Hervé Jégou Facebook AI Research Paris
ABSTRACT Similarity search finds application in specialized database systems handling complex data such as images or videos, which are typically represented by high-dimensional features and require specific indexing structures. This paper tackles the problem of better utilizing GPUs for this task. While GPUs excel at data-parallel tasks, prior approaches are bottlenecked by algorithms that expose less parallelism, such as k-min selection, or make poor use of the memory hierarchy. We propose a design for k-selection that operates at up to 55% of theoretical peak performance, enabling a nearest neighbor implementation that is 8.5× faster than prior GPU state of the art. We apply it in different similarity search scenarios, by proposing optimized design for brute-force, approximate and compressed-domain search based on product quantization. In all these setups, we outperform the state of the art by large margins. Our implementation enables the construction of a high accuracy k-NN graph on 95 million images from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced our approach¹ for the sake of comparison and reproducibility.
as the underlying processes either have high arithmetic complexity and/or high data bandwidth demands [28], or cannot be effectively partitioned without failing due to communication overhead or representation quality [38]. Once produced, their manipulation is itself arithmetically intensive. However, how to utilize GPU assets is not straightforward. More generally, how to exploit new heterogeneous architectures is a key subject for the database community [9].
In this context, searching by numerical similarity rather than via structured relations is more suitable. This could be to find the most similar content to a picture, or to find the vectors that have the highest response to a linear classifier on all vectors of a collection.
One of the most expensive operations to be performed on large collections is to compute a k-NN graph. It is a directed graph where each vector of the database is a node and each edge connects a node to its k nearest neighbors. This is our flagship application. Note, state of the art methods like NN-Descent [15] have a large memory overhead on top of the dataset itself and cannot readily scale to the billion-sized databases we consider.
# INTRODUCTION
Images and videos constitute a new massive source of data for indexing and search. Extensive metadata for this content is often not available. Search and interpretation of this and other human-generated content, like text, is difficult and important. A variety of machine learning and deep learning algorithms are being used to interpret and classify these complex, real-world entities. Popular examples include the text representation known as word2vec [32], representations of images by convolutional neural networks [39, 19], and image descriptors for instance search [20]. Such representations or embeddings are usually real-valued, high-dimensional vectors of 50 to 1000+ dimensions. Many of these vector representations can only effectively be produced on GPU systems,
1https://github.com/facebookresearch/faiss
Such applications must deal with the curse of dimensionality [46], rendering both exhaustive search and exact indexing for non-exhaustive search impractical on billion-scale databases. This is why there is a large body of work on approximate search and/or graph construction. To handle huge datasets that do not fit in RAM, several approaches employ an internal compressed representation of the vectors using an encoding. This is especially convenient for memory-limited devices like GPUs. It turns out that accepting a minimal accuracy loss results in orders of magnitude of compression [21]. The most popular vector compression methods can be classified into either binary codes [18, 22], or quantization methods [25, 37]. Both have the desirable property that searching neighbors does not require reconstructing the vectors.
Our paper focuses on methods based on product quantization (PQ) codes, as these were shown to be more effective than binary codes [34]. In addition, binary codes incur important overheads for non-exhaustive search methods [35]. Several improvements were proposed after the original product quantization proposal known as IVFADC [25]; most are difficult to implement efficiently on GPU. For instance, the inverted multi-index [4], useful for high-speed/low-quality operating points, depends on a complicated "multi-sequence" algorithm. The optimized product quantization or OPQ [17] is a linear transformation on the input vectors that improves the accuracy of the product quantization; it can be applied
as a pre-processing. The SIMD-optimized IVFADC implementation from [2] operates only with sub-optimal parameters (few coarse quantization centroids). Many other methods, like LOPQ and the Polysemous codes [27, 16], are too complex to be implemented efficiently on GPUs.
There are many implementations of similarity search on GPUs, but mostly with binary codes [36], small datasets [44], or exhaustive search [14, 40, 41]. To the best of our knowledge, only the work by Wieschollek et al. [47] appears suitable for billion-scale datasets with quantization codes. This is the prior state of the art on GPUs, which we compare against in Section 6.4.
This paper makes the following contributions:
• a GPU k-selection algorithm, operating in fast register memory and flexible enough to be fusable with other kernels, for which we provide a complexity analysis;

• a near-optimal algorithmic layout for exact and approximate k-nearest neighbor search on GPU;

• a range of experiments that show that these improvements outperform previous art by a large margin on mid- to large-scale nearest-neighbor search tasks, in single or multi-GPU configurations.
The paper is organized as follows. Section 2 introduces the context and notation. Section 3 reviews GPU architecture and discusses problems appearing when using it for similarity search. Section 4 introduces one of our main contributions, i.e., our k-selection method for GPUs, while Section 5 provides details regarding the algorithm computation layout. Finally, Section 6 provides extensive experiments for our approach, compares it to the state of the art, and shows concrete use cases for image collections.
# 2. PROBLEM STATEMENT
We are concerned with similarity search in vector collections. Given the query vector x ∈ R^d and the collection [y_i]_{i=0:ℓ}² (y_i ∈ R^d), we search:

L = k-argmin_{i=0:ℓ} ||x − y_i||_2,   (1)
i.e., we search the k nearest neighbors of x in terms of L2 distance. The L2 distance is used most often, as it is optimized by design when learning several embeddings (e.g., [20]), due to its attractive linear algebra properties.
The lowest distances are collected by k-selection. For an array [a_i]_{i=0:ℓ}, k-selection finds the k lowest valued elements [a_{s_i}]_{i=0:k}, a_{s_i} ≤ a_{s_{i+1}}, along with the indices [s_i]_{i=0:k}, 0 ≤ s_i < ℓ, of those elements from the input array. The a_i will be 32-bit floating point values; the s_i are 32- or 64-bit integers. Other comparators are sometimes desired; e.g., for cosine similarity we search for highest values. The order between equivalent keys a_{s_i} = a_{s_j} is not specified.
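For reference, these selection semantics can be stated in a few lines of NumPy. This is a serial sketch of what the GPU k-selection of Section 4 computes; the function name and the batching loop are ours:

```python
import numpy as np

def k_select(a, k):
    # k smallest values of a and their indices, in ascending order.
    # np.argpartition finds the k smallest unordered; a small final
    # sort orders them, mirroring the output stage of Section 4.2.
    idx = np.argpartition(a, k - 1)[:k]
    order = np.argsort(a[idx])
    return a[idx][order], idx[order]

# Batched use: one independent selection per query row.
batch = np.random.rand(100, 10000).astype(np.float32)
values = np.stack([k_select(row, 10)[0] for row in batch])
```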
Batching. Typically, searches are performed in batches of n_q query vectors [x_j]_{j=0:n_q} (x_j ∈ R^d) in parallel, which allows for more flexibility when executing on multiple CPU threads or on GPU. Batching for k-selection entails selecting n_q × k elements and indices from n_q separate arrays, where each array is of a potentially different length ℓ_i ≥ k.
²To avoid clutter in 0-based indexing, we use the array notation 0:ℓ to denote the range {0, ..., ℓ − 1} inclusive.
Exact search. The exact solution computes the full pairwise distance matrix D = [||x_j − y_i||_2^2]_{j=0:n_q, i=0:ℓ} ∈ R^{n_q×ℓ}. In practice, we use the decomposition

||x_j − y_i||_2^2 = ||x_j||^2 + ||y_i||^2 − 2⟨x_j, y_i⟩.   (2)

The two first terms can be precomputed in one pass over the matrices X and Y whose rows are the [x_j] and [y_i]. The bottleneck is to evaluate ⟨x_j, y_i⟩, equivalent to the matrix multiplication XY^⊤. The k-nearest neighbors for each of the n_q queries are k-selected along each row of D.
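A minimal NumPy sketch of this decomposition (a serial reference, not the GPU layout of Section 5.1; the names are ours):

```python
import numpy as np

def exact_knn(X, Y, k):
    # X: (n_q, d) queries, Y: (ell, d) database vectors.
    x2 = (X ** 2).sum(axis=1, keepdims=True)   # ||x_j||^2 terms
    y2 = (Y ** 2).sum(axis=1)                  # ||y_i||^2 terms
    D = x2 + y2 - 2.0 * (X @ Y.T)              # Equation (2); the GEMM dominates
    I = np.argpartition(D, k - 1, axis=1)[:, :k]   # row-wise k-selection
    Dk = np.take_along_axis(D, I, axis=1)
    order = np.argsort(Dk, axis=1)
    return (np.take_along_axis(Dk, order, axis=1),
            np.take_along_axis(I, order, axis=1))
```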
Compressed-domain search. From now on, we focus on approximate nearest-neighbor search. We consider, in particular, the IVFADC indexing structure [25]. The IVFADC index relies on two levels of quantization, and the database vectors are encoded. The database vector y is approximated as:

y ≈ q(y) = q1(y) + q2(y − q1(y))   (3)

where q1 : R^d → C1 ⊂ R^d and q2 : R^d → C2 ⊂ R^d are quantizers; i.e., functions that output an element from a finite set. Since the sets are finite, q(y) is encoded as the index of q1(y) and that of q2(y − q1(y)). The first-level quantizer is a coarse quantizer and the second level fine quantizer encodes the residual vector after the first level.
The Asymmetric Distance Computation (ADC) search method returns an approximate result:
L_ADC = k-argmin_{i=0:ℓ} ||x − q(y_i)||_2.   (4)
For IVFADC the search is not exhaustive. Vectors for which the distance is computed are pre-selected depending on the ï¬rst-level quantizer q1:
L_IVF = τ-argmin_{c ∈ C1} ||x − c||_2.   (5)

The multi-probe parameter τ is the number of coarse-level centroids we consider. The quantizer operates a nearest-neighbor search with exact distances, in the set of reproduction values. Then, the IVFADC search computes
L_IVFADC = k-argmin_{i=0:ℓ s.t. q1(y_i) ∈ L_IVF} ||x − q(y_i)||_2.   (6)
Hence, IVFADC relies on the same distance estimations as the two-step quantization of ADC, but computes them only on a subset of vectors.
The corresponding data structure, the inverted file, groups the vectors y_i into |C1| inverted lists I_1, ..., I_{|C1|} with homogeneous q1(y_i). Therefore, the most memory-intensive operation is computing L_IVFADC, and boils down to linearly scanning τ inverted lists.
The quantizers. The quantizers q1 and q2 have different properties. q1 needs to have a relatively low number of reproduction values so that the number of inverted lists does not explode. We typically use |C1| ≈ √ℓ, trained via k-means. For q2, we can afford to spend more memory for a more extensive representation. The ID of the vector (a 4- or 8-byte integer) is also stored in the inverted lists, so it makes no sense to have shorter codes than that; i.e., log2 |C2| > 4 × 8.
Product quantizer. We use a product quantizer [25] for q2, which provides a large number of reproduction values without increasing the processing cost. It interprets the vector y as b sub-vectors y = [y^0 ... y^{b−1}], where b is an even divisor of the dimension d. Each sub-vector is quantized with its own quantizer, yielding the tuple (q^0(y^0), ..., q^{b−1}(y^{b−1})). The sub-quantizers typically have 256 reproduction values, to fit in one byte. The quantization value of the product quantizer is then q2(y) = q^0(y^0) + 256 × q^1(y^1) + ... + 256^{b−1} × q^{b−1}(y^{b−1}), which from a storage point of view is just the concatenation of the bytes produced by each sub-quantizer. Thus, the product quantizer generates b-byte codes with |C2| = 256^b reproduction values. The k-means dictionaries of the quantizers are small and quantization is computationally cheap.
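To make the encoding concrete, here is a NumPy sketch of a product quantizer with pre-trained sub-codebooks; the codebook layout and function names are our assumptions:

```python
import numpy as np

def pq_encode(y, codebooks):
    # codebooks: (b, 256, d/b) centroids per sub-quantizer, assumed
    # already trained with k-means. One byte per sub-vector.
    b, ksub, dsub = codebooks.shape
    subs = y.reshape(b, dsub)
    codes = np.empty(b, dtype=np.uint8)
    for i in range(b):
        d2 = ((codebooks[i] - subs[i]) ** 2).sum(axis=1)
        codes[i] = np.argmin(d2)       # nearest sub-centroid index
    return codes

def pq_decode(codes, codebooks):
    # Reconstruct q2(y): the concatenation of the chosen sub-centroids.
    return np.concatenate([codebooks[i][c] for i, c in enumerate(codes)])
```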
# 3. GPU: OVERVIEW AND K-SELECTION

This section reviews salient details of Nvidia's general-purpose GPU architecture and programming model [30]. We then focus on one of the less GPU-compliant parts involved in similarity search, namely the k-selection, and discuss the literature and challenges.
# 3.1 Architecture
GPU lanes and warps. The Nvidia GPU is a general-purpose computer that executes instruction streams using a 32-wide vector of CUDA threads (the warp); individual threads in the warp are referred to as lanes, with a lane ID from 0 – 31. Despite the "thread" terminology, the best analogy to modern vectorized multicore CPUs is that each warp is a separate CPU hardware thread, as the warp shares an instruction counter. Warp lanes taking different execution paths results in warp divergence, reducing performance. Each lane has up to 255 32-bit registers in a shared register file. The CPU analogy is that there are up to 255 vector registers of width 32, with warp lanes as SIMD vector lanes.
Collections of warps. A user-configurable collection of 1 to 32 warps comprises a block or a co-operative thread array (CTA). Each block has a high speed shared memory, up to 48 KiB in size. Individual CUDA threads have a block-relative ID, called a thread id, which can be used to partition and assign work. Each block is run on a single core of the GPU called a streaming multiprocessor (SM). Each SM has functional units, including ALUs, memory load/store units, and various special instruction units. A GPU hides execution latencies by having many operations in flight on warps across all SMs. Each individual warp lane instruction throughput is low and latency is high, but the aggregate arithmetic throughput of all SMs together is 5 – 10× higher than typical CPUs.
Grids and kernels. Blocks are organized in a grid of blocks in a kernel. Each block is assigned a grid relative ID. The kernel is the unit of work (instruction stream with argu- ments) scheduled by the host CPU for the GPU to execute. After a block runs through to completion, new blocks can be scheduled. Blocks from diï¬erent kernels can run concur- rently. Ordering between kernels is controllable via ordering primitives such as streams and events.
Resources and occupancy. The number of blocks execut- ing concurrently depends upon shared memory and register resources used by each block. Per-CUDA thread register us- age is determined at compilation time, while shared memory usage can be chosen at runtime. This usage aï¬ects occu- pancy on the GPU. If a block demands all 48 KiB of shared memory for its private usage, or 128 registers per thread as
opposed to 32, then only 1 – 2 other blocks can run concurrently on the same SM, resulting in low occupancy. Under high occupancy more blocks will be present across all SMs, allowing more work to be in flight at once.
Memory types. Different blocks and kernels communicate through global memory, typically 4 – 32 GB in size, with 5 – 10× higher bandwidth than CPU main memory. Shared memory is analogous to CPU L1 cache in terms of speed. GPU register file memory is the highest bandwidth memory. In order to maintain the high number of instructions in flight on a GPU, a vast register file is also required: 14 MB in the latest Pascal P100, in contrast with a few tens of KB on CPU. A ratio of 250 : 6.25 : 1 for register to shared to global memory aggregate cross-sectional bandwidth is typical on GPU, yielding 10 – 100s of TB/s for the register file [10].
# 3.2 GPU register ï¬le usage
Structured register data. Shared and register memory usage involves efficiency tradeoffs; they lower occupancy but can increase overall performance by retaining a larger working set in a faster memory. Making heavy use of register-resident data at the expense of occupancy or instead of shared memory is often profitable [43].
As the GPU register file is very large, storing structured data (not just temporary operands) is useful. A single lane can use its (scalar) registers to solve a local task, but with limited parallelism and storage. Instead, lanes in a GPU warp can exchange register data using the warp shuffle instruction, enabling warp-wide parallelism and storage.
Lane-stride register array. A common pattern to achieve this is a lane-stride register array. That is, given elements [a_i]_{i=0:ℓ}, each successive value is held in a register by neighboring lanes. The array is stored in ℓ/32 registers per lane, with ℓ a multiple of 32. Lane j stores {a_j, a_{32+j}, ..., a_{ℓ−32+j}}, while register r holds {a_{32r}, a_{32r+1}, ..., a_{32r+31}}.

For manipulating the [a_i], the register in which a_i is stored (i.e., ⌊i/32⌋) and ℓ must be known at assembly time, while the lane (i.e., i mod 32) can be runtime knowledge. A wide variety of access patterns (shift, any-to-any) are provided; we use the butterfly permutation extensively.
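The index arithmetic, written out as a trivial helper (ours), fixes the convention used in the rest of the paper:

```python
def lane_stride_location(i):
    # Element a_i lives in register i // 32 (an assembly-time constant
    # on the GPU) of lane i % 32 (which may be runtime knowledge).
    return i % 32, i // 32

assert lane_stride_location(37) == (5, 1)   # a_37: lane 5, register 1
```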
# 3.3 k-selection on CPU versus GPU
k-selection algorithms, often for arbitrarily large ℓ and k, can be translated to a GPU, including radix selection and bucket selection [1], probabilistic selection [33], quickselect, and truncated sorts. Their performance is dominated by multiple passes over the input in global memory. Sometimes for similarity search, the input distances are computed on-the-fly or stored only in small blocks, not in their entirety. The full, explicit array might be too large to fit into any memory, and its size could be unknown at the start of the processing, rendering algorithms that require multiple passes impractical. They suffer from other issues as well. Quickselect requires partitioning on a storage of size O(ℓ), a data-dependent memory movement. This can result in excessive memory transactions, or requiring parallel prefix sums to determine write offsets, with synchronization overhead. Radix selection has no partitioning but multiple passes are still required.
Heap parallelism. In similarity search applications, one is usually interested only in a small number of results, k < 1000 or so. In this regime, selection via max-heap is a typical choice on the CPU, but heaps do not expose much data parallelism (due to serial tree update) and cannot saturate SIMD execution units. The ad-heap [31] takes better advantage of parallelism available in heterogeneous systems, but still attempts to partition serial and parallel work between appropriate execution units. Despite the serial nature of heap update, for small k the CPU can maintain all of its state in the L1 cache with little effort, and L1 cache latency and bandwidth remains a limiting factor. Other similarity search components, like PQ code manipulation, tend to have greater impact on CPU performance [2].
GPU heaps. Heaps can be similarly implemented on a GPU [7]. However, a straightforward GPU heap implemen- tation suï¬ers from high warp divergence and irregular, data- dependent memory movement, since the path taken for each inserted element depends upon other values in the heap.
GPU parallel priority queues [24] improve over the serial heap update by allowing multiple concurrent updates, but they require a potential number of small sorts for each insert and data-dependent memory movement. Moreover, it uses multiple synchronization barriers through kernel launches in diï¬erent streams, plus the additional latency of successive kernel launches and coordination with the CPU host.
Other more novel GPU algorithms are available for small k, namely the selection algorithm in the fgknn library [41]. This is a complex algorithm that may suï¬er from too many synchronization points, greater kernel launch overhead, us- age of slower memories, excessive use of hierarchy, partition- ing and buï¬ering. However, we take inspiration from this particular algorithm through the use of parallel merges as seen in their merge queue structure.
# 4. FAST K-SELECTION ON THE GPU
For any CPU or GPU algorithm, either memory or arith- metic throughput should be the limiting factor as per the rooï¬ine performance model [48]. For input from global mem- ory, k-selection cannot run faster than the time required to scan the input once at peak memory bandwidth. We aim to get as close to this limit as possible. Thus, we wish to per- form a single pass over the input data (from global memory or produced on-the-ï¬y, perhaps fused with a kernel that is generating the data).
We want to keep intermediate state in the fastest memory: the register ï¬le. The major disadvantage of register memory is that the indexing into the register ï¬le must be known at assembly time, which is a strong constraint on the algorithm.
# In-register sorting
We use an in-register sorting primitive as a building block. Sorting networks are commonly used on SIMD architec- tures [13], as they exploit vector parallelism. They are eas- ily implemented on the GPU, and we build sorting networks with lane-stride register arrays.
We use a variant of Batcher's bitonic sorting network [8], which is a set of parallel merges on an array of power-of-2 size. Each merge takes s arrays of length t (s and t a power of 2) to s/2 arrays of length 2t, using log2(t) parallel steps. A bitonic sort applies this merge recursively: to sort an array of length ℓ, merge ℓ arrays of length 1 to ℓ/2 arrays of length 2, to ℓ/4 arrays of length 4, successively to 1 sorted array of length ℓ, leading to ½(log2(ℓ)² + log2(ℓ)) parallel merge steps.
Algorithm 1 Odd-size merging network
function MERGE-ODD([L_i]_{i=0:ℓ_L}, [R_i]_{i=0:ℓ_R})
    parallel for i ← 0 : min(ℓ_L, ℓ_R) do
        ▷ inverted 1st stage; inputs are already sorted
        COMPARE-SWAP(L_{ℓ_L−i−1}, R_i)
    end for
    parallel do
        ▷ if ℓ_L = ℓ_R and a power-of-2, these are equivalent
        MERGE-ODD-CONTINUE([L_i]_{i=0:ℓ_L}, left)
        MERGE-ODD-CONTINUE([R_i]_{i=0:ℓ_R}, right)
    end do
end function

function MERGE-ODD-CONTINUE([x_i]_{i=0:ℓ}, p)
    if ℓ > 1 then
        h ← 2^{⌈log2 ℓ⌉−1}   ▷ largest power-of-2 < ℓ
        parallel for i ← 0 : ℓ − h do
            ▷ implemented with warp shuffle butterfly
            COMPARE-SWAP(x_i, x_{i+h})
        end for
        parallel do
            if p = left then   ▷ left side recursion
                MERGE-ODD-CONTINUE([x_i]_{i=0:ℓ−h}, left)
                MERGE-ODD-CONTINUE([x_i]_{i=ℓ−h:ℓ}, right)
            else   ▷ right side recursion
                MERGE-ODD-CONTINUE([x_i]_{i=0:h}, left)
                MERGE-ODD-CONTINUE([x_i]_{i=h:ℓ}, right)
            end if
        end do
    end if
end function
Odd-size merging and sorting networks. If some input data is already sorted, we can modify the network to avoid merging steps. We may also not have a full power-of-2 set of data, in which case we can eï¬ciently shortcut to deal with the smaller size.
Algorithm 1 is an odd-sized merging network that merges already sorted left and right arrays, each of arbitrary length. While the bitonic network merges bitonic sequences, we start with monotonic sequences: sequences sorted monotonically. A bitonic merge is made monotonic by reversing the ï¬rst comparator stage.
The odd size algorithm is derived by considering arrays to be padded to the next highest power-of-2 size with dummy
Figure 1: Odd-size network merging arrays of sizes 5 and 3. Bullets indicate parallel compare/swap. Dashed lines are elided elements or comparisons.
Figure 2: Overview of WarpSelect. The input val- ues stream in on the left, and the warp queue on the right holds the output result.
elements that are never swapped (the merge is monotonic) and are already properly positioned; any comparisons with dummy elements are elided. A left array is considered to be padded with dummy elements at the start; a right array has them at the end. A merge of two sorted arrays of length ℓ_L and ℓ_R to a sorted array of length ℓ_L + ℓ_R requires ⌈log2(max(ℓ_L, ℓ_R))⌉ + 1 parallel steps.
The compare-swap is implemented using warp shuffles on a lane-stride register array. Swaps with a stride a multiple of 32 occur directly within a lane as the lane holds both elements locally. Swaps of stride ≤ 16 or a non-multiple of 32 occur with warp shuffles. In practice, used array lengths are multiples of 32 as they are held in lane-stride arrays.
Algorithm 2 Odd-size sorting network

function SORT-ODD([x_i]_{i=0:ℓ})
    if ℓ > 1 then
        parallel do
            SORT-ODD([x_i]_{i=0:⌊ℓ/2⌋})
            SORT-ODD([x_i]_{i=⌊ℓ/2⌋:ℓ})
        end do
        MERGE-ODD([x_i]_{i=0:⌊ℓ/2⌋}, [x_i]_{i=⌊ℓ/2⌋:ℓ})
    end if
end function
Algorithm 2 extends the merge to a full sort. Assuming no structure present in the input data, ½(⌈log2(ℓ)⌉² + ⌈log2(ℓ)⌉) parallel steps are required for sorting data of length ℓ.
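A serial Python transcription of Algorithms 1 and 2 may help; it is our sketch, with compare-swaps executed sequentially, whereas on the GPU each stage runs in parallel via warp shuffles:

```python
def compare_swap(x, i, j):
    if x[i] > x[j]:
        x[i], x[j] = x[j], x[i]

def merge_odd_continue(x, lo, hi, side):
    n = hi - lo
    if n > 1:
        h = 1 << ((n - 1).bit_length() - 1)   # largest power-of-2 < n
        for i in range(n - h):
            compare_swap(x, lo + i, lo + i + h)
        if side == "left":                    # left side recursion
            merge_odd_continue(x, lo, hi - h, "left")
            merge_odd_continue(x, hi - h, hi, "right")
        else:                                 # right side recursion
            merge_odd_continue(x, lo, lo + h, "left")
            merge_odd_continue(x, lo + h, hi, "right")

def merge_odd(x, lo, mid, hi):
    # Merge the sorted ranges [lo, mid) and [mid, hi) in place.
    for i in range(min(mid - lo, hi - mid)):  # inverted first stage
        compare_swap(x, mid - i - 1, mid + i)
    merge_odd_continue(x, lo, mid, "left")
    merge_odd_continue(x, mid, hi, "right")

def sort_odd(x, lo, hi):
    if hi - lo > 1:
        mid = lo + (hi - lo) // 2
        sort_odd(x, lo, mid)
        sort_odd(x, mid, hi)
        merge_odd(x, lo, mid, hi)

import random
xs = [random.random() for _ in range(37)]     # odd, non-power-of-2 length
sort_odd(xs, 0, len(xs))
assert xs == sorted(xs)
```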
# 4.2 WarpSelect
Our k-selection implementation, WarpSelect, maintains state entirely in registers, requires only a single pass over data and avoids cross-warp synchronization. It uses MERGE-ODD and SORT-ODD as primitives. Since the register file provides much more storage than shared memory, it supports k ≤ 1024. Each warp is dedicated to k-selection to a single one of the n arrays [a_i]. If n is large enough, a single warp per each [a_i] will result in full GPU occupancy. Large ℓ per warp is handled by recursive decomposition, if ℓ is known in advance.
Overview. Our approach (Algorithm 3 and Figure 2) operates on values, with associated indices carried along (omitted from the description for simplicity). It selects the k least values that come from global memory, or from intermediate value registers if fused into another kernel providing the values. Let [a_i]_{i=0:ℓ} be the sequence provided for selection.
The elements (on the left of Figure 2) are processed in groups of 32, the warp size. Lane j is responsible for processing {a_j, a_{32+j}, ...}; thus, if the elements come from global memory, the reads are contiguous and coalesced into a minimal number of memory transactions.
Data structures. Each lane j maintains a small queue of t elements in registers, called the thread queue [T_i^j]_{i=0:t}, ordered from largest to smallest (T_i^j ≥ T_{i+1}^j). The choice of t is made relative to k, see Section 4.3. The thread queue is a first-level filter for new values coming in. If a new a_{32i+j} is greater than the largest key currently in the queue, T_0^j, it is guaranteed that it won't be in the k smallest final results. The warp shares a lane-stride register array of the k smallest seen elements, [W_i]_{i=0:k}, called the warp queue. It is ordered from smallest to largest (W_i ≤ W_{i+1}); if the requested k is not a multiple of 32, we round it up. This is a second-level data structure that will be used to maintain all of the k smallest warp-wide seen values. The thread and warp queues are initialized to maximum sentinel values, e.g., +∞.
Update. The three invariants maintained are:
• all per-lane T_0^j are not in the min-k;

• all per-lane T_0^j are greater than all warp queue keys W_i;

• all a_i seen so far in the min-k are contained in either some lane's thread queue ([T_i^j]_{i=0:t, j=0:32}), or in the warp queue.
Lane j receives a new a_{32i+j} and attempts to insert it into its thread queue. If a_{32i+j} > T_0^j, then the new pair is by definition not in the k minimum, and can be rejected.

Otherwise, it is inserted into its proper sorted position in the thread queue, thus ejecting the old T_0^j. All lanes complete doing this with their new received pair and their thread queue, but it is now possible that the second invariant has been violated. Using the warp ballot instruction, we determine if any lane has violated the second invariant. If not, we are free to continue processing new elements.
Restoring the invariants. If any lane has its invariant violated, then the warp uses odd-merge to merge and sort the thread and warp queues together. The new warp queue
Algorithm 3 WarpSelect pseudocode for lane j

function WARPSELECT(a)
    if a < T_0^j then
        insert a into our [T_i^j]_{i=0:t}
    end if
    if WARP-BALLOT(T_0^j < W_{k−1}) then
        ▷ reinterpret thread queues as lane-stride array
        [a_i]_{i=0:32t} ← CAST([T_i^j]_{i=0:t, j=0:32})
        ▷ concatenate and sort thread queues
        SORT-ODD([a_i]_{i=0:32t})
        MERGE-ODD([W_i]_{i=0:k}, [a_i]_{i=0:32t})
        ▷ reinterpret lane-stride array as thread queues
        [T_i^j]_{i=0:t, j=0:32} ← CAST([a_i]_{i=0:32t})
        REVERSE-ARRAY([T_i^j]_{i=0:t})
        ▷ back in thread queue order, invariant restored
    end if
end function
will be the min-k elements across the merged, sorted queues, and the new thread queues will be the remainder, from min-(k + 1) to min-(k + 32t + 1). This restores the invariants and we are free to continue processing subsequent elements.
Since the thread and warp queues are already sorted, we merge the sorted warp queue of length k with 32 sorted arrays of length t. Supporting odd-sized merges is important because Batcher's formulation would require that 32t = k and is a power-of-2; thus if k = 1024, t must be 32. We found that the optimal t is way smaller (see below).
Using odd-merge to merge the 32 already sorted thread queues would require a struct-of-arrays to array-of-structs transposition in registers across the warp, since the t successive sorted values are held in different registers in the same lane rather than a lane-stride array. This is possible [12], but would use a comparable number of warp shuffles, so we just reinterpret the thread queue registers as an (unsorted) lane-stride array and sort from scratch. Significant speedup is realizable by using odd-merge for the merge of the aggregate sorted thread queues with the warp queue.
Handling the remainder. If there are remainder elements because ℓ is not a multiple of 32, those are inserted into the thread queues for the lanes that have them, after which we proceed to the output stage.
Output. A ï¬nal sort and merge is made of the thread and warp queues, after which the warp queue holds all min-k values.
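A serial Python simulation of the two-queue scheme may help fix the data flow; it is a sketch only (the real kernel keeps every queue in registers and restores the invariants with the odd-size networks of Section 4.1; the function and its interface are ours):

```python
import numpy as np

def warp_select_sim(a, k, t, w=32):
    thread_q = np.full((w, t), np.inf)   # per-lane, descending: [0] is largest
    warp_q = np.full(k, np.inf)          # warp-wide min-k seen, ascending
    for start in range(0, len(a), w):
        for j, v in enumerate(a[start:start + w]):   # one element per lane
            if v < thread_q[j, 0]:       # first-level filter; old T_0 ejected
                thread_q[j] = np.sort(np.append(thread_q[j, 1:], v))[::-1]
        if thread_q[:, 0].min() < warp_q[k - 1]:     # warp ballot: invariant 2
            merged = np.sort(np.concatenate([warp_q, thread_q.ravel()]))
            warp_q = merged[:k]                      # new min-k
            thread_q = merged[k:].reshape(w, t)[:, ::-1]   # remainder
    # Output stage: final merge of thread and warp queues.
    return np.sort(np.concatenate([warp_q, thread_q.ravel()]))[:k]

rng = np.random.default_rng(0)
a = rng.random(10000)
assert np.allclose(warp_select_sim(a, k=100, t=3), np.sort(a)[:100])
```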
# 4.3 Complexity and parameter selection
For each incoming group of 32 elements, WarpSelect can perform 1, 2 or 3 constant-time operations, all happening in warp-wide parallel time:
1. read 32 elements, compare to all thread queue heads T_0^j, cost C1, happens N1 times;

2. if any a_{32i+j} < T_0^j, perform insertion sort on those specific thread queues, cost C2 = O(t), happens N2 times;

3. if any T_0^j < W_{k−1}, sort and merge queues, cost C3 = O(t log(32t)² + k log(max(k, 32t))), happens N3 times.
Thus, the total cost is N1·C1 + N2·C2 + N3·C3. N1 = ℓ/32, and on random data drawn independently, N2 = O(k log(ℓ)) and N3 = O(k log(ℓ)/t); see the Appendix for a full derivation. Hence, the trade-off is to balance a cost in N2·C2 and one in N3·C3. The practical choice for t given k and ℓ was made by experiment on a variety of k-NN data. For k ≤ 32, we use t = 2; k ≤ 128 uses t = 3; k ≤ 256 uses t = 4; and k ≤ 1024 uses t = 8, all irrespective of ℓ.
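In code form, this experimentally chosen mapping is simply (a helper of ours):

```python
def thread_queue_length(k):
    # Experimental choice of t for a given k, irrespective of ell.
    if k <= 32:
        return 2
    if k <= 128:
        return 3
    if k <= 256:
        return 4
    return 8   # supported up to the maximum k = 1024
```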
# 5. COMPUTATION LAYOUT
This section explains how IVFADC, one of the indexing methods originally built upon product quantization [25], is implemented eï¬ciently. Details on distance computations and articulation with k-selection are the key to understand- ing why this method can outperform more recent GPU- compliant approximate nearest neighbor strategies [47].
# 5.1 Exact search
We brieï¬y come back to the exhaustive search method, often referred to as exact brute-force. It is interesting on its
own for exact nearest neighbor search in small datasets. It is also a component of many indexes in the literature. In our case, we use it for the IVFADC coarse quantizer q1.
As stated in Section 2, the distance computation boils down to a matrix multiplication. We use optimized GEMM routines in the cuBLAS library to calculate the −2⟨x_j, y_i⟩ term for L2 distance, resulting in a partial distance matrix D′. To complete the distance calculation, we use a fused k-selection kernel that adds the ||y_i||² term to each entry of the distance matrix and immediately submits the value to k-selection in registers. The ||x_j||² term need not be taken into account before k-selection, as it is constant within a row and does not change the ranking. Kernel fusion thus allows for only 2 passes (GEMM write, k-select read) over D′, compared to other implementations that may require 3 or more. Row-wise k-selection is likely not fusable with a well-tuned GEMM kernel, or would result in lower overall efficiency.
As D′ does not fit in GPU memory for realistic problem sizes, the problem is tiled over the batch of queries, with t_q ≤ n_q queries being run in a single tile. Each of the ⌈n_q/t_q⌉ tiles are independent problems, but we run two in parallel on different streams to better occupy the GPU, so the effective memory requirement of D′ is O(2·ℓ·t_q). The computation can similarly be tiled over ℓ. For very large input coming from the CPU, we support buffering with pinned memory to overlap CPU to GPU copy with GPU compute.
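A sketch of this query tiling, reusing the exact_knn helper sketched in Section 2 (the dual-stream overlap and pinned-memory buffering are GPU-side concerns omitted here):

```python
import numpy as np

def exact_knn_tiled(X, Y, k, tq=1024):
    # Only a (tq, ell) slice of D' is materialized at any time.
    tiles = [exact_knn(X[s:s + tq], Y, k) for s in range(0, len(X), tq)]
    D = np.vstack([d for d, _ in tiles])
    I = np.vstack([i for _, i in tiles])
    return D, I
```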
# 5.2 IVFADC indexing
PQ lookup tables. At its core, the IVFADC requires computing the distance from a vector to a set of product quantization reproduction values. By developing Equation (6) for a database vector y, we obtain:
||x − q(y)||_2^2 = ||x − q1(y) − q2(y − q1(y))||_2^2.   (7)
If we decompose the residual vectors left after q1 as:
y − q1(y) = [ỹ^1 ... ỹ^b] and   (8)

x − q1(y) = [x̃^1 ... x̃^b],   (9)
then the distance is rewritten as:
||x − q(y)||_2^2 = ||x̃^1 − q^1(ỹ^1)||_2^2 + ... + ||x̃^b − q^b(ỹ^b)||_2^2.   (10)
Each sub-quantizer q^1, ..., q^b has 256 reproduction values, so when x and q1(y) are known all distances can be precomputed and stored in tables T_1, ..., T_b, each of size 256 [25]. Computing the sum (10) consists of b look-ups and additions. Comparing the cost to compute n distances:
• Explicit computation: n × d multiply-adds;

• With lookup tables: 256 × d multiply-adds and n × b lookup-adds.
This is the key to the eï¬ciency of the product quantizer. In our GPU implementation, b is any multiple of 4 up to 64. The codes are stored as sequential groups of b bytes per vector within lists.
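A NumPy sketch of this table-based scan (names are ours; x_res denotes the query residual x − q1(y) for the list being scanned):

```python
import numpy as np

def adc_tables(x_res, codebooks):
    # T[i][c] = ||x_res sub-vector i - codebooks[i][c]||^2, shape (b, 256).
    b, ksub, dsub = codebooks.shape
    return ((x_res.reshape(b, 1, dsub) - codebooks) ** 2).sum(axis=2)

def adc_scan(T, codes):
    # codes: (n, b) uint8 PQ codes of one inverted list.
    # b lookup-adds per code, per Equation (10).
    n, b = codes.shape
    return T[np.arange(b), codes].sum(axis=1)
```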
IVFADC lookup tables. When scanning over the elements of the inverted list I_L (where by definition q1(y) is constant), the look-up table method can be applied, as the query x and q1(y) are known.
Moreover, the computation of the tables T_1 ... T_b is further optimized [5]. The expression of ||x − q(y)||_2^2 in Equation (10) can be decomposed as:
||x − q(y)||_2^2 = ||q2(...)||_2^2 + 2⟨q1(y), q2(...)⟩ [term 1] + ||x − q1(y)||_2^2 [term 2] − 2⟨x, q2(...)⟩ [term 3],   (11)

where (...) stands for the residual y − q1(y). The objective is to minimize inner loop computations. The computations we can do in advance and store in lookup tables are as follows:
• Term 1 is independent of the query. It can be precomputed from the quantizers, and stored in a table T of size |C1| × 256 × b;
• Term 2 is the distance to q1's reproduction value. It is thus a by-product of the first-level quantizer q1;
• Term 3 can be computed independently of the inverted list. Its computation costs d × 256 multiply-adds.
This decomposition is used to produce the lookup tables T_1 ... T_b used during the scan of the inverted list. For a single query, computing the τ × b tables from scratch costs τ × d × 256 multiply-adds, while this decomposition costs 256 × d multiply-adds and τ × b × 256 additions. On the GPU, the memory usage of T can be prohibitive, so we enable the decomposition only when memory is not a concern.
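A sketch of the decomposition in NumPy (the table layouts and names are our assumptions; distances can then be estimated with the adc_scan helper above):

```python
import numpy as np

def term1_table(coarse_centroids, codebooks):
    # Query-independent: ||q^i(c)||^2 + 2 <q1(L) sub-vector i, q^i(c)>,
    # stored per (inverted list L, sub-quantizer i, code c).
    nlist, d = coarse_centroids.shape
    b, ksub, dsub = codebooks.shape
    cc = coarse_centroids.reshape(nlist, b, 1, dsub)
    norms = (codebooks ** 2).sum(axis=2)                 # (b, 256)
    return norms[None] + 2.0 * (cc * codebooks[None]).sum(axis=3)

def scan_tables(x, L, coarse_centroids, codebooks, term1):
    # Assemble T_1..T_b for query x and list L: 256*d multiply-adds
    # (term 3) plus b*256 additions, as in the cost analysis above.
    b, ksub, dsub = codebooks.shape
    term2 = ((x - coarse_centroids[L]) ** 2).sum()       # by-product of q1
    term3 = -2.0 * (x.reshape(b, 1, dsub) * codebooks).sum(axis=2)
    return term1[L] + term3 + term2 / b   # spread scalar term 2 over b tables
```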
# 5.3 GPU implementation
Algorithm 4 summarizes the process as one would implement it on a CPU. The inverted lists are stored as two separate arrays, for PQ codes and associated IDs. IDs are resolved only if k-selection determines k-nearest membership. This lookup yields a few sparse memory reads in a large array, thus the IDs can optionally be stored on CPU for tiny performance cost.
List scanning. A kernel is responsible for scanning the τ closest inverted lists for each query, and calculating the per-vector pair distances using the lookup tables T_i. The T_i are stored in shared memory: up to n_q × τ × max_i |I_i| × b lookups are required for a query set (trillions of accesses in practice), and are random access. This limits b to at most 48 (32-bit floating point) or 96 (16-bit floating point) with current architectures. In case we do not use the decomposition of Equation (11), the T_i are calculated by a separate kernel before scanning.
Multi-pass kernels. Each of the n_q × τ pairs of query against inverted list can be processed independently. At one extreme, a block is dedicated to each of these, resulting in up to n_q × τ × max_i |I_i| partial results being written back to global memory, which is then k-selected to n_q × k final results. This yields high parallelism but can exceed available GPU global memory; as with exact search, we choose a tile size t_q ≤ n_q to reduce memory consumption, bounding its complexity by O(2·t_q·τ·max_i |I_i|) with multi-streaming.
A single warp could be dedicated to k-selection of each t_q set of lists, which could result in low parallelism. We introduce a two-pass k-selection, reducing t_q × τ × max_i |I_i| to t_q × f × k partial results for some subdivision factor f. This is reduced again via k-selection to the final t_q × k results.
Fused kernel. As with exact search, we experimented with a kernel that dedicates a single block to scanning all τ lists
for a single query, with k-selection fused with distance computation. This is possible as WarpSelect does not fight for the shared memory resource which is severely limited. This reduces global memory write-back, since almost all intermediate results can be eliminated. However, unlike k-selection overhead for exact computation, a significant portion of the runtime is the gather from the T_i in shared memory and linear scanning of the I_i from global memory; the write-back is not a dominant contributor. Timing for the fused kernel is improved by at most 15%, and for some problem sizes would be subject to lower parallelism and worse performance without subsequent decomposition. Therefore, and for reasons of implementation simplicity, we do not use this layout.
Algorithm 4 IVFPQ batch search routine
function IVFPQ-SEARCH([x_1, ..., x_{n_q}], I_1, ..., I_{|C1|})
    for i ← 0 : n_q do
        L_IVF ← τ-argmin_{c ∈ C1} ||x_i − c||_2   ▷ batch quantization of Section 5.1
    end for
    for i ← 0 : n_q do
        D ← []   ▷ distance table
        Compute term 3 (see Section 5.2)
        for L in L_IVF do   ▷ τ loops
            Compute distance tables T_1, ..., T_b
            for j in I_L do
                d ← ||x_i − q(y_j)||_2^2   ▷ distance estimation, Equation (10)
                Append (d, L, j) to D
            end for
        end for
        R_i ← k-select smallest distances d from D
    end for
    return R
end function
# 5.4 Multi-GPU parallelism
Modern servers can support several GPUs. We employ this capability for both compute power and memory.
Replication. If an index instance ï¬ts in the memory of a single GPU, it can be replicated across R diï¬erent GPUs. To query nq vectors, each replica handles a fraction nq/R of the queries, joining the results back together on a single GPU or in CPU memory. Replication has near linear speedup, except for a potential loss in eï¬ciency for small nq.
Sharding. If an index instance does not fit in the memory of a single GPU, an index can be sharded across S different GPUs. For adding ℓ vectors, each shard receives ℓ/S of the vectors, and for query, each shard handles the full query set n_q, joining the partial results (an additional round of k-selection is still required) on a single GPU or in CPU memory. For a given index size ℓ, sharding will yield a speedup (sharding has a query of n_q against ℓ/S versus replication with a query of n_q/R against ℓ), but is usually less than pure replication due to fixed overhead and cost of subsequent k-selection.
Replication and sharding can be used together (S shards, each with R replicas for S × R GPUs in total). Sharding or replication are both fairly trivial, and the same principle can be used to distribute an index across multiple machines.
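The shard-merge step is itself just another k-selection over the concatenated partial results; a sketch, where the per-shard result pairs are a hypothetical stand-in interface:

```python
import numpy as np

def merge_shard_results(partials, k):
    # partials: list of (D_s, I_s) pairs, each (n_q, k), with
    # database-global IDs; one pair per shard.
    D = np.concatenate([d for d, _ in partials], axis=1)   # (n_q, S*k)
    I = np.concatenate([i for _, i in partials], axis=1)
    sel = np.argpartition(D, k - 1, axis=1)[:, :k]         # final k-selection
    Dk = np.take_along_axis(D, sel, axis=1)
    Ik = np.take_along_axis(I, sel, axis=1)
    order = np.argsort(Dk, axis=1)
    return (np.take_along_axis(Dk, order, axis=1),
            np.take_along_axis(Ik, order, axis=1))
```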
Figure 3: Runtimes for different k-selection methods, as a function of array length ℓ. Simultaneous arrays processed are n_q = 10000. k = 100 for full lines, k = 1000 for dashed lines.
# 6. EXPERIMENTS & APPLICATIONS
This section compares our GPU k-selection and nearest-neighbor approach to existing libraries. Unless stated otherwise, experiments are carried out on a 2×2.8 GHz Intel Xeon E5-2680v2 with 4 Maxwell Titan X GPUs on CUDA 8.0.
# 6.1 k-selection performance
We compare against two other GPU small k-selection im- plementations: the row-based Merge Queue with Buï¬ered Search and Hierarchical Partition extracted from the fgknn library of Tang et al. [41] and Truncated Bitonic Sort (TBiS ) from Sismanis et al. [40]. Both were extracted from their re- spective exact search libraries.
We evaluate k-selection for k = 100 and 1000 of each row from a row-major matrix n_q × ℓ of random 32-bit floating point values on a single Titan X. The batch size n_q is fixed at 10000, and the array lengths ℓ vary from 1000 to 128000. Inputs and outputs to the problem remain resident in GPU memory, with the output being of size n_q × k, with corresponding indices. Thus, the input problem sizes range from 40 MB (ℓ = 1000) to 5.12 GB (ℓ = 128k). TBiS requires large auxiliary storage, and is limited to ℓ ≤ 48000 in our tests.
Figure 3 shows our relative performance against TBiS and fgknn. It also includes the peak possible performance given by the memory bandwidth limit of the Titan X. The relative performance of WarpSelect over fgknn increases for larger k; even TBiS starts to outperform fgknn for larger ℓ at k = 1000. We look especially at the largest ℓ = 128000. WarpSelect is 1.62× faster at k = 100, 2.01× at k = 1000. Performance against peak possible drops off for all implementations at larger k. WarpSelect operates at 55% of peak at k = 100 but only 16% of peak at k = 1000. This is due to additional overhead associated with bigger thread queues and merge/sort networks for large k.
Differences from fgknn. WarpSelect is influenced by fgknn, but has several improvements: all state is maintained in registers (no shared memory), no inter-warp synchronization or buffering is used, no "hierarchical partition", the k-selection can be fused into other kernels, and it uses odd-size networks for efficient merging and sorting.
method          # GPUs    # centroids = 256    # centroids = 4096
BIDMach [11]    1         320 s                735 s
Ours            1         140 s                316 s
Ours            4         84 s                 100 s
Table 1: MNIST8m k-means performance
# 6.2 k-means clustering
The exact search method with k = 1 can be used by a k-means clustering method in the assignment stage, to assign n_q training vectors to |C1| centroids. Despite the fact that it does not use the IVFADC and k = 1 selection is trivial (a parallel reduction is used for the k = 1 case, not WarpSelect), k-means is a good benchmark for the clustering used to train the quantizer q1.
We apply the algorithm on MNIST8m images. The 8.1M images are graylevel digits in 28×28 pixels, linearized to vectors of 784-d. We compare this k-means implementation to the GPU k-means of BIDMach [11], which was shown to be more efficient than several distributed k-means implementations that require dozens of machines³. Both algorithms were run for 20 iterations. Table 1 shows that our implementation is more than 2× faster, although both are built upon cuBLAS. Our implementation receives some benefit from the k-selection fusion into L2 distance computation. For multi-GPU execution via replicas, the speedup is close to linear for large enough problems (3.16× for 4 GPUs with 4096 centroids). Note that this benchmark is somewhat unrealistic, as one would typically sub-sample the dataset randomly when so few centroids are requested.
Large scale. We can also compare to [3], an approximate CPU method that clusters 10^8 128-d vectors to 85k centroids. Their clustering method runs in 46 minutes, but requires 56 minutes (at least) of pre-processing to encode the vectors. Our method performs exact k-means on 4 GPUs in 52 minutes without any pre-processing.
# 6.3 Exact nearest neighbor search
We consider a classical dataset used to evaluate nearest neighbor search: SIFT1M [25]. Its characteristic sizes are ℓ = 10^6, d = 128, n_q = 10^4. Computing the partial distance matrix D′ costs n_q × ℓ × d = 1.28 Tflop, which runs in less than one second on current GPUs. Figure 4 shows the cost of the distance computations against the cost of our tiling of the GEMM for the −2⟨x_j, y_i⟩ term of Equation (2) and the peak possible k-selection performance on the distance matrix of size n_q × ℓ, which additionally accounts for reading the tiled result matrix D′ at peak memory bandwidth.
In addition to our method from Section 5, we include times from the two GPU libraries evaluated for k-selection performance in Section 6.1. We make several observations:
• for k-selection, the naive algorithm that sorts the full result array for each query using thrust::sort_by_key is more than 10× slower than the comparison methods;
⢠L2 distance and k-selection cost is dominant for all but our method, which has 85 % of the peak possible performance, assuming GEMM usage and our tiling
³BIDMach numbers from https://github.com/BIDData/BIDMach/wiki/Benchmarks#KMeans
Figure 4: Exact search k-NN time for the SIFT1M dataset with varying k on 1 Titan X GPU.
of the partial distance matrix D′ on top of GEMM is close to optimal. The cuBLAS GEMM itself has low efficiency for small reduction sizes (d = 128);
• our fused L2/k-selection kernel is important. Our same exact algorithm without fusion (requiring an additional pass through D′) is at least 25% slower.
Eï¬cient k-selection is even more important in situations where approximate methods are used to compute distances, because the relative cost of k-selection with respect to dis- tance computation increases.
# 6.4 Billion-scale approximate search
There are few studies on GPU-based approximate nearest-neighbor search on large datasets (ℓ ≫ 10^6). We report a few comparison points here on index search, using standard datasets and evaluation protocol in this field.
SIFT1M. For the sake of completeness, we first compare our GPU search speed on Sift1M with the implementation of Wieschollek et al. [47]. They obtain a nearest neighbor recall at 1 (fraction of queries where the true nearest neighbor is in the top 1 result) of R@1 = 0.51, and R@100 = 0.86 in 0.02 ms per query on a Titan X. For the same time budget, our implementation obtains R@1 = 0.80 and R@100 = 0.95.
SIFT1B. We compare again with Wieschollek et al., on the Sift1B dataset [26] of 1 billion SIFT image features at n_q = 10^4. We compare the search performance in terms of same memory usage for similar accuracy (more accurate methods may involve greater search time or memory usage). On a single GPU, with m = 8 bytes per vector, R@10 = 0.376 in 17.7 µs per query vector, versus their reported R@10 = 0.35 in 150 µs per query vector. Thus, our implementation is more accurate at a speed 8.5× faster.
DEEP1B. We also experimented on the Deep1B dataset of ℓ = 1 billion CNN representations for images at n_q = 10^4. The paper that introduces the dataset reports CPU results (1 thread): R@1 = 0.45 in 20 ms search time per vector. We use a PQ encoding of m = 20, with d = 80 via OPQ [17], and |C1| = 2^18, which uses a comparable dataset storage as the original paper (20 GB). This requires multiple GPUs as it is too large for a single GPU's global memory, so we consider 4 GPUs with S = 2, R = 2. We obtain a R@1 = 0.4517 in 0.0133 ms per vector. While the hardware platforms are
Figure 5: Speed/accuracy trade-oï¬ of brute-force 10-NN graph construction for the YFCC100M and DEEP1B datasets.
different, it shows that making searches on GPUs is a game-changer in terms of speed achievable on a single machine.
# 6.5 The k-NN graph
An example usage of our similarity search method is to construct a k-nearest neighbor graph of a dataset via brute force (all vectors queried against the entire index).
Experimental setup. We evaluate the trade-off between speed, precision and memory on two datasets: 95 million images from the Yfcc100M dataset [42] and Deep1B. For Yfcc100M, we compute CNN descriptors as the one-before-last layer of a ResNet [23], reduced to d = 128 with PCA.
The evaluation measures the trade-oï¬ between:
• Speed: How much time it takes to build the IVFADC index from scratch and construct the whole k-NN graph (k = 10) by searching nearest neighbors for all vectors in the dataset. Thus, this is an end-to-end test that includes indexing as well as search time;

• Quality: We sample 10,000 images for which we compute the exact nearest neighbors. Our accuracy measure is the fraction of 10 found nearest neighbors that are within the ground-truth 10 nearest neighbors.
For Yfcc100M, we use a coarse quantizer (2^16 centroids), and consider m = 16, 32 and 64 byte PQ encodings for each vector. For Deep1B, we pre-process the vectors to d = 120 via OPQ, use |C1| = 2^18 and consider m = 20, 40. For a given encoding, we vary τ from 1 to 256, to obtain trade-offs between efficiency and quality, as seen in Figure 5.
Figure 6: Path in the k-NN graph of 95 million images from YFCC100M. The ï¬rst and the last image are given; the algorithm computes the smoothest path between them.
Discussion. For Yfcc100M we used S = 1, R = 4. An accuracy of more than 0.8 is obtained in 35 minutes. For Deep1B, a lower-quality graph can be built in 6 hours, with higher quality in about half a day. We also experimented with more GPUs by doubling the replica set, using 8 Maxwell M40s (the M40 is roughly equivalent in performance to the Titan X). Performance is improved sub-linearly (∼1.6× for m = 20, ∼1.7× for m = 40).
For comparison, the largest k-NN graph construction we are aware of used a dataset comprising 36.5 million 384-d vectors, which took a cluster of 128 CPU servers 108.7 hours of compute [45], using NN-Descent [15]. Note that NN-Descent could also build or refine the k-NN graph for the datasets we consider, but it has a large memory overhead over the graph storage, which is already 80 GB for Deep1B. Moreover it requires random access across all vectors (384 GB for Deep1B).

The largest GPU k-NN graph construction we found is a brute-force construction using exact search with GEMM, of a dataset of 20 million 15,000-d vectors, which took a cluster of 32 Tesla C2050 GPUs 10 days [14]. Assuming computation scales with GEMM cost for the distance matrix, this approach for Deep1B would take an impractical 200 days of computation time on their cluster.

# 6.6 Using the k-NN graph

When a k-NN graph has been constructed for an image dataset, we can find paths in the graph between any two images, provided there is a single connected component (this is the case). For example, we can search the shortest path between two images of flowers, by propagating neighbors from a starting image to a destination image. Denoting by S and D the source and destination images, and d_{ij} the distance between nodes, we search the path P = {p_1, ..., p_n} with p_1 = S and p_n = D such that

min_P max_{i=1..n−1} d_{p_i p_{i+1}},   (12)

i.e., we want to favor smooth transitions. An example result is shown in Figure 6 from Yfcc100M⁴. It was obtained after 20 seconds of propagation in a k-NN graph with k = 15 neighbors. Since there are many flower images in the dataset, the transitions are smooth.

⁴The mapping from vectors to images is not available for Deep1B.
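Equation (12) is a minimax-path objective, which a small Dijkstra variant solves: the cost of a path is the largest edge distance seen so far. A sketch, with a hypothetical adjacency interface (neighbors[v] and dists[v] hold the k-NN lists produced above):

```python
import heapq

def smoothest_path(neighbors, dists, src, dst):
    # Dijkstra with cost(path) = max of its edge distances (Equation (12)).
    best = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        cost, v, path = heapq.heappop(heap)
        if v == dst:
            return path
        if cost > best.get(v, float("inf")):
            continue   # stale heap entry
        for u, d in zip(neighbors[v], dists[v]):
            c = max(cost, d)   # the path's largest edge so far
            if c < best.get(u, float("inf")):
                best[u] = c
                heapq.heappush(heap, (c, u, path + [u]))
    return None
```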
# 7. CONCLUSION

The arithmetic throughput and memory bandwidth of GPUs are well into the teraflops and hundreds of gigabytes per second. However, implementing algorithms that approach these performance levels is complex and counter-intuitive. In this paper, we presented the algorithmic structure of similarity search methods that achieves near-optimal performance on GPUs.

This work enables applications that needed complex approximate algorithms before. For example, the approaches presented here make it possible to do exact k-means clustering or to compute the k-NN graph with simple brute-force approaches in less time than a CPU (or a cluster of them) would take to do this approximately.

GPU hardware is now very common on scientific workstations, due to their popularity for machine learning algorithms. We believe that our work further demonstrates their interest for database applications. Along with this work, we are publishing a carefully engineered implementation of this paper's algorithms, so that these GPUs can now also be used for efficient similarity search.
# 8. REFERENCES

[1] T. Alabi, J. D. Blanchard, B. Gordon, and R. Steinbach. Fast k-selection algorithms for graphics processing units. ACM Journal of Experimental Algorithmics, 17:4.2:4.1–4.2:4.29, October 2012.
[2] F. André, A.-M. Kermarrec, and N. L. Scouarnec. Cache locality is not enough: High-performance nearest neighbor search with product quantization fast scan. In Proc. International Conference on Very Large DataBases, pages 288–299, 2015.

[3] Y. Avrithis, Y. Kalantidis, E. Anagnostopoulos, and I. Z. Emiris. Web-scale image clustering revisited. In Proc. International Conference on Computer Vision, pages 1502–1510, 2015.

[4] A. Babenko and V. Lempitsky. The inverted multi-index. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3069–3076, June 2012.

[5] A. Babenko and V. Lempitsky. Improving bilayer product quantization for billion-scale approximate nearest neighbors in high dimensions. arXiv preprint arXiv:1404.1831, 2014.

[6] A. Babenko and V. Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2055–2063, June 2016.

[7] R. Barrientos, J. Gómez, C. Tenllado, M. Prieto, and M. Marin. knn query processing in metric spaces using GPUs. In International European Conference on Parallel and Distributed Computing, volume 6852 of Lecture Notes
in Computer Science, pages 380–392, Bordeaux, France, September 2011. Springer.
[8] K. E. Batcher. Sorting networks and their applications. In Proc. Spring Joint Computer Conference, AFIPS â68 (Spring), pages 307â314, New York, NY, USA, 1968. ACM.
[9] P. Boncz, W. Lehner, and T. Neumann. Special issue: Modern hardware. The VLDB Journal, 25(5):623â624, 2016.
[10] J. Canny, D. L. W. Hall, and D. Klein. A multi-teraï¬op constituency parser using GPUs. In Proc. Empirical Methods on Natural Language Processing, pages 1898â1907. ACL, 2013.
[11] J. Canny and H. Zhao. Bidmach: Large-scale learning with zero memory allocation. In BigLearn workshop, NIPS, 2013.
[12] B. Catanzaro, A. Keller, and M. Garland. A decomposition for in-place matrix transposition. In Proc. ACM Symposium on Principles and Practice of Parallel Programming, PPoPP â14, pages 193â206, 2014.
[13] J. Chhugani, A. D. Nguyen, V. W. Lee, W. Macy, M. Hagog, Y.-K. Chen, A. Baransi, S. Kumar, and P. Dubey. Eï¬cient implementation of sorting on multi-core simd cpu architecture. Proc. VLDB Endow., 1(2):1313â1324, August 2008.
[14] A. Dashti. Eï¬cient computation of k-nearest neighbor graphs for large high-dimensional data sets on gpu clusters. Masterâs thesis, University of Wisconsin Milwaukee, August 2013.
[15] W. Dong, M. Charikar, and K. Li. Eï¬cient k-nearest neighbor graph construction for generic similarity measures. In WWW: Proceeding of the International Conference on World Wide Web, pages 577â586, March 2011.
[16] M. Douze, H. Jégou, and F. Perronnin. Polysemous codes. In Proc. European Conference on Computer Vision, pages 785–801. Springer, October 2016.
[17] T. Ge, K. He, Q. Ke, and J. Sun. Optimized product quantization. IEEE Trans. PAMI, 36(4):744â755, 2014.
[18] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 817â824, June 2011.
[19] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In Proc. European Conference on Computer Vision, pages 392â407, 2014.
[20] A. Gordo, J. Almazan, J. Revaud, and D. Larlus. Deep image retrieval: Learning global representations for image search. In Proc. European Conference on Computer Vision, pages 241â257, 2016.
[21] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huï¬man coding. arXiv preprint arXiv:1510.00149, 2015.
[22] K. He, F. Wen, and J. Sun. K-means hashing: An aï¬nity-preserving quantization method for learning binary compact codes. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2938â2945, June 2013.
[23] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 770â778, June 2016.
[24] X. He, D. Agarwal, and S. K. Prasad. Design and implementation of a parallel priority queue on many-core architectures. IEEE International Conference on High Performance Computing, pages 1â10, 2012.
[25] H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Trans. PAMI, 33(1):117–128, January 2011.

[26] H. Jégou, R. Tavenard, M. Douze, and L. Amsaleg. Searching in one billion vectors: re-rank with source coding. In International Conference on Acoustics, Speech, and Signal Processing, pages 861–864, May 2011.
[27] Y. Kalantidis and Y. Avrithis. Locally optimized product quantization for approximate nearest neighbor search. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2329â2336, June 2014.
[28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097â1105, 2012.
[29] F. T. Leighton. Introduction to Parallel Algorithms and Architectures: Array, Trees, Hypercubes. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1992.
[30] E. Lindholm, J. Nickolls, S. Oberman, and J. Montrym. NVIDIA Tesla: a uniï¬ed graphics and computing architecture. IEEE Micro, 28(2):39â55, March 2008. [31] W. Liu and B. Vinter. Ad-heap: An eï¬cient heap data
structure for asymmetric multicore processors. In Proc. of Workshop on General Purpose Processing Using GPUs, pages 54:54â54:63. ACM, 2014.
[32] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and
J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111â3119, 2013. [33] L. Monroe, J. Wendelberger, and S. Michalak. Randomized
selection on the GPU. In Proc. ACM Symposium on High Performance Graphics, pages 89â98, 2011.
[34] M. Norouzi and D. Fleet. Cartesian k-means. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3017â3024, June 2013.
[35] M. Norouzi, A. Punjani, and D. J. Fleet. Fast search in Hamming space with multi-index hashing. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3108â3115, 2012.
[36] J. Pan and D. Manocha. Fast GPU-based locality sensitive hashing for k-nearest neighbor computation. In Proc. ACM International Conference on Advances in Geographic Information Systems, pages 211â220, 2011.
[37] L. Paulevé, H. Jégou, and L. Amsaleg. Locality sensitive hashing: a comparison of hash function types and querying mechanisms. Pattern recognition letters, 31(11):1348–1358, August 2010.
[38] O. Shamir. Fundamental limits of online and distributed algorithms for statistical learning and estimation. In Advances in Neural Information Processing Systems, pages 163â171, 2014.
[39] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features oï¬-the-shelf: an astounding baseline for recognition. In CVPR workshops, pages 512â519, 2014.
[40] N. Sismanis, N. Pitsianis, and X. Sun. Parallel search of k-nearest neighbors with synchronous operations. In IEEE High Performance Extreme Computing Conference, pages 1â6, 2012.
[41] X. Tang, Z. Huang, D. M. Eyers, S. Mills, and M. Guo. Eï¬cient selection algorithm for fast k-nn search on GPUs. In IEEE International Parallel & Distributed Processing Symposium, pages 397â406, 2015.
[42] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2):64â73, January 2016.
[43] V. Volkov and J. W. Demmel. Benchmarking GPUs to tune dense linear algebra. In Proc. ACM/IEEE Conference on Supercomputing, pages 31:1â31:11, 2008.
[44] A. Wakatani and A. Murakami. GPGPU implementation of nearest neighbor search with product quantization. In IEEE International Symposium on Parallel and Distributed Processing with Applications, pages 248â253, 2014. [45] T. Warashina, K. Aoyama, H. Sawada, and T. Hattori.
Eï¬cient k-nearest neighbor graph construction using mapreduce for large-scale data sets. IEICE Transactions,
97-D(12):3142â3154, 2014.
[46] R. Weber, H.-J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Proc. International Conference on Very Large DataBases, pages 194â205, 1998.
[47] P. Wieschollek, O. Wang, A. Sorkine-Hornung, and H. P. A. Lensch. Eï¬cient large-scale approximate nearest neighbor search on the GPU. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2027â2035, June 2016.
[48] S. Williams, A. Waterman, and D. Patterson. Rooï¬ine: An insightful visual performance model for multicore architectures. Communications of the ACM, 52(4):65â76, April 2009.
# Appendix: Complexity analysis of WarpSelect

We derive the average number of times updates are triggered in WarpSelect, for use in Section 4.3.
Let the input to k-selection be a sequence $\{a_1, a_2, ..., a_\ell\}$ (1-based indexing), a randomly chosen permutation of a set of distinct elements. Elements are read sequentially in $c$ groups of size $w$ (the warp; in our case, $w = 32$); assume $\ell$ is a multiple of $w$, so $c = \ell/w$. Recall that $t$ is the thread queue length. We call elements prior to or at position $n$ that are in the min-$k$ seen so far the successive min-$k$ (at $n$). The likelihood that $a_n$ is in the successive min-$k$ at $n$ is:

$$\alpha(n, k) := \begin{cases} 1 & \text{if } n \le k \\ k/n & \text{if } n > k \end{cases} \tag{13}$$

as each $a_n$, $n > k$, has a $k/n$ chance because all permutations are equally likely, and all elements in the first $k$ positions qualify.
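The $k/n$ law in Eq. (13) is easy to verify empirically. Below is a minimal sketch (our own illustration, not part of the original derivation); the helper names are hypothetical.

```python
import random

def alpha(n, k):
    # Eq. (13): probability that a_n is a successive min-k element.
    return 1.0 if n <= k else k / n

def empirical_alpha(n, k, trials=100_000):
    # Draw random permutations of n distinct keys and measure how often
    # the n-th element lands in the min-k of the whole prefix.
    hits = 0
    for _ in range(trials):
        seq = random.sample(range(10 * n), n)
        hits += seq[-1] in sorted(seq)[:k]
    return hits / trials

print(alpha(50, 10), empirical_alpha(50, 10))  # both close to 0.2
```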
Counting the insertion sorts. In a given lane, an insertion sort is triggered if the incoming value is among the successive min-$(k + t)$ values, but the lane has "seen" only $wc_0 + (c - c_0)$ values, where $c_0$ is the previous won warp ballot. The probability of this happening is:

$$\alpha(wc_0 + (c - c_0),\, k + t) \approx \frac{k + t}{wc} \quad \text{for } c > k. \tag{14}$$

The approximation considers that the thread queue has seen all the $wc$ values, not just those assigned to its lane. The probability of any lane triggering an insertion sort is then:

$$1 - \left(1 - \frac{k + t}{wc}\right)^{w} \approx \frac{k + t}{c}. \tag{15}$$

Here the approximation is a first-order Taylor expansion. Summing the probabilities over the $c$ groups gives an expected number of insertions of $N_2 \approx (k + t)\log(c) = O(k \log(\ell/w))$.
Counting full sorts. We seek $N_3 = \pi(\ell, k, t, w)$, the expected number of full sorts required for WarpSelect.
Single lane. For now, we assume $w = 1$, so $c = \ell$. Let $\gamma(\ell, m, k)$ be the probability that in a sequence $\{a_1, ..., a_\ell\}$, exactly $m$ of the elements, as encountered by a sequential scanner ($w = 1$), are in the successive min-$k$. Given $m$, there are $\binom{\ell}{m}$ places where these successive min-$k$ elements can occur. It is given by a recurrence relation:

$$\gamma(\ell, m, k) = \begin{cases} 1 & \ell = 0 \text{ and } m = 0 \\ 0 & \ell = 0 \text{ and } m > 0 \\ 0 & \ell > 0 \text{ and } m = 0 \\ \gamma(\ell-1, m-1, k)\,\alpha(\ell, k) + \gamma(\ell-1, m, k)\,(1 - \alpha(\ell, k)) & \text{otherwise.} \end{cases} \tag{16}$$
The last case is the probability of: either there is an $(\ell-1)$-sequence with $m-1$ successive min-$k$ elements preceding us and the current element is in the successive min-$k$, or the current element is not in the successive min-$k$ and $m$ such elements came before us. We can then develop a recurrence relationship for $\pi(\ell, k, t, 1)$. Note that
$$\delta(\ell, b, k, t) := \sum_{m = bt}^{\min(bt + \max(0,\, t-1),\; \ell)} \gamma(\ell, m, k) \tag{17}$$
for $b$ where $0 < bt \le \ell$ is the fraction of all sequences of length $\ell$ that will force $b$ sorts of data by winning the thread queue ballot, as there have to be $bt$ to $(bt + \max(0, t-1))$ elements in the successive min-$k$ for these sorts to happen (as the min-$k$ elements will overflow the thread queues). There are at most $\lfloor \ell/t \rfloor$ won ballots that can occur, as it takes $t$ separate sequential successive min-$k$ elements to win the ballot. $\pi(\ell, k, t, 1)$ is thus the expectation of this over all possible $b$:
$$\pi(\ell, k, t, 1) = \sum_{b=1}^{\lfloor \ell/t \rfloor} b \cdot \delta(\ell, b, k, t). \tag{18}$$
This can be computed by dynamic programming. Analytically, note that for $t = 1$, $k = 1$, $\pi(\ell, 1, 1, 1)$ is the harmonic number $H_\ell = 1 + \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{\ell}$, which converges to $\ln(\ell) + \gamma$ (the Euler-Mascheroni constant $\gamma$) as $\ell \to \infty$.
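A compact sketch of this dynamic program (our own illustration; the function names are ours, assuming the definitions in Eqs. (13) and (16)-(18)):

```python
def alpha(n, k):
    # Eq. (13): probability that a_n is a successive min-k element.
    return 1.0 if n <= k else k / n

def gamma_table(l, k):
    # g[n][m]: probability that exactly m of the first n elements are
    # successive min-k elements, via the recurrence in Eq. (16).
    g = [[0.0] * (l + 1) for _ in range(l + 1)]
    g[0][0] = 1.0
    for n in range(1, l + 1):
        a = alpha(n, k)
        for m in range(1, n + 1):
            g[n][m] = g[n - 1][m - 1] * a + g[n - 1][m] * (1.0 - a)
        # g[n][0] stays 0 for n > 0: the first element always qualifies.
    return g

def pi_single_lane(l, k, t):
    # Expected number of won ballots (full sorts) for w = 1, Eqs. (17)-(18).
    g = gamma_table(l, k)
    expected = 0.0
    for b in range(1, l // t + 1):
        hi = min(b * t + max(0, t - 1), l)
        expected += b * sum(g[l][m] for m in range(b * t, hi + 1))
    return expected

# Sanity check against the analytic result: pi(l, 1, 1, 1) = H_l.
l = 100
print(pi_single_lane(l, 1, 1), sum(1.0 / i for i in range(1, l + 1)))
```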
For $t = 1$, $k > 1$, $\ell > k$, we have $\pi(\ell, k, 1, 1) = k + k(H_\ell - H_k)$ or $O(k \log(\ell))$, as the first $k$ elements are in the successive min-$k$, and the expectation for the rest is $\frac{k}{k+1} + \frac{k}{k+2} + ... + \frac{k}{\ell}$.

For $t > 1$, $k > 1$, $\ell > k$, note that there is some number $D$, $k \le D \le \ell$, of successive min-$k$ determinations made for each possible $\{a_1, ..., a_\ell\}$. The number of won ballots for each case is by definition $\lfloor D/t \rfloor$, as the thread queue must fill up $t$ times. Thus, $\pi(\ell, k, t, 1) = O(k \log(\ell)/t)$.
Multiple lanes. The $w > 1$ case is complicated by the fact that there are joint probabilities to consider (if more than one of the $w$ workers triggers a sort for a given group, only one sort takes place). However, the likelihood can be bounded. Let $\pi'(\ell, k, t, w)$ be the expected number of won ballots assuming no mutual interference between the $w$ workers for winning ballots (i.e., we win $b$ ballots if there are $b \le w$ workers that independently win a ballot at a single step), but with the shared min-$k$ set after each sort from the joint sequence. Assume that $k \ge w$. Then:
$$\pi'(\ell, k, 1, w) \;\le\; w\left(\left\lceil \frac{k}{w} \right\rceil + \sum_{i=1}^{\lceil \ell/w \rceil - \lceil k/w \rceil} \alpha\big(w(\lceil k/w \rceil + i),\, k\big)\right) \;\le\; w\,\pi(\lceil \ell/w \rceil, k, 1, 1) \;=\; O(wk \log(\ell/w)) \tag{19}$$
where the likelihood of the $w$ workers seeing a successive min-$k$ element has an upper bound of that of the first worker at each step. As before, the number of won ballots is scaled by $t$, so $\pi'(\ell, k, t, w) = O(wk \log(\ell/w)/t)$. Mutual interference can only reduce the number of ballots, so we obtain the same upper bound for $\pi(\ell, k, t, w)$. | {
"id": "1510.00149"
} |
1702.08608 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | arXiv:1702.08608v2 [stat.ML] 2 Mar 2017
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez* and Been Kim*
From autonomous cars and adaptive email-filters to predictive policing systems, machine learning (ML) systems are increasingly ubiquitous; they outperform humans on specific tasks [Mnih et al., 2013, Silver et al., 2016, Hamill, 2017] and often guide processes of human understanding and decisions [Carton et al., 2016, Doshi-Velez et al., 2014]. The deployment of ML systems in complex applications has led to a surge of interest in systems optimized not only for expected task performance but also other important criteria such as safety [Otte, 2013, Amodei et al., 2016, Varshney and Alemzadeh, 2016], nondiscrimination [Bostrom and Yudkowsky, 2014, Ruggieri et al., 2010, Hardt et al., 2016], avoiding technical debt [Sculley et al., 2015], or providing the right to explanation [Goodman and Flaxman, 2016]. For ML systems to be used safely, satisfying these auxiliary criteria is critical. However, unlike measures of performance such as accuracy, these criteria often cannot be completely quantified. For example, we might not be able to enumerate all unit tests required for the safe operation of a semi-autonomous car or all confounds that might cause a credit scoring system to be discriminatory. In such cases, a popular fallback is the criterion of interpretability: if the system can explain its reasoning, we then can verify whether that reasoning is sound with respect to these auxiliary criteria.
Unfortunately, there is little consensus on what interpretability in machine learning is and how to evaluate it for benchmarking. Current interpretability evaluation typically falls into two categories. The first evaluates interpretability in the context of an application: if the system is useful in either a practical application or a simplified version of it, then it must be somehow interpretable (e.g. Ribeiro et al. [2016], Lei et al. [2016], Kim et al. [2015a], Doshi-Velez et al. [2015], Kim et al. [2015b]). The second evaluates interpretability via a quantifiable proxy: a researcher might first claim that some model class (e.g. sparse linear models, rule lists, gradient boosted trees) is interpretable and then present algorithms to optimize within that class (e.g. Bucilu et al. [2006], Wang et al. [2017], Wang and Rudin [2015], Lou et al. [2012]).
To a large extent, both evaluation approaches rely on some notion of "you'll know it when you see it." Should we be concerned about a lack of rigor? Yes and no: the notions of interpretability above appear reasonable because they are reasonable: they meet the first test of having face-validity on the correct test set of subjects: human beings. However, this basic notion leaves many kinds of questions unanswerable: Are all models in all defined-to-be-interpretable model classes equally interpretable? Quantifiable proxies such as sparsity may seem to allow for comparison, but how does one think about comparing a model sparse in features to a model sparse in prototypes? Moreover, do all applications have the same interpretability needs? If we are to move this field forward (to compare methods and understand when methods may generalize) we need to formalize these notions and make them evidence-based.
The objective of this review is to chart a path toward the definition and rigorous evaluation of interpretability. The need is urgent: recent European Union regulation will require algorithms that make decisions based on user-level predictors, which "significantly affect" users, to provide explanation ("right to explanation") by 2018 [Parliament and Council of the European Union, 2016]. In addition, the volume of research on interpretability is rapidly growing.1 In section 1, we discuss what interpretability is and contrast it with other criteria such as reliability and fairness. In section 2, we consider scenarios in which interpretability is needed and why. In section 3, we propose a taxonomy for the evaluation of interpretability: application-grounded, human-grounded and functionally-grounded. We conclude with important open questions in section 4 and specific suggestions for researchers doing work in interpretability in section 5.

*Authors contributed equally.

Figure 1: Taxonomy of evaluation approaches for interpretability. (Panel summary, from more specific and costly to more general: application-grounded evaluation with real humans and real tasks; human-grounded evaluation with real humans and simple tasks; functionally-grounded evaluation with no real humans and proxy tasks.)
# 1 What is Interpretability?
Definition. Interpret means to explain or to present in understandable terms.2 In the context of ML systems, we define interpretability as the ability to explain or to present in understandable terms to a human. A formal definition of explanation remains elusive; in the field of psychology, Lombrozo [2006] states "explanations are... the currency in which we exchanged beliefs" and notes that questions such as what constitutes an explanation, what makes some explanations better than others, how explanations are generated and when explanations are sought are just beginning to be addressed. Researchers have classified explanations from being "deductive-nomological" in nature [Hempel and Oppenheim, 1948] (i.e. as logical proofs) to providing some sense of mechanism [Bechtel and Abrahamsen, 2005, Chater and Oaksford, 2006, Glennan, 2002]. Keil [2006] considered a broader definition: implicit explanatory understanding. In this work, we propose data-driven ways to derive operational definitions and evaluations of explanations, and thus, interpretability.
Interpretability is used to confirm other important desiderata of ML systems. There exist many auxiliary criteria that one may wish to optimize. Notions of fairness or unbiasedness imply that protected groups (explicit or implicit) are not somehow discriminated against. Privacy means the method protects sensitive information in the data. Properties such as reliability and robustness ascertain whether algorithms reach certain levels of performance in the face of parameter or input variation. Causality implies that the predicted change in output due to a perturbation will occur in the real system. Usable methods provide information that assists users in accomplishing a task (e.g. a knob to tweak image lighting), while trusted systems have the confidence of human users (e.g. aircraft collision avoidance systems). In some areas, such as fairness [Hardt et al., 2016] and privacy [Toubiana et al., 2010, Dwork et al., 2012, Hardt and Talwar, 2010], the research communities have formalized their criteria, and these formalizations have allowed for a blossoming of rigorous research in these fields (without the need for interpretability). However, in many cases, formal definitions remain elusive. Following the psychology literature, where Keil et al. [2004] notes "explanations may highlight an incompleteness," we argue that interpretability can assist in qualitatively ascertaining whether other desiderata (such as fairness, privacy, reliability, robustness, causality, usability and trust) are met. For example, one can provide a feasible explanation that fails to correspond to a causal structure, exposing a potential concern.

1 Google Scholar finds more than 20,000 publications related to interpretability in ML in the last five years.
2 Merriam-Webster dictionary, accessed 2017-02-07.
# 2 Why interpretability? Incompleteness
Not all ML systems require interpretability. Ad servers, postal code sorting, aircraft collision avoidance systems: all compute their output without human intervention. Explanation is not necessary either because (1) there are no significant consequences for unacceptable results or (2) the problem is sufficiently well-studied and validated in real applications that we trust the system's decision, even if the system is not perfect.
So when is explanation necessary and appropriate? We argue that the need for interpretability stems from an incompleteness in the problem formalization, creating a fundamental barrier to optimization and evaluation. Note that incompleteness is distinct from uncertainty: the fused estimate of a missile location may be uncertain, but such uncertainty can be rigorously quantified and formally reasoned about. In machine learning terms, we distinguish between cases where unknowns result in quantified variance (e.g. trying to learn from a small data set or with limited sensors) and incompleteness that produces some kind of unquantified bias (e.g. the effect of including domain knowledge in a model selection process). Below are some illustrative scenarios:
• Scientific Understanding: The human's goal is to gain knowledge. We do not have a complete way of stating what knowledge is; thus the best we can do is ask for explanations we can convert into knowledge.
• Safety: For complex tasks, the end-to-end system is almost never completely testable; one cannot create a complete list of scenarios in which the system may fail. Enumerating all possible outputs given all possible inputs may be computationally or logistically infeasible, and we may be unable to flag all undesirable outputs.
• Ethics: The human may want to guard against certain kinds of discrimination, and their notion of fairness may be too abstract to be completely encoded into the system (e.g., one might desire a "fair" classifier for loan approval). Even if we can encode protections for specific protected classes into the system, there might be biases that we did not consider a priori (e.g., one may not build gender-biased word embeddings on purpose, but it was a pattern in data that became apparent only after the fact).
• Mismatched objectives: The agent's algorithm may be optimizing an incomplete objective, that is, a proxy function for the ultimate goal. For example, a clinical system may be optimized for cholesterol control, without considering the likelihood of adherence; an automotive engineer may be interested in engine data not to make predictions about engine failures but to more broadly build a better car.
• Multi-objective trade-offs: Two well-defined desiderata in ML systems may compete with each other, such as privacy and prediction quality [Hardt et al., 2016] or privacy and non-discrimination [Strahilevitz, 2008]. Even if each objective is fully specified, the exact dynamics of the trade-off may not be fully known, and the decision may have to be made case-by-case.
In the presence of incompleteness, explanations are one of the ways to ensure that the effects of gaps in problem formalization are visible to us.
# 3 How? A Taxonomy of Interpretability Evaluation
Even in standard ML settings, there exists a taxonomy of evaluation that is considered appropriate. In particular, the evaluation should match the claimed contribution. Evaluation of applied work should demonstrate success in the application: a game-playing agent might best a human player, a classifier may correctly identify star types relevant to astronomers. In contrast, core methods work should demonstrate generalizability via careful evaluation on a variety of synthetic and standard benchmarks.
In this section we lay out an analogous taxonomy of evaluation approaches for interpretability: application-grounded, human-grounded, and functionally-grounded. These range from task-relevant to general; we also acknowledge that while human evaluation is essential to assessing interpretability, human-subject evaluation is not an easy task. A human experiment needs to be well-designed to minimize confounding factors, consumed time, and other resources. We discuss the trade-offs between each type of evaluation and when each would be appropriate.
# 3.1 Application-grounded Evaluation: Real humans, real tasks
Application-grounded evaluation involves conducting human experiments within a real application. If the researcher has a concrete application in mind (such as working with doctors on diagnosing patients with a particular disease), the best way to show that the model works is to evaluate it with respect to the task: doctors performing diagnoses. This reasoning aligns with the methods of evaluation common in the human-computer interaction and visualization communities, where there exists a strong ethos around making sure that the system delivers on its intended task [Antunes et al., 2012, Lazar et al., 2010]. For example, a visualization for correcting segmentations from microscopy data would be evaluated via user studies on segmentation on the target image task [Suissa-Peleg et al., 2016]; a homework-hint system is evaluated on whether the student achieves better post-test performance [Williams et al., 2016].
Specifically, we evaluate the quality of an explanation in the context of its end-task, such as whether it results in better identification of errors, new facts, or less discrimination. Examples of experiments include:
• Domain expert experiment with the exact application task.

• Domain expert experiment with a simpler or partial task to shorten experiment time and increase the pool of potentially-willing subjects.
In both cases, an important baseline is how well human-produced explanations assist other humans trying to complete the task. To make high impact in real world applications, it is essential that we as a community respect the time and effort involved to do such evaluations, and also demand high standards of experimental design when such evaluations are performed. As the HCI community recognizes [Antunes et al., 2012], this is not an easy evaluation metric. Nonetheless, it directly tests the objective that the system is built for, and thus performance with respect to that objective gives strong evidence of success.
# 3.2 Human-grounded Metrics: Real humans, simplified tasks
Human-grounded evaluation is about conducting simpler human-subject experiments that maintain the essence of the target application. Such an evaluation is appealing when experiments with the target community are challenging. These evaluations can be completed with lay humans, allowing for both a bigger subject pool and lower expenses, since we do not have to compensate highly trained domain experts. Human-grounded evaluation is most appropriate when one wishes to test more general notions of the quality of an explanation. For example, to study what kinds of explanations are best understood under severe time constraints, one might create abstract tasks in which other factors (such as the overall task complexity) can be controlled [Kim et al., 2013, Lakkaraju et al., 2016].
The key question, of course, is how we can evaluate the quality of an explanation without a specific end-goal (such as identifying errors in a safety-oriented task or identifying relevant patterns in a science-oriented task). Ideally, our evaluation approach will depend only on the quality of the explanation, regardless of whether the explanation is the model itself or a post-hoc interpretation of a black-box model, and regardless of the correctness of the associated prediction. Examples of potential experiments include:
• Binary forced choice: humans are presented with pairs of explanations, and must choose the one that they find of higher quality (basic face-validity test made quantitative).
• Forward simulation/prediction: humans are presented with an explanation and an input, and must correctly simulate the model's output (regardless of the true output).
• Counterfactual simulation: humans are presented with an explanation, an input, and an output, and are asked what must be changed to change the method's prediction to a desired output (and related variants).
Here is a concrete example. The common intrusion-detection test [Chang et al., 2009] in topic models is a form of the forward simulation/prediction task: we ask the human to find the difference between the model's true output and some corrupted output as a way to determine whether the human has correctly understood what the model's true output is.
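Scoring such proxy tasks reduces to simple agreement rates. A minimal sketch (ours, with hypothetical record formats) for the forward and counterfactual simulation tasks:

```python
def forward_simulation_accuracy(records):
    # records: (human_prediction, model_output) pairs, one per trial in
    # which a participant saw an explanation plus an input.
    return sum(h == y for h, y in records) / len(records)

def counterfactual_success_rate(records):
    # records: (desired_output, model_output_after_human_edit) pairs.
    return sum(d == y for d, y in records) / len(records)

print(forward_simulation_accuracy([(1, 1), (0, 1), (1, 1), (0, 0)]))  # 0.75
```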
# 3.3 Functionally-grounded Evaluation: No humans, proxy tasks
Functionally-grounded evaluation requires no human experiments; instead, it uses some formal definition of interpretability as a proxy for explanation quality. Such experiments are appealing because even general human-subject experiments require time and costs both to perform and to get necessary approvals (e.g., IRBs), which may be beyond the resources of a machine learning researcher. Functionally-grounded evaluations are most appropriate once we have a class of models or regularizers that have already been validated, e.g. via human-grounded experiments. They may also be appropriate when a method is not yet mature or when human subject experiments are unethical.
The challenge, of course, is to determine what proxies to use. For example, decision trees have been considered interpretable in many situations [Freitas, 2014]. In section 4, we describe open problems in determining what proxies are reasonable. Once a proxy has been formalized, the challenge is squarely an optimization problem, as the model class or regularizer is likely to be discrete, non-convex and often non-differentiable. Examples of experiments include:
• Show the improvement of prediction performance of a model that is already proven to be interpretable (assumes that someone has run human experiments to show that the model class is interpretable).

• Show that one's method performs better with respect to certain regularizers (for example, is more sparse) compared to other baselines (assumes someone has run human experiments to show that the regularizer is appropriate; see the sketch below).
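As an illustration of the last point, a sparsity proxy takes only a few lines; this is our own sketch, not a metric proposed here:

```python
def sparsity(weights, eps=1e-6):
    # Fraction of effectively zero coefficients in a linear model: one
    # common functionally-grounded proxy for interpretability.
    nonzero = sum(abs(w) > eps for w in weights)
    return 1.0 - nonzero / len(weights)

print(sparsity([0.0, 1.3, 0.0, -0.2, 0.0]))  # 0.6
```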
# 4 Open Problems in the Science of Interpretability, Theory and Practice
It is essential that the three types of evaluation in the previous section inform each other: the factors that capture the essential needs of real world tasks should inform what kinds of simplified tasks we perform, and the performance of our methods with respect to functional proxies should reflect their performance in real-world settings. In this section, we describe some important open problems for creating these links between the three types of evaluations:
1. What proxies are best for what real-world applications? (functionally to application-grounded)
2. What are the important factors to consider when designing simpler tasks that maintain the essence of the real end-task? (human to application-grounded)
3. What are the important factors to consider when characterizing proxies for explanation quality? (human to functionally-grounded)
Below, we describe a path to answering each of these questions.
# 4.1 Data-driven approach to discover factors of interpretability
Imagine a matrix where rows are specific real-world tasks, columns are specific methods, and the entries are the performance of the method on the end-task. For example, one could represent how well a decision tree of depth less than 4 worked in assisting doctors in identifying pneumonia patients under age 30 in the US. Once constructed, methods in machine learning could be used to identify latent dimensions that represent factors that are important to interpretability. This approach is similar to efforts to characterize classification [Ho and Basu, 2002] and clustering problems [Garg and Kalai, 2016]. For example, one might perform matrix factorization to embed both tasks and methods respectively in low-dimensional spaces (which we can then seek to interpret), as shown in Figure 2. These embeddings could help predict what methods would be most promising for a new problem, similarly to collaborative filtering.
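A hedged sketch of this idea (our own illustration with synthetic data, not the authors' code): factorize a partially observed task-by-method performance matrix into low-rank embeddings, as in collaborative filtering. All dimensions and values here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_methods, rank = 20, 12, 3

P = rng.random((n_tasks, n_methods))      # performance of method j on task i
mask = rng.random(P.shape) < 0.4          # only ~40% of cells ever evaluated

T = 0.1 * rng.standard_normal((n_tasks, rank))    # task embeddings
M = 0.1 * rng.standard_normal((n_methods, rank))  # method embeddings
obs_i, obs_j = np.nonzero(mask)
lr, lam = 0.05, 0.01
for _ in range(20_000):                   # SGD over observed cells only
    idx = rng.integers(len(obs_i))
    i, j = obs_i[idx], obs_j[idx]
    err = P[i, j] - T[i] @ M[j]
    ti = T[i].copy()
    T[i] += lr * (err * M[j] - lam * T[i])
    M[j] += lr * (err * ti - lam * M[j])

# Predict performance for an untested (task, method) pair:
print(T[0] @ M[1])
```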
The challenge, of course, is in creating this matrix. For example, one could imagine creating a repository of clinical cases in which the ML system has access to the patient's record but not certain current features that are only accessible to the clinician, or a repository of discrimination-in-loan cases where the ML system must provide outputs that assist a lawyer in their decision. Ideally these would be linked to domain experts who have agreed to be employed to evaluate methods when applied to their domain of expertise. Just as there are now large open repositories for problems in classification, regression, and reinforcement learning [Blake and Merz, 1998, Brockman et al., 2016, Vanschoren et al., 2014], we advocate for the creation of repositories that contain problems corresponding to real-world tasks in which human input is required. Creating such repositories will be more challenging than creating collections of standard machine learning datasets because they must include a system for human assessment, but with the availability of crowdsourcing tools these technical challenges can be surmounted.

Figure 2: An example of a data-driven approach to discover factors in interpretability (a domain-by-methods matrix factorized into rank-K latent embeddings).
In practice, constructing such a matrix will be expensive since each cell must be evaluated in the context of a real application, and interpreting the latent dimensions will be an iterative effort of hypothesizing why certain tasks or methods share dimensions and then checking whether our hypotheses are true. In the next two open problems, we lay out some hypotheses about what latent dimensions may correspond to; these hypotheses can be tested via much less expensive human-grounded evaluations on simulated tasks.
# 4.2 Hypothesis: task-related latent dimensions of interpretability
Disparate-seeming applications may share common categories: an application involving preventing medical error at the bedside and an application involving support for identifying inappropriate language on social media might be similar in that they involve making a decision about a specific case (a patient, a post) in a relatively short period of time. However, when it comes to time constraints, the needs in those scenarios might be different from an application involving the understanding of the main characteristics of a large omics data set, where the goal (science) is much more abstract and the scientist may have hours or days to inspect the model outputs.
Below, we list a (non-exhaustive!) set of hypotheses about what might make tasks similar in their explanation needs:
• Global vs. Local. Global interpretability implies knowing what patterns are present in general (such as key features governing galaxy formation), while local interpretability implies knowing the reasons for a specific decision (such as why a particular loan application was rejected). The former may be important when scientific understanding or bias detection is the goal; the latter when one needs a justification for a specific decision.
• Area, Severity of Incompleteness. What part of the problem formulation is incomplete, and how incomplete is it? We hypothesize that the types of explanations needed may vary depending on whether the source of concern is due to incompletely specified inputs, constraints, domains, internal model structure, costs, or even the need to understand the training algorithm. The severity of the incompleteness may also affect explanation needs. For example, one can imagine a spectrum of questions about the safety of self-driving cars. On one end, one may have general curiosity about how autonomous cars make decisions. At the other, one may wish to check a specific list of scenarios (e.g., sets of sensor inputs that cause the car to drive off of the road by 10cm). In between, one might want to check a general property (safe urban driving) without an exhaustive list of scenarios and safety criteria.
• Time Constraints. How long can the user afford to spend to understand the explanation? A decision that needs to be made at the bedside or during the operation of a plant must be understood quickly, while in scientific or anti-discrimination applications, the end-user may be willing to spend hours trying to fully understand an explanation.
• Nature of User Expertise. How experienced is the user in the task? The user's experience will affect what kind of cognitive chunks they have, that is, how they organize individual elements of information into collections [Neath and Surprenant, 2003]. For example, a clinician may have a notion that autism and ADHD are both developmental diseases. The nature of the user's expertise will also influence what level of sophistication they expect in their explanations. For example, domain experts may expect or prefer a somewhat larger and more sophisticated model (which confirms facts they know) over a smaller, more opaque one. These preferences may be quite different from those of a hospital ethicist who may be more narrowly concerned about whether decisions are being made in an ethical manner. More broadly, decision-makers, scientists, compliance and safety engineers, data scientists, and machine learning researchers all come with different background knowledge and communication styles.
Each of these factors can be isolated in human-grounded experiments in simulated tasks to determine which methods work best when they are present.
# 4.3 Hypothesis: method-related latent dimensions of interpretability
Just as disparate applications may share common categories, disparate methods may share common qualities that correlate to their utility as explanation. As before, we provide a (non-exhaustive!) set of factors that may correspond to different explanation needs. Here, we define cognitive chunks to be the basic units of explanation.
• Form of cognitive chunks. What are the basic units of the explanation? Are they raw features? Derived features that have some semantic meaning to the expert (e.g. "neurological disorder" for a collection of diseases or "chair" for a collection of pixels)? Prototypes?
• Number of cognitive chunks. How many cognitive chunks does the explanation contain? How does the quantity interact with the type: for example, a prototype can contain a lot more information than a feature; can we handle them in similar quantities?
• Level of compositionality. Are the cognitive chunks organized in a structured way? Rules, hierarchies, and other abstractions can limit what a human needs to process at one time. For example, part of an explanation may involve defining a new unit (a chunk) that is a function of raw units, and then providing an explanation in terms of that new unit.
• Monotonicity and other interactions between cognitive chunks. Does it matter if the cognitive chunks are combined in linear or nonlinear ways? In monotone ways [Gupta et al., 2016]? Are some functions more natural to humans than others [Wilson et al., 2015, Schulz et al., 2016]?
• Uncertainty and stochasticity. How well do people understand uncertainty measures? To what extent is stochasticity understood by humans?
# 5 Conclusion: Recommendations for Researchers
In this work, we have laid the groundwork for a process to rigorously define and evaluate interpretability. There are many open questions in creating the formal links between applications, the science of human understanding, and more traditional machine learning regularizers. In the meantime, we encourage the community to consider some general principles.
The claim of the research should match the type of the evaluation. Just as one would be critical of a reliability-oriented paper that only cites accuracy statistics, the choice of evaluation should match the specificity of the claim being made. A contribution that is focused on a particular application should be expected to be evaluated in the context of that application (application-grounded evaluation), or on a human experiment with a closely-related task (human-grounded evaluation). A contribution that is focused on better optimizing a model class for some definition of interpretability should be expected to be evaluated with functionally-grounded metrics. As a community, we must be careful in the work on interpretability, both recognizing the need for and the costs of human-subject experiments.
In section 4, we hypothesized factors that may be the latent dimensions of interpretability. Creating a shared language around such factors is essential not only to evaluation, but also for the citation and comparison of related work. For example, work on creating a safe healthcare agent might be framed as focused on the need for explanation due to unknown inputs at the local scale, evaluated at the level of an application. In contrast, work on learning sparse linear models might also be framed as focused on the need for explanation due to unknown inputs, but this time evaluated at global scale. As we share each of our work with the community, we can do each other a service by describing factors such as
1. How is the problem formulation incomplete? (Section 2)
2. At what level is the evaluation being performed? (application, general user study, proxy; Section 3)
3. What are task-related relevant factors? (e.g. global vs. local, severity of incompleteness, level of user expertise, time constraints; Section 4.2)
4. What are method-related relevant factors being explored? (e.g. form of cognitive chunks, number of cognitive chunks, compositionality, monotonicity, uncertainty; Section 4.3)
and of course, adding and refining these factors as our taxonomies evolve. These considerations should move us away from vague claims about the interpretability of a particular model and toward classifying applications by a common set of terms.
Acknowledgments. This piece would not have been possible without the dozens of deep conversations about interpretability with machine learning researchers and domain experts. Our friends and colleagues, we appreciate your support. We want to particularly thank Ian Goodfellow, Kush Varshney, Hanna Wallach, Solon Barocas, Stefan Rüping and Jesse Johnson for their feedback.
# References
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
Pedro Antunes, Valeria Herskovic, Sergio F Ochoa, and Jose A Pino. Structuring dimensions for collaborative systems evaluation. ACM Computing Surveys, 2012.
William Bechtel and Adele Abrahamsen. Explanation: A mechanist alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 2005.
Catherine Blake and Christopher J Merz. {UCI} repository of machine learning databases. 1998.
Nick Bostrom and Eliezer Yudkowsky. The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 2014.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2006.
Samuel Carton, Jennifer Helsby, Kenneth Joseph, Ayesha Mahmud, Youngsoo Park, Joe Walsh, Crystal Cody, CPT Estella Patterson, Lauren Haynes, and Rayid Ghani. Identifying police officers at risk of adverse events. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.
Jonathan Chang, Jordan L Boyd-Graber, Sean Gerrish, Chong Wang, and David M Blei. Reading tea leaves: How humans interpret topic models. In NIPS, 2009.
Nick Chater and Mike Oaksford. Speculations on human causal learning and reasoning. Information sampling and adaptive cognition, 2006.
Finale Doshi-Velez, Yaorong Ge, and Isaac Kohane. Comorbidity clusters in autism spectrum disorders: an electronic health record time-series analysis. Pediatrics, 133(1):e54–e63, 2014.
Finale Doshi-Velez, Byron Wallace, and Ryan Adams. Graph-sparse LDA: a topic model with structured sparsity. Association for the Advancement of Artificial Intelligence, 2015.
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Innovations in Theoretical Computer Science Conference. ACM, 2012.
Alex Freitas. Comprehensible classification models: a position paper. ACM SIGKDD Explorations, 2014.
Vikas K Garg and Adam Tauman Kalai. Meta-unsupervised-learning: A supervised approach to unsupervised learning. arXiv preprint arXiv:1612.09030, 2016.
Stuart Glennan. Rethinking mechanistic explanation. Philosophy of science, 2002.
Bryce Goodman and Seth Flaxman. European Union regulations on algorithmic decision-making and a "right to explanation". arXiv preprint arXiv:1606.08813, 2016.
Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, and Alexander Van Esbroeck. Monotonic calibrated interpolated look-up tables. Journal of Machine Learning Research, 2016.
Sean Hamill. CMU computer won poker battle over humans by statistically significant margin. http://www.post-gazette.com/business/tech-news/2017/01/31/CMU-computer-won-poker-battle-over-humans-by-statistically-significant-margin/stories/201701310250, 2017. Accessed: 2017-02-07.
Moritz Hardt and Kunal Talwar. On the geometry of differential privacy. In ACM Symposium on Theory of Computing. ACM, 2010.
Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, 2016.
Carl Hempel and Paul Oppenheim. Studies in the logic of explanation. Philosophy of science, 1948.
Tin Kam Ho and Mitra Basu. Complexity measures of supervised classification problems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002.
Frank Keil. Explanation and understanding. Annu. Rev. Psychol., 2006.
Frank Keil, Leonid Rozenblit, and Candice Mills. What lies beneath? understanding the limits of understanding. Thinking and seeing: Visual metacognition in adults and children, 2004.
Been Kim, Caleb Chacha, and Julie Shah. Inferring robot task plans from human team meetings: A generative modeling approach with logic-based prior. Association for the Advancement of Artificial Intelligence, 2013.
Been Kim, Elena Glassman, Brittney Johnson, and Julie Shah. iBCM: Interactive Bayesian case model empowering humans via intuitive interaction. 2015a.
Been Kim, Julie Shah, and Finale Doshi-Velez. Mind the gap: A generative approach to interpretable feature selection and extraction. In Advances in Neural Information Processing Systems, 2015b.
Himabindu Lakkaraju, Stephen H Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1675–1684. ACM, 2016.
Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser. Research methods in human-computer interaction. John Wiley & Sons, 2010.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. Rationalizing neural predictions. arXiv preprint arXiv:1606.04155, 2016.
Tania Lombrozo. The structure and function of explanations. Trends in Cognitive Sciences, 10(10):464–470, 2006.
Yin Lou, Rich Caruana, and Johannes Gehrke. Intelligible models for classification and regression. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2012.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wier- stra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Ian Neath and Aimee Surprenant. Human Memory. 2003.
Clemens Otte. Safe and interpretable machine learning: A methodological review. In Computational Intelligence in Intelligent Data Analysis. Springer, 2013.
Parliament and Council of the European Union. General data protection regulation. 2016.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. arXiv preprint arXiv:1602.04938, 2016.
Salvatore Ruggieri, Dino Pedreschi, and Franco Turini. Data mining for discrimination discovery. ACM Transactions on Knowledge Discovery from Data, 2010.
Eric Schulz, Joshua Tenenbaum, David Duvenaud, Maarten Speekenbrink, and Samuel Gershman. Compositional inductive biases in function learning. bioRxiv, 2016.
D Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-François Crespo, and Dan Dennison. Hidden technical debt in machine learning systems. In Advances in Neural Information Processing Systems, 2015.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 2016.
Lior Jacob Strahilevitz. Privacy versus antidiscrimination. University of Chicago Law School Working Paper, 2008.
Adi Suissa-Peleg, Daniel Haehn, Seymour Knowles-Barley, Verena Kaynig, Thouis R Jones, Alyssa Wilson, Richard Schalek, Jeffery W Lichtman, and Hanspeter Pfister. Automatic neural reconstruction from petavoxel of electron microscopy data. Microscopy and Microanalysis, 2016.
Vincent Toubiana, Arvind Narayanan, Dan Boneh, Helen Nissenbaum, and Solon Barocas. Adnostic: Privacy preserving targeted advertising. 2010.
Joaquin Vanschoren, Jan N Van Rijn, Bernd Bischl, and Luis Torgo. OpenML: networked science in machine learning. ACM SIGKDD Explorations Newsletter, 15(2):49–60, 2014.
Kush Varshney and Homa Alemzadeh. On the safety of machine learning: Cyber-physical systems, decision sciences, and data products. CoRR, 2016.
Fulton Wang and Cynthia Rudin. Falling rule lists. In AISTATS, 2015.
Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, and Perry MacNeille. Bayesian rule sets for interpretable classification. In International Conference on Data Mining, 2017.
Joseph Jay Williams, Juho Kim, Anna Rafferty, Samuel Maldonado, Krzysztof Z Gajos, Walter S Lasecki, and Neil Heffernan. Axis: Generating explanations at scale with learnersourcing and machine learning. In ACM Conference on Learning@ Scale. ACM, 2016.
Andrew Wilson, Christoph Dann, Chris Lucas, and Eric Xing. The human kernel. In Advances in Neural Information Processing Systems, 2015.
| {
"id": "1606.04155"
} |
1702.08138 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Social media platforms provide an environment where people can freely engage
in discussions. Unfortunately, they also enable several problems, such as
online harassment. Recently, Google and Jigsaw started a project called
Perspective, which uses machine learning to automatically detect toxic
language. A demonstration website has been also launched, which allows anyone
to type a phrase in the interface and instantaneously see the toxicity score
[1]. In this paper, we propose an attack on the Perspective toxic detection
system based on the adversarial examples. We show that an adversary can subtly
modify a highly toxic phrase in a way that the system assigns significantly
lower toxicity score to it. We apply the attack on the sample phrases provided
in the Perspective website and show that we can consistently reduce the
toxicity scores to the level of the non-toxic phrases. The existence of such
adversarial examples is very harmful for toxic detection systems and seriously
undermines their usability. | http://arxiv.org/pdf/1702.08138 | Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran | cs.LG, cs.CY, cs.SI | 4 pages | null | cs.LG | 20170227 | 20170227 | arXiv:1702.08138v1 [cs.LG] 27 Feb 2017
# Deceiving Google's Perspective API Built for Detecting Toxic Comments
Hossein Hosseini, Sreeram Kannan, Baosen Zhang and Radha Poovendran
Network Security Lab (NSL), Department of Electrical Engineering, University of Washington, Seattle, WA
Email: {hosseinh, ksreeram, zhangbao, rp3}@uw.edu
Abstract: Social media platforms provide an environment where people can freely engage in discussions. Unfortunately, they also enable several problems, such as online harassment. Recently, Google and Jigsaw started a project called Perspective, which uses machine learning to automatically detect toxic language. A demonstration website has also been launched, which allows anyone to type a phrase in the interface and instantaneously see the toxicity score [1].

In this paper, we propose an attack on the Perspective toxic detection system based on adversarial examples. We show that an adversary can subtly modify a highly toxic phrase in a way that the system assigns a significantly lower toxicity score to it. We apply the attack on the sample phrases provided in the Perspective website and show that we can consistently reduce the toxicity scores to the level of the non-toxic phrases. The existence of such adversarial examples is very harmful for toxic detection systems and seriously undermines their usability.
# I. INTRODUCTION
Social media platforms provide an environment where people can learn about the trends and news, freely share their opinions and engage in discussions. Unfortunately, the lack of a moderating entity in these platforms has caused several problems, ranging from the wide spread of fake news to online harassment [2]. Due to the growing concern about the impact of online harassment on the people's experience of the Internet, many platforms are taking steps to enhance the safety of the online environments [3], [4].
Some of the platforms employ approaches such as refining the information based on crowdsourcing (upvotes/downvotes), turning off comments or manual moderation to mitigate the effect of the inappropriate contents [5]. These approaches however are inefficient and not scalable. As a result, there have been many calls for researchers to develop methods to automatically detect abusive or toxic content in real time [6].
Recent advances in machine learning have transformed many domains such as computer vision [7], speech recognition [8], and language processing [9]. Many researchers have explored using machine learning to also tackle the problem of online harassment. Recently, Google and Jigsaw launched a project called Perspective [1], which uses machine learning to automatically detect online insults, harassment, and abusive speech. The system intends to bring Conversation AI to help with providing a safe environment for online discussions [10].

Perspective is an API that enables developers to use the toxic detector running on Google's servers, to identify harassment and abuse on social media or more efficiently filter invective from the comments on a news website. Jigsaw has partnered with online communities and publishers, such as Wikipedia [3] and The New York Times [11], to implement this toxicity measurement system.

Recently, a demonstration website has been launched, which allows anyone to type a phrase in Perspective's interface and instantaneously see how it rates on the "toxicity" scale [1]. The Perspective website has also open sourced the experiments, models and research data in order to explore the strengths and weaknesses of using machine learning as a tool for online discussion.
The implicit assumption of learning models is that they will be deployed in benign settings. However, many works have pointed out their vulnerability in adversarial scenarios [12]-[14]. One type of the vulnerabilities of machine learning algorithms is that an adversary can change the algorithm output by subtly perturbing the input, often unnoticeably by humans. Such inputs are called adversarial examples [15], and have been shown to be effective against different machine learning algorithms even when the adversary has only a black-box access to the target model [16].
In this paper, we demonstrate the vulnerability of the recently-released Google's Perspective system against adversarial examples. In the text classification task of Perspective, adversarial examples can be defined as modified texts which contain the same highly abusive language as the original text, yet receive a significantly lower toxicity score from the learning model. Through different experiments, we show that an adversary can deceive the system by misspelling the abusive words or by adding punctuation between the letters. The existence of adversarial examples is very harmful for toxic detector systems and seriously undermines their usability, especially since these systems are likely to be employed in adversarial settings. We conclude the paper by proposing some countermeasures to the proposed attack.

This work was supported by ONR grants N00014-14-1-0029 and N00014-16-1-2710, ARO grant W911NF-16-1-0485 and NSF grant CNS-1446866.
# II. BACKGROUND
A. Brief Description of Google's Perspective API
Perspective is an API created by Jigsaw and Google's Counter Abuse Technology team in Conversation-AI. Conversation AI is a collaborative research effort exploring ML as a tool for better discussions online [17]. The API uses machine learning models to score the toxicity of an input text, where toxic is defined as "a rude, disrespectful, or unreasonable comment that is likely to make one leave a discussion."
TABLE I: Demonstration of the Attack on the Perspective Toxic Detection System. All phrases in the first column of the table are chosen from the examples provided by the Perspective website [1].
| Original Phrase (Toxicity Score) | Modified Phrase (Toxicity Score) |
| --- | --- |
| Climate change is happening and it's not changing in our favor. If you think differently you're an idiot. (84%) | Climate change is happening and it's not changing in our favor. If you think differently you're an idiiot. (20%) |
| They're stupid, it's getting warmer, we should enjoy it while it lasts (86%) | They're st.upid, it's getting warmer, we should enjoy it while it lasts (2%) |
| They are liberal idiots who are uneducated (90%) | They are liberal i.diots who are un.educated (15%) |
| idiots. backward thinking people. nationalists. not accepting facts. susceptible to lies. (80%) | idiiots. backward thinking people. nationaalists. not accepting facts. susceptible to l.ies. (17%) |
| They are stupid and ignorant with no class (91%) | They are st.upid and ig.norant with no class (11%) |
| It's stupid and wrong (89%) | It's stuipd and wrong (17%) |
| If they voted for Hilary they are idiots (90%) | If they voted for Hilary they are id.iots (12%) |
| Anyone who voted for Trump is a moron (80%) | Anyone who voted for Trump is a mo.ron (13%) |
| Screw you trump supporters (79%) | S c r e w you trump supporters (17%) |
Google and Jigsaw developed the measurement tool by taking millions of comments from different publishers and then asking panels of ten people to rate the comments on a scale from "very toxic" to "very healthy" contribution. The resulting judgments provided a large set of training examples for the machine learning model.

Jigsaw has partnered with online communities and publishers to implement the toxicity measurement system. Wikipedia uses it to perform a study of its editorial discussion pages [3] and The New York Times is planning to use it as a first pass of all its comments, automatically flagging abusive ones for its team of human moderators [11]. The API outputs the scores in real-time, so that publishers can integrate it into their website to show toxicity ratings to commenters even during typing [5].
# III. THE PROPOSED ATTACKS
B. Adversarial Examples for Learning Systems
Machine learning models are generally designed to yield the best performance on clean data and in benign settings. As a result, they are subject to attacks in adversarial scenarios [12]-[14]. One type of the vulnerabilities of the machine learning algorithms is that an adversary can change the algorithm prediction score by perturbing the input slightly, often unnoticeably by humans. Such inputs are called adversarial examples [15].
Adversarial examples have been applied to models for different tasks, such as image classification [15], [18], [19], music content analysis [20] and malware classification [21]. In this work, we generate adversarial examples on a real-world text classifier system. In the context of scoring toxicity, adversarial examples can be defined as modified phrases that contain the same highly abusive language as the original one, yet receive a significantly lower toxicity score from the model. In a similar work [22], the authors presented a method for gender obfuscation in social media writing. The proposed method modifies the text such that the algorithm classifies the writer's gender as a certain target gender, under limited knowledge of the classifier and while preserving the text's fluency and meaning. The modified text is not required to be adversarial, i.e., a human may also classify it as the target gender. In contrast, in the application of toxic text detection, the adversary intends to deceive the classifier, while maintaining the abusive content of the text.
Recently, a website has been launched for Perspective demonstration, which allows anyone to type a phrase in the interface and instantaneously receive its toxicity score [1]. The website provides sample phrases for three categories of topics "that are often difficult to discuss online". The categories are 1) Climate Change, 2) Brexit and 3) US Election.
In this section, we demonstrate an attack on the Perspective toxic detection system, based on adversarial examples. In particular, we show that an adversary can subtly modify a toxic phrase such that the model will output a very low toxicity score for the modified phrase. The attack setting is as follows. The adversary possesses a phrase with toxic content and tries different perturbations on the words, until she succeeds in significantly reducing the confidence of the model that the phrase is toxic. Note that the adversary does not have access to the model or training data, and can only query the model and get the toxicity score.
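The attack loop itself is simple enough to sketch. The following is our own illustration, not the authors' code; `toxicity_score` is a hypothetical stand-in for a query to the Perspective demo.

```python
def perturbations(word):
    # Candidate modifications mirroring Table I: insert a dot, repeat a
    # letter, or swap two adjacent letters.
    for i in range(1, len(word)):
        yield word[:i] + "." + word[i:]                        # "idiot" -> "i.diot"
    for i in range(len(word)):
        yield word[:i + 1] + word[i] + word[i + 1:]            # "idiot" -> "idiiot"
    for i in range(len(word) - 1):
        yield word[:i] + word[i + 1] + word[i] + word[i + 2:]  # letter swap

def evade(phrase, toxic_words, toxicity_score, target=0.2):
    # Greedily replace each toxic word with its lowest-scoring variant,
    # stopping once the phrase scores at or below the target.
    for word in toxic_words:
        best = min(perturbations(word),
                   key=lambda v: toxicity_score(phrase.replace(word, v)))
        phrase = phrase.replace(word, best)
        if toxicity_score(phrase) <= target:
            break
    return phrase
```

In practice each call to `toxicity_score` is one query to the black-box model, so the cost of the attack is a handful of queries per toxic word.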
Table I demonstrates the attack on sample phrases provided by the Perspective website. The first column represents the original phrases along with the toxicity scores and the second column provides the adversarially modified phrases and their corresponding toxicity scores.¹ For better demonstration of the attack, we chose phrases with different toxic words and also introduced different types of errors, rather than searching for the best error type that would potentially yield a lower toxicity score. The boldface words are the toxic words that the adversary has modified. The modifications are adding a dot between two letters, adding spaces between all letters, or misspelling the word (repeating one letter twice or swapping two letters). As can be seen, we can consistently reduce the toxicity score to the level of the benign phrases by subtly modifying the toxic words.
1The experiments were performed on the interface of the Perspective website on Feb. 24, 2017.
TABLE II: Demonstration of False Alarms of the Perspective Toxic Detection System. All phrases in the first column of the table are chosen from the examples provided by the Perspective website [1].
| Original Phrase (Toxicity Score) | Modified Phrase (Toxicity Score) |
| --- | --- |
| Climate change is happening and it's not changing in our favor. If you think differently you're an idiot (84%) | Climate change is happening and it's not changing in our favor. If you think differently you're **not** an idiot (73%) |
| They're stupid, it's getting warmer, we should enjoy it while it lasts (86%) | They're **not** stupid, it's getting warmer, we should enjoy it while it lasts (74%) |
| They are liberal idiots who are uneducated. (90%) | They are **not** liberal idiots who are uneducated. (83%) |
| idiots. backward thinking people. nationalists. not accepting facts. susceptible to lies. (80%) | **not** idiots. **not** backward thinking people. **not** nationalists. accepting facts. **not** susceptible to lies. (74%) |
| They are stupid and ignorant with no class (91%) | They are **not** stupid and ignorant with no class (84%) |
| It's stupid and wrong (89%) | It's **not** stupid and wrong (83%) |
| If they voted for Hilary they are idiots (90%) | If they voted for Hilary they are **not** idiots (81%) |
| Anyone who voted for Trump is a moron (80%) | Anyone who voted for Trump is **not** a moron (65%) |
| Screw you trump supporters (79%) | **Will not** screw you trump supporters (68%) |
As can be seen, we can consistently reduce the toxicity score to the level of benign phrases by subtly modifying the toxic words.
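To make the attack process concrete, the following is a minimal sketch of the query loop described above. All names here are ours: `score_fn` stands in for querying the Perspective API (which the adversary can only access as a black box), and `perturb_word` enumerates the perturbation types we used.

```python
def perturb_word(word):
    """Yield the character-level perturbations used in the attack."""
    for i in range(1, len(word)):                       # "idiot" -> "i.diot"
        yield word[:i] + "." + word[i:]
    yield " ".join(word)                                # "idiot" -> "i d i o t"
    for i in range(len(word)):                          # "idiot" -> "idiiot"
        yield word[:i] + word[i] + word[i:]
    for i in range(len(word) - 1):                      # "idiot" -> "idoit"
        yield word[:i] + word[i + 1] + word[i] + word[i + 2:]

def attack(phrase, toxic_word, score_fn, threshold=0.5):
    """Try perturbations of one toxic word until the score drops below threshold."""
    for variant in perturb_word(toxic_word):
        candidate = phrase.replace(toxic_word, variant)
        if score_fn(candidate) < threshold:             # one black-box query per try
            return candidate
    return None                                         # no perturbation succeeded
```

The threshold of 0.5 is an arbitrary illustrative choice; in practice the adversary stops whenever the score reaches the level of benign phrases.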
Moreover, we observed that the adversarial perturbations transfer among different phrases, i.e., if a certain modification to a word reduces the toxicity score of one phrase, the same modification to that word is likely to reduce the toxicity score of other phrases as well. Using this property, an adversary can form a dictionary of adversarial perturbations for every word and significantly simplify the attack process.
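Using the hypothetical helpers from the previous sketch, the dictionary-based variant could look as follows; it caches the first perturbation that works for a word and reuses it across phrases, saving black-box queries.

```python
adversarial_dictionary = {}  # e.g. {"idiot": "i.diot"}, built up during the attack

def attack_with_dictionary(phrase, toxic_words, score_fn, threshold=0.5):
    """Reuse cached perturbations; search only for words not seen before."""
    for word in toxic_words:
        if word not in adversarial_dictionary:
            for variant in perturb_word(word):
                if score_fn(phrase.replace(word, variant)) < threshold:
                    adversarial_dictionary[word] = variant
                    break
        if word in adversarial_dictionary:
            phrase = phrase.replace(word, adversarial_dictionary[word])
    return phrase
```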
Through the experiments, we made the following observations:
• False alarms on benign phrases: the Perspective system also wrongly assigns high toxicity scores to apparently benign phrases. Table II demonstrates the false alarms on the same sample phrases as Table I. The first column shows the original phrases along with their toxicity scores, and the second column provides the negated phrases and the corresponding toxicity scores. The boldface words are the ones added to the toxic phrases. As can be seen, the system consistently fails to capture the inherent semantics of the modified phrases and wrongly assigns high toxicity scores to them.
• Robustness to random misspellings: we observed that the system assigns a toxicity score of 34% to most misspelled and random words. It is also somewhat robust to phrases that contain randomly modified toxic words.
• Vulnerability to poisoning attacks: the Perspective interface allows users to provide feedback on the toxicity scores of phrases, suggesting that the learning algorithm updates itself using the new data. This can expose the system to poisoning attacks, in which an adversary modifies the training data (in this case, the labels) so that the model assigns low toxicity scores to certain phrases.
# IV. OPEN PROBLEMS IN DEFENSE METHODS

The developers of Perspective have mentioned that the system is in the early days of research and development, and
that the experiments, models, and research data are published to explore the strengths and weaknesses of using machine learning as a tool for online discussion.
It remains an open problem how to robustify the Perspective system against adversarial examples. Scoring the semantic toxicity of a phrase is clearly a very challenging task. In the following, we briefly review some of the possible approaches for improving the robustness of toxic detection systems:
• Adversarial training: in this approach, during the training phase, we generate adversarial examples and train the model to assign the original label to them [18]. In the context of toxic detection systems, we would need to include different modified versions of the toxic words in the training data. While this approach may improve the robustness of the system against adversarial examples, it does not seem practical to train the model on all variants of every word.
• Spell checking: many of the adversarial examples can be detected by applying a spell-checking filter before the toxic detection system; a minimal sketch of such a filter follows this list. This approach may, however, increase the false alarm rate.
• Blocking suspicious users for a period of time: the adversary needs to try different error patterns to eventually evade the toxic detection system. Once a user fails to pass the threshold a number of times, the system can block her for a while. This approach can discourage users from frequently using toxic language.
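As a concrete illustration of the spell-checking idea above, a minimal filter could normalize the obfuscations demonstrated in this paper (inserted dots, letter spacing) and fuzzy-match the result against a vocabulary of known toxic words, which also catches doubled or swapped letters. The vocabulary, patterns and cutoff below are illustrative assumptions, not part of the Perspective system.

```python
import difflib
import re

TOXIC_VOCABULARY = ["idiot", "stupid", "moron"]  # illustrative only

def normalize(phrase):
    """Undo simple character-level obfuscations before scoring."""
    text = phrase.lower()
    text = re.sub(r"(?<=[a-z])\.(?=[a-z])", "", text)     # "i.diot" -> "idiot"
    text = re.sub(r"\b(?:[a-z] ){2,}[a-z]\b",             # "i d i o t" -> "idiot"
                  lambda m: m.group().replace(" ", ""), text)
    return text

def is_suspicious(phrase):
    """Flag phrases whose normalized words fuzzy-match the toxic vocabulary."""
    for word in re.findall(r"[a-z]+", normalize(phrase)):
        if difflib.get_close_matches(word, TOXIC_VOCABULARY, cutoff=0.8):
            return True  # e.g. re-score the normalized text or involve a moderator
    return False
```

As the bullet above notes, such a filter trades adversarial robustness for additional false alarms on legitimately misspelled text.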
# V. CONCLUSION
In this paper, we presented an attack on the recently released Google Perspective API for detecting toxic comments. We showed that the system can be deceived by slightly perturbing abusive phrases so that they receive very low toxicity scores, while preserving the intended meaning. We also showed that the system has a high false alarm rate, assigning high toxicity scores to benign phrases. We provided detailed examples for the studied cases. Our future work includes the development of countermeasures against such attacks.
Disclaimer: The phrases used in Tables I and II are chosen from the examples provided on the Perspective website [1] for the purpose of demonstrating the results and do not represent the views or opinions of the authors or sponsoring agencies.
# REFERENCES
[1] https://www.perspectiveapi.com/
[2] M. Duggan, Online Harassment. Pew Research Center, 2014.
[3] https://meta.wikimedia.org/wiki/Research:Detox
[4] https://www.nytimes.com/interactive/2016/09/20/insider/approve-or-reject-moderation-quiz.html
[5] https://www.wired.com/2017/02/googles-troll-fighting-ai-now-belongs-world/
[6] E. Wulczyn, N. Thain, and L. Dixon, "Ex machina: Personal attacks seen at scale," arXiv preprint arXiv:1610.08914, 2016.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[8] G. E. Dahl, D. Yu, L. Deng, and A. Acero, "Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30–42, 2012.
[9] R. Collobert and J. Weston, "A unified architecture for natural language processing: Deep neural networks with multitask learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 160–167, ACM, 2008.
[10] https://jigsaw.google.com/
[11] http://www.nytco.com/the-times-is-partnering-with-jigsaw-to-expand-comment-capabilities/
[12] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, "Can machine learning be secure?," in Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pp. 16–25, ACM, 2006.
[13] M. Barreno, B. Nelson, A. D. Joseph, and J. Tygar, "The security of machine learning," Machine Learning, vol. 81, no. 2, pp. 121–148, 2010.
[14] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. Tygar, "Adversarial machine learning," in Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, pp. 43–58, ACM, 2011.
[15] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[16] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against deep learning systems using adversarial examples," arXiv preprint arXiv:1602.02697, 2016.
[17] https://conversationai.github.io/
[18] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[19] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, "The limitations of deep learning in adversarial settings," in 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387, IEEE, 2016.
[20] C. Kereliuk, B. L. Sturm, and J. Larsen, "Deep learning and music adversaries," IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 2059–2071, 2015.
[21] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel, "Adversarial perturbations against deep neural networks for malware classification," arXiv preprint arXiv:1606.04435, 2016.
[22] S. Reddy and K. Knight, "Obfuscating gender in social media writing," NLP+CSS 2016, p. 17, 2016.
| {
"id": "1606.04435"
} |
1702.04595 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | This article presents the prediction difference analysis method for
visualizing the response of a deep neural network to a specific input. When
classifying images, the method highlights areas in a given input image that
provide evidence for or against a certain class. It overcomes several
shortcomings of previous methods and provides great additional insight into the
decision making process of classifiers. Making neural network decisions
interpretable through visualization is important both to improve models and to
accelerate the adoption of black-box classifiers in application areas such as
medicine. We illustrate the method in experiments on natural images (ImageNet
data), as well as medical images (MRI brain scans). | http://arxiv.org/pdf/1702.04595 | Luisa M Zintgraf, Taco S Cohen, Tameem Adel, Max Welling | cs.CV, cs.AI | ICLR2017 | null | cs.CV | 20170215 | 20170215 |
Published as a conference paper at ICLR 2017
VISUALIZING DEEP NEURAL NETWORK DECISIONS: PREDICTION DIFFERENCE ANALYSIS
Luisa M Zintgraf1,3, Taco S Cohen1, Tameem Adel1, Max Welling1,2 1University of Amsterdam, 2Canadian Institute of Advanced Research, 3Vrije Universiteit Brussel {lmzintgraf,tameem.hesham}@gmail.com, {t.s.cohen, m.welling}@uva.nl
# ABSTRACT
This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcomings of previous methods and provides great additional insight into the decision making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).
# 1 INTRODUCTION
Over the last few years, deep neural networks (DNNs) have emerged as the method of choice for perceptual tasks such as speech recognition and image classification. In essence, a DNN is a highly complex non-linear function, which makes it hard to understand how a particular classification comes about. This lack of transparency is a significant impediment to the adoption of deep learning in areas of industry, government and healthcare where the cost of errors is high.
In order to realize the societal promise of deep learning - e.g., through self-driving cars or personalized medicine - it is imperative that classifiers learn to explain their decisions, whether it is in the lab, the clinic, or the courtroom. In scientific applications, a better understanding of the complex dependencies learned by deep networks could lead to new insights and theories in poorly understood domains.
In this paper, we present a new, probabilistically sound methodology for explaining classification decisions made by deep neural networks. The method can be used to produce a saliency map for each (instance, node) pair that highlights the parts (features) of the input that constitute most evidence for or against the activation of the given (internal or output) node. See figure 1 for an example.
In the following two sections, we review related work and then present our approach. In section 4 we provide several demonstrations of our technique for deep convolutional neural networks (DCNNs) trained on ImageNet data, and further show how the method can be applied when classifying MRI brain scans of HIV patients with neurodegenerative disease.
Figure 1: Example of our visualization method: explains why the DCNN (GoogLeNet) predicts "cockatoo". Shown is the evidence for (red) and against (blue) the prediction. We see that the facial features of the cockatoo are most supportive for the decision, and parts of the body seem to constitute evidence against it. In fact, the classifier most likely considers them evidence for the second-highest scoring class, white wolf.
# 2 RELATED WORK
Broadly speaking, there are two approaches for understanding DCNNs through visualization investigated in the literature: find an input image that maximally activates a given unit or class score to visualize what the network is looking for (Erhan et al., 2009; Simonyan et al., 2013; Yosinski et al., 2015), or visualize how the network responds to a specific input image in order to explain a particular classification made by the network. The latter will be the subject of this paper.
One such instance-specific method is class saliency visualization proposed by Simonyan et al. (2013), who measure how sensitive the classification score is to small changes in pixel values, by computing the partial derivative of the class score with respect to the input features using standard backpropagation. They also show that there is a close connection to using deconvolutional networks for visualization, proposed by Zeiler & Fergus (2014). Other methods include Shrikumar et al. (2016), who compare the activation of a unit when a specific input is fed forward through the net to a reference activation for that unit. Zhou et al. (2016) and Bach et al. (2015) also generate interesting visualization results for individual inputs, but are both not as closely related to our method as the two papers mentioned above. The idea of our method is similar to another analysis Zeiler & Fergus (2014) make: they estimate the importance of input pixels by visualizing the probability of the (correct) class as a function of a gray patch occluding parts of the image. In this paper, we take a more rigorous approach to both removing information from the image and evaluating the effect of this.
In the field of medical image classification specifically, a widely used method for visualizing feature importances is to simply plot the weights of a linear classifier (Klöppel et al., 2008; Ecker et al., 2010), or the p-values of these weights (determined by permutation testing) (Mourao-Miranda et al., 2005; Wang et al., 2007). These are independent of the input image, and, as argued by Gaonkar & Davatzikos (2013) and Haufe et al. (2014), interpreting these weights can be misleading in general.
The work presented in this paper is based on an instance-specific method by Robnik-Šikonja & Kononenko (2008), the prediction difference analysis, which is reviewed in the next section. Our main contributions are three substantial improvements of this method: conditional sampling (section 3.1), multivariate analysis (section 3.2), and deep visualization (section 3.3).
# 3 APPROACH
Our method is based on the technique presented by Robnik-Šikonja & Kononenko (2008), which we will now review. For a given prediction, the method assigns a relevance value to each input feature with respect to a class c. The basic idea is that the relevance of a feature xi can be estimated by measuring how the prediction changes if the feature is unknown, i.e., the difference between p(c|x) and p(c|x\i), where x\i denotes the set of all input features except xi.
To find p(c|x\i), i.e., evaluate the prediction when a feature is unknown, the authors propose three strategies. The first is to label the feature as unknown (which only few classifiers allow). The second is to re-train the classifier with the feature left out (which is clearly infeasible for DNNs and high-dimensional data like images). The third approach is to simulate the absence of a feature by marginalizing the feature:
$$p(c|x_{\setminus i}) = \sum_{x_i} p(x_i|x_{\setminus i})\, p(c|x_{\setminus i}, x_i) \qquad (1)$$
(with the sum running over all possible values for xi). However, modeling p(xi|x\i) can easily become infeasible with a large number of features. Therefore, the authors approximate equation (1) by assuming that feature xi is independent of the other features, x\i:
$$p(c|x_{\setminus i}) \approx \sum_{x_i} p(x_i)\, p(c|x_{\setminus i}, x_i) \qquad (2)$$
The prior probability p(xi) is usually approximated by the empirical distribution for that feature.
Once the class probability p(c|x\i) is estimated, it can be compared to p(c|x). We stick to an evaluation proposed by the authors, referred to as weight of evidence, given by

$$\mathrm{WE}_i(c|x) = \log_2\big(\mathrm{odds}(c|x)\big) - \log_2\big(\mathrm{odds}(c|x_{\setminus i})\big) \qquad (3)$$
Figure 2: Simple illustration of the sampling procedure in algorithm 1. Given the input image x, we select every possible patch x_w (in a sliding window fashion) of size k × k and place a larger patch x̂_w of size l × l around it. We can then conditionally sample x_w by conditioning on the surrounding patch x̂_w.
Algorithm 1 Evaluating the prediction difference using conditional and multivariate sampling
Input: classifier with outputs p(c|x), input image x of size n × n, inner patch size k, outer patch size l > k, class of interest c, probabilistic model over patches of size l × l, number of samples S
Initialization: WE = zeros(n×n), counts = zeros(n×n)
for every patch x_w of size k × k in x do
    x' = copy(x)
    sum_w = 0
    define patch x̂_w of size l × l that contains x_w
    for s = 1 to S do
        x'_w ← x_w sampled from p(x_w | x̂_w \ x_w)
        sum_w += p(c|x')        ▷ evaluate classifier
    end for
    p(c|x \ x_w) := sum_w / S
    WE[coordinates of x_w] += log2(odds(c|x)) − log2(odds(c|x \ x_w))
    counts[coordinates of x_w] += 1
end for
Output: WE / counts             ▷ point-wise division
where odds(c|x) = p(c|x)/(1 − p(c|x)). To avoid problems with zero probabilities, the Laplace correction p ← (pN + 1)/(N + K) is used, where N is the number of training instances and K the number of classes.
The method produces a relevance vector (WE_i)_{i=1...m} (m being the number of features) of the same size as the input, which reflects the relative importance of all features. A large prediction difference means that the feature contributed substantially to the classification, whereas a small difference indicates that the feature was not important for the decision. A positive value WE_i means that the feature has contributed evidence for the class of interest: removing it would decrease the confidence of the classifier in the given class. A negative value on the other hand indicates that the feature displays evidence against the class: removing it also removes potentially conflicting or irritating information and the classifier becomes more certain in the investigated class.
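For concreteness, a minimal sketch of this univariate analysis with marginal sampling (equations (2) and (3)) might look as follows; `predict` (a classifier returning class probabilities for a batch of flattened inputs), `data` (an array of training examples), and the values of N and K are assumptions standing in for a concrete setup.

```python
import numpy as np

def weight_of_evidence(x, c, predict, data, num_samples=10, N=10000, K=1000):
    """Relevance of each feature of x for class c, via marginal sampling."""
    def log_odds(p):
        p = (p * N + 1.0) / (N + K)            # Laplace correction
        return np.log2(p / (1.0 - p))

    p_full = predict(x[None, :])[0, c]         # p(c|x)
    we = np.zeros(x.size)
    for i in range(x.size):
        xs = np.tile(x, (num_samples, 1))
        # draw x_i from its empirical marginal p(x_i), as in Eq. (2)
        xs[:, i] = data[np.random.choice(len(data), num_samples), i]
        p_without = predict(xs)[:, c].mean()   # approximates p(c | x without x_i)
        we[i] = log_odds(p_full) - log_odds(p_without)
    return we
```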
3.1 CONDITIONAL SAMPLING
In equation (2), the conditional probability p(xi|x\i) of a feature xi is approximated using the marginal distribution p(xi). This is a very crude approximation. In images for example, a pixel's value is highly dependent on other pixels. We propose a much more accurate approximation, based on the following two observations: a pixel depends most strongly on a small neighborhood around it, and the conditional of a pixel given its neighborhood does not depend on the position of the pixel in the image. For a pixel xi, we can therefore find a patch x̂_i of size l × l that contains xi, and condition on the remaining pixels in that patch:
$$p(x_i|x_{\setminus i}) \approx p(x_i|\hat{x}_{\setminus i}) \qquad (4)$$
This greatly improves the approximation while remaining completely tractable.
For a feature to become relevant when using conditional sampling, it now has to satisfy two conditions: being relevant to predict the class of interest, and being hard to predict from the neighboring pixels. Relative to the marginal method, we therefore downweight the pixels that can easily be predicted and are thus redundant in this sense.
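One tractable instantiation of equation (4), matching the multivariate normal we use in the experiments of section 4, is to fit a Gaussian over flattened l × l training patches and sample the inner window conditioned on its surrounding ring via the standard Gaussian conditioning formulas. The array layout and helper names below are our assumptions.

```python
import numpy as np

def fit_patch_gaussian(patches):
    """patches: (M, l*l) array of flattened l-by-l training patches."""
    return patches.mean(axis=0), np.cov(patches, rowvar=False)

def sample_inner_given_outer(mu, cov, inner, outer, outer_values, S=10):
    """Sample the inner pixels given the surrounding ones (index arrays)."""
    gain = cov[np.ix_(inner, outer)] @ np.linalg.pinv(cov[np.ix_(outer, outer)])
    cond_mu = mu[inner] + gain @ (outer_values - mu[outer])
    cond_cov = cov[np.ix_(inner, inner)] - gain @ cov[np.ix_(outer, inner)]
    cond_cov = (cond_cov + cond_cov.T) / 2.0   # guard against numerical asymmetry
    return np.random.multivariate_normal(cond_mu, cond_cov, size=S)
```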
3.2 MULTIVARIATE ANALYSIS
Robnik-Šikonja & Kononenko (2008) take a univariate approach: only one feature at a time is removed. However, we would expect that a neural network is relatively robust to just one feature of a high-dimensional input being unknown, like a pixel in an image. Therefore, we will remove several features at once by again making use of our knowledge about images and strategically choosing these feature sets: patches of connected pixels. Instead of going through all individual pixels, we go through all patches of size k × k in the image (k × k × 3 for RGB images and k × k × k for 3D images like MRI scans), implemented in a sliding window fashion. The patches are overlapping, so that ultimately an individual pixel's relevance is obtained by taking the average relevance obtained from the different patches it was in.
Algorithm 1 and figure 2 illustrate how the method can be implemented, incorporating the proposed improvements.
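Read alongside algorithm 1, a minimal Python sketch for a single-channel n × n image might look as follows. `predict` (returning p(c|x)) and `sample_patch` (standing in for the conditional sampler above) are assumed helpers, and the Laplace correction from equation (3) is omitted for brevity.

```python
import numpy as np

def prediction_difference(x, c, predict, sample_patch, k=10, l=14, S=10):
    """Sliding-window, multivariate prediction difference (cf. Algorithm 1)."""
    n = x.shape[0]
    WE = np.zeros((n, n))
    counts = np.zeros((n, n))
    log_odds = lambda p: np.log2(p / (1.0 - p))
    p_full = predict(x)[c]                          # p(c|x)
    for r in range(n - k + 1):
        for q in range(n - k + 1):
            x_prime = x.copy()
            total = 0.0
            for _ in range(S):
                # resample the k*k window given its l*l surrounding patch
                x_prime[r:r + k, q:q + k] = sample_patch(x, (r, q), k, l)
                total += predict(x_prime)[c]
            p_without = total / S                   # ~ p(c | x without the window)
            WE[r:r + k, q:q + k] += log_odds(p_full) - log_odds(p_without)
            counts[r:r + k, q:q + k] += 1
    return WE / counts                              # point-wise division
```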
3.3 DEEP VISUALIZATION OF HIDDEN LAYERS
When trying to understand neural networks and how they make decisions, it is not only interesting to analyze the input-output relation of the classifier, but also to look at what is going on inside the hidden layers of the network. We can adapt the method to see how the units of any layer of the network influence a node from a deeper layer. Mathematically, we can formulate this as follows. Let h be the vector representation of the values in a layer H in the network (after forward-propagating the input up to this layer). Further, let z = z(h) be the value of a node that depends on h, i.e., a node in a subsequent layer. Then the analog of equation (1) is given by the expectation:
$$g(z|h_{\setminus i}) \equiv \mathbb{E}_{p(h_i|h_{\setminus i})}[z(h)] = \sum_{h_i} p(h_i|h_{\setminus i})\, z(h_{\setminus i}, h_i) \qquad (5)$$
which expresses the distribution of z when unit h_i in layer H is unobserved. The equation now works for arbitrary layer/unit combinations, and evaluates to the same as equation (1) when the input-output relation is analyzed. To evaluate the difference between g(z|h) and g(z|h\i), we will in general use the activation difference, AD_i(z|h) = g(z|h) − g(z|h\i), for the case when we are not dealing with probabilities (and equation (3) is not applicable).
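Under the assumption of two helper functions - `forward_to`, which runs the network up to layer H, and `head`, which maps an activation vector h to the deeper node's value z(h) - the activation difference can be sketched as:

```python
def activation_difference(x, i, forward_to, head, sample_hi, S=10):
    """Estimate AD_i(z|h) = g(z|h) - g(z|h with unit i unobserved), Eq. (5)."""
    h = forward_to(x)              # activations of layer H for input x
    z_full = head(h)               # g(z|h): the node's value with h observed
    total = 0.0
    for _ in range(S):
        h_s = h.copy()
        h_s[i] = sample_hi(h, i)   # draw h_i conditioned on the other units
        total += head(h_s)
    return z_full - total / S      # g(z|h) minus the Monte Carlo estimate
```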
# 4 EXPERIMENTS
In this section, we illustrate how the proposed visualization method can be applied, on the ImageNet dataset of natural images when using DCNNs (section 4.1), and on a medical imaging dataset of MRI scans when using a logistic regression classifier (section 4.2). For marginal sampling we always use the empirical distribution, i.e., we replace a feature (patch) with samples taken directly from other images, at the same location. For conditional sampling we use a multivariate normal distribution. For both sampling methods we use 10 samples to estimate p(c|x\i) (since no significant difference was observed with more samples). Note that all images are best viewed digitally and in color.
Our implementation is available at github.com/lmzintgraf/DeepVis-PredDiff.
4.1 IMAGENET: UNDERSTANDING HOW A DCNN MAKES DECISIONS
We use images from the ILSVRC challenge (Russakovsky et al., 2015) (a large dataset of natural images from 1000 categories) and three DCNNs: the AlexNet (Krizhevsky et al., 2012), the GoogLeNet (Szegedy et al., 2015) and the (16-layer) VGG network (Simonyan & Zisserman, 2014). We used the publicly available pre-trained models that were implemented using the deep learning framework caffe (Jia et al., 2014). Analyzing one image took us on average 20, 30 and 70 minutes for the respective classifiers AlexNet, GoogLeNet and VGG (using the GPU implementation of caffe and mini-batches with the standard settings of 10 samples and a window size of k = 10).
The results shown here are chosen from among a small set of images in order to show a range of behavior of the algorithm. The shown images are quite representative of the performance of the method in general. Examples on randomly selected images, including a comparison to the sensitivity analysis of Simonyan et al. (2013), can be seen in appendix A.
Figure 3: Visualization of the effects of marginal versus conditional sampling using the GoogLeNet classifier. The classifier makes correct predictions (ostrich and saxophone), and we show the evidence for (red) and against (blue) this decision at the output layer. We can see that conditional sampling gives more targeted explanations compared to marginal sampling. Also, marginal sampling assigns too much importance to pixels that are easily predictable conditioned on their neighboring pixels.
Figure 4: Visualization of how different window sizes influence the visualization result. We used the conditional sampling method and the AlexNet classifier with l = k + 4 and varying k. We can see that even when removing single pixels (k = 1), this has a noticeable effect on the classifier and more important pixels get a higher score. By increasing the window size we can get a more easily interpretable, smooth result until the image gets blurry for very large window sizes.
We start this section by demonstrating our proposed improvements (sections 3.1 - 3.3).
Marginal vs Conditional Sampling
Figure 3 shows visualizations of the spatial support for the highest scoring class, using marginal and conditional sampling (with k = 10 and l = 14). We can see that conditional sampling leads to results that are more refined in the sense that they concentrate more around the object. We can also see that marginal sampling leads to pixels being declared as important that are very easily predictable conditioned on their neighboring pixels (like in the saxophone example). Throughout our experiments, we have found that conditional sampling tends to give more specific and fine-grained results than marginal sampling. For the rest of our experiments, we therefore show results using conditional sampling only.
Multivariate Analysis
For ImageNet data, we have observed that setting k = 10 gives a good trade-off between sharp results and a smooth appearance. Figure 4 shows how different window sizes influence the resolution of the visualization. Surprisingly, removing only one pixel does have a measurable effect on the prediction, and the largest effect comes from sensitive pixels. We expected that removing only one pixel would not have any effect on the classification outcome, but apparently the classifier is sensitive even to these small changes. However, when using such a small window size, it is difficult to make sense of the sign information in the visualization. If we want to get a good impression of which parts of the image are evidence for/against a class, it is therefore better to use larger windows. If k is chosen too large however, the results tend to get blurry. Note that these results are not just simple averages of one another, but a multivariate approach is indeed necessary to observe the presented results.
# Deep Visualization of Hidden Network Layers
Our third main contribution is the extension of the method to neural networks; to understand the role of hidden layers in a DNN. Figure 5 shows how different feature maps in three different layers of the GoogLeNet react to the input of a tabby cat (see figure 6, middle image). For each feature map in a convolutional layer, we first compute the relevance of the input image for each hidden unit in that map. To estimate what the feature map as a whole is doing, we show the average of the relevance vectors over all units in that feature map. The first convolutional layer works with different types of simple image filters (e.g., edge detectors), and what we see is which parts of the input image respond
Figure 5: Visualization of feature maps from three different layers of the GoogLeNet (l.t.r.: "conv1/7x7_s2", "inception_3a/output", "inception_5b/output"), using conditional sampling and patch sizes k = 10 and l = 14 (see alg. 1). For each feature map in the convolutional layer, we first evaluate the relevance for every single unit, and then average the results over all the units in one feature map to get a sense of what the unit is doing as a whole. Red pixels activate a unit, blue pixels decreased the activation.
Figure 6: Visualization of three different feature maps, taken from the "inception_3a/output" layer of the GoogLeNet (from the middle of the network). Shown is the average relevance of the input features over all activations of the feature map. We used patch sizes k = 10 and l = 14 (see alg. 1). Red pixels activate a unit, blue pixels decreased the activation.
positively or negatively to these filters. The layer we picked from somewhere in the middle of the network is specialized to higher level features (like facial features of the cat). The activations of the last convolutional layer are very sparse across feature channels, indicating that these units are highly specialized.
To get a sense of what single feature maps in convolutional layers are doing, we can look at their visualization for different input images and look for patterns in their behavior. Figure 6 shows this for four different feature maps from a layer from the middle of the GoogLeNet network. We can directly see which kind of features the model has learned at this stage in the network. For example, one feature map is mostly activated by the eyes of animals (third row), and another is looking mostly at the background (last row).
Penultimate vs Output Layer
If we visualize the influence of the input features on the penultimate (pre-softmax) layer, we show only the evidence for/against this particular class, without taking other classes into consideration. After the softmax operation however, the values of the nodes are all interdependent: a drop in the probability for one class could be due to less evidence for it, or because a different class becomes more likely. Figure 7 compares visualizations for the last two layers. By looking at the top three scoring classes, we can see that the visualizations in the penultimate layer look very similar if the classes are similar (like different dog breeds). When looking at the output layer however, they look rather different. Consider the case of the elephants: the top three classes are different elephant subspecies, and the visualizations of the penultimate layer look similar since every subspecies can be identified by similar characteristics. But in the output layer, we can see how the classifier decides for one of the three types of elephants and against the others: the ears in this case are the crucial difference.
Figure 7: Visualization of the support for the top-three scoring classes in the penultimate and output layer. Next to the input image, the first row shows the results with respect to the penultimate layer; the second row with respect to the output layer. For each image, we additionally report the values of the units. We used the AlexNet with conditional sampling and patch sizes k = 10 and l = 14 (see alg. 1). Red pixels are evidence for a class, and blue against it.
Figure 8: Comparison of the prediction visualization of different DCNN architectures. For two input images, we show the results of the prediction difference analysis when using different neural networks - the AlexNet, GoogLeNet and VGG network.
Network Comparison
When analyzing how neural networks make decisions, we can also compare how different network architectures influence the visualization. Here, we tested our method on the AlexNet, the GoogLeNet and the VGG network. Figure 8 shows the results for the three different networks, on two input images. The AlexNet seems to rely more on contextual information (the sky in the balloon image), which could be attributed to it having the least complex architecture compared to the other two networks. It is also interesting to see that the VGG network deems the basket of the balloon as very important compared to all other pixels. The second highest scoring class in this case was a parachute - presumably, the network learned to not confuse a balloon with a parachute by detecting a square basket (and not a human).
4.2 MRI DATA: EXPLAINING CLASSIFIER DECISIONS IN MEDICAL IMAGING
To illustrate how our visualization method can also be useful in a medical domain, we show some experimental results on an MRI dataset of HIV and healthy patients. In such settings, it is crucial that the practitioner has some insight into the algorithm's decision when classifying a patient, to weigh this information and incorporate it in the overall diagnosis process.
The dataset used here is referred to as the COBRA dataset. It contains 3D MRIs from 100 HIV patients and 70 healthy individuals, included in the Academic Medical Center (AMC) in Amsterdam, The Netherlands. Of these subjects, diffusion weighted MRI data were acquired. Preprocessing of the data was performed with software developed in-house, using the HPCN-UvA Neuroscience Gateway and using resources of the Dutch e-Science Grid (Shahand et al., 2015). As a result, Fractional Anisotropy (FA) maps were computed. FA is sensitive to microstructural damage and therefore expected to be, on average, decreased in patients. Subjects were scanned on two 3.0 Tesla scanner systems, 121 subjects on a Philips Intera system and 39 on a Philips Ingenia system. Patients and controls were evenly distributed. FA images were spatially normalized to standard space (Andersson et al., 2007), resulting in volumes with 91 × 109 × 91 = 902,629 voxels.
We trained an L2-regularized logistic regression classifier on a subset of the MRI slices (slices 29-40 along the first axis) and on a balanced version of the dataset (by taking the first 70 samples of the HIV class), achieving an accuracy of 69.3% in a 10-fold cross-validation test. Analyzing one image took around half an hour (on a CPU, with k = 3 and l = 7, see algorithm 1). For conditional sampling, we also tried adding location information in equation (2), i.e., we split up the 3D image into a 20 × 20 × 20 grid and also conditioned on the index in that grid. We found that this slightly improved the interpretability of the results, since the pixel values in the special case of MRI scans do depend on spatial location as well.
Figure 9 (first row) shows one way via which the prediction difference results could be presented to a physician, for an HIV sample. By overlapping the prediction difference and the MRI image, the exact regions can be pointed out that are evidence for (red parts) or against (blue parts) the classifier's decision. The second row shows the results using the weights of the logistic regression classifier, which is a commonly used method in the neuroscientific literature. We can see that they are considerably noisier (in the sense that, compared to our method, the voxels relevant for the classification decisions are more scattered), and also, they are not specific to the given image. Figure 10 shows the visualization results for four healthy and four HIV samples. We can clearly see that the patterns for the two classes are distinct, and there is some pattern to the decision of the classifier, but which is still specific to the input image. Figure 11 shows the same (HIV) sample as in figure 9 along different axes, and figure 12 shows how the visualization changes with different patch sizes. We believe that both varying the slice and patch size can give different insights to a clinician, and in clinical practice, a 3D animation where these parameters can be adjusted would be very useful for analyzing the visualization result.
In general we can assume that the better the classifier, the closer the explanations for its decisions are to the true class difference. For clinical practice it is therefore crucial to have very good classifiers. This will increase computation time, but in many medical settings, longer waiting times for test results are common and worth the wait if the patient is not in an acute life-threatening condition (e.g., when predicting HIV or Alzheimer's from MRI scans, or in the field of cancer diagnosis and detection). The presented results here are for demonstration purposes of the visualization method, and we claim no medical validity. A thorough qualitative analysis incorporating expert knowledge was outside the scope of this paper.
# 5 FUTURE WORK
In our experiments, we used a simple multivariate normal distribution for conditional sampling. We can imagine that using more sophisticated generative models will lead to better results: pixels that are easily predictable by their surrounding are downweighted even more. However, this will also significantly increase the computational resources needed to produce the explanations. Similarly, we could try to modify equation (4) to get an even better approximation by using a conditional distribution that takes more information about the whole image into account (like adding spatial information for the MRI scans).
To make the method applicable for clinical analysis and practice, a better classification algorithm is required. Also, software that visualizes the results as an interactive 3D model will improve the usability of the system.
# 6 CONCLUSION
We presented a new method for visualizing deep neural networks that improves on previous methods by using a more powerful conditional, multivariate model. The visualization method shows which pixels of a specific input image are evidence for or against a node in the network. The signed information offers new insights - for research on the networks, as well as the acceptance and usability in domains like healthcare. While our method requires significant computational resources, real-time 3D visualization is possible when visualizations are pre-computed. With further optimization and powerful GPUs, pre-computation time can be reduced a lot further. In our experiments, we have presented several ways in which the visualization method can be put into use for analyzing how DCNNs make decisions.
Figure 9: Visualization of the support for the correct classification "HIV", using the prediction difference method and logistic regression weights. For an HIV sample, we show the results with the prediction difference (first row), and using the weights of the logistic regression classifier (second row), for slices 29 and 40 (along the first axis). Red are positive values, and blue negative. For each slice, the left image shows the original image, overlaid with the relevance values. The right image shows the original image with reversed colors and the relevance values. Relevance values are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.
Figure 10: Prediction difference visualization for different samples. The first four samples are of the class "healthy"; the last four of the class "HIV". All images show slice 39 (along the first axis). All samples are correctly classified, and the results show evidence for (red) and against (blue) this decision. Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.
Figure 11: Visualization results across different slices of the MRI image, using the same input image as shown in figure 9. Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.
Figure 12: How the patch size influences the visualization. For the input image (HIV sample, slice 39 along the first axis) we show the visualization with different patch sizes (k in alg. 1). Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum (for k = 2 it is 10%).
# ACKNOWLEDGMENTS
This work was supported by an AWS in Education Grant award. We thank Facebook and Google for financial support, and our reviewers for their time and valuable, constructive feedback.
This work was also in part supported by: Innoviris, the Brussels Institute for Research and Innovation, Brussels, Belgium; the Nuts-OHRA Foundation (grant no. 1003-026), Amsterdam, The Netherlands; The Netherlands Organization for Health Research and Development (ZonMW) together with AIDS Fonds (grant no 300020007 and 2009063). Additional unrestricted scientific grants were received from Gilead Sciences, ViiV Healthcare, Janssen Pharmaceutica N.V., Bristol-Myers Squibb, Boehringer Ingelheim, and Merck&Co.
We thank Barbara Elsenga, Jane Berkel, Sandra Moll, Maja Totté, and Marjolein Martens for running the AGEhIV study program and capturing our data with such care and passion. We thank Yolanda Ruijs-Tiggelman, Lia Veenenberg-Benschop, Sima Zaheri, and Mariska Hillebregt at the HIV Monitoring Foundation for their contributions to data management. We thank Aafien Henderiks and Hans-Erik Nobel for their advice on logistics and organization at the Academic Medical Center. We thank all HIV-physicians and HIV-nurses at the Academic Medical Center for their efforts to include the HIV-infected participants into the AGEhIV Cohort Study, and the Municipal Health Service Amsterdam personnel for their efforts to include the HIV-uninfected participants into the AGEhIV Cohort Study. We thank all study participants without whom this research would not be possible.
AGEhIV Cohort Study Group. Scientific oversight and coordination: P. Reiss (principal investigator), F.W.N.M. Wit, M. van der Valk, J. Schouten, K.W. Kooij, R.A. van Zoest, E. Verheij, B.C. Elsenga (Academic Medical Center (AMC), Department of Global Health and Amsterdam Institute for Global Health and Development (AIGHD)). M. Prins (co-principal investigator), M.F. Schim van der Loeff, M. Martens, S. Moll, J. Berkel, M. Totté, G.R. Visser, L. May, S. Kovalev, A. Newsum, M. Dijkstra (Public Health Service of Amsterdam, Department of Infectious Diseases). Datamanagement: S. Zaheri, M.M.J. Hillebregt, Y.M.C. Ruijs, D.P. Benschop, A. el Berkaoui (HIV Monitoring Foundation). Central laboratory support: N.A. Kootstra, A.M. Harskamp-Holwerda, I. Maurer, T. Booiman, M.M. Mangas Ruiz, A.F. Girigorie, B. Boeser-Nunnink (AMC, Laboratory for Viral Immune Pathogenesis and Department of Experimental Immunology). Project management and administrative support: W. Zikkenheiner, F.R. Janssen (AIGHD). Participating HIV physicians and nurses: S.E. Geerlings, M.H. Godfried, A. Goorhuis, J.W.R. Hovius, J.T.M. van der Meer, F.J.B. Nellen, T. van der Poll, J.M. Prins, P. Reiss, M. van der Valk, W.J. Wiersinga, M. van Vugt, G. de Bree, F.W.N.M. Wit; J. van Eden, A.M.H. van Hes, M. Mutschelknauss, H.E. Nobel, F.J.J. Pijnappel, M. Bijsterveld, A. Weijsenfeld, S. Smalhout (AMC, Division of Infectious Diseases). Other collaborators: J. de Jong, P.G. Postema (AMC, Department of Cardiology); P.H.L.T. Bisschop, M.J.M. Serlie (AMC, Division of Endocrinology and Metabolism); P. Lips (Free University Medical Center Amsterdam); E. Dekker (AMC, Department of Gastroenterology); N. van der Velde (AMC, Division of Geriatric Medicine); J.M.R. Willemsen, L. Vogt (AMC, Division of Nephrology); J. Schouten, P. Portegies, B.A. Schmand, G.J. Geurtsen (AMC, Department of Neurology); F.D. Verbraak, N. Demirkaya (AMC, Department of Ophthalmology); I. Visser (AMC, Department of Psychiatry); A. Schadé (Free University Medical Center Amsterdam, Department of Psychiatry); P.T. Nieuwkerk, N. Langebeek (AMC, Department of Medical Psychology); R.P. van Steenwijk, E. Dijkers (AMC, Department of Pulmonary medicine); C.B.L.M. Majoie, M.W.A. Caan, T. Su (AMC, Department of Radiology); H.W. van Lunsen, M.A.F. Nievaard (AMC, Department of Gynaecology); B.J.H. van den Born, E.S.G. Stroes (AMC, Division of Vascular Medicine); W.M.C. Mulder (HIV Vereniging Nederland).
# REFERENCES
Jesper LR Andersson, Mark Jenkinson, and Stephen Smith. Non-linear optimisation. fmrib technical report tr07ja1. University of Oxford FMRIB Centre: Oxford, UK, 2007.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015.

Christine Ecker, Andre Marquand, Janaina Mourão-Miranda, Patrick Johnston, Eileen M Daly, Michael J Brammer, Stefanos Maltezos, Clodagh M Murphy, Dene Robertson, Steven C Williams, et al. Describing the brain in autism in five dimensions—magnetic resonance imaging-assisted diagnosis of autism spectrum disorder using a multiparameter classification approach. The Journal of Neuroscience, 30(32):10612–10623, 2010.
Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Dept. IRO, Université de Montréal, Tech. Rep, 4323, 2009.
Bilwaj Gaonkar and Christos Davatzikos. Analytic estimation of statistical significance maps for support vector machine based multi-variate image analysis and classification. NeuroImage, 78:270–283, 2013.
Stefan Haufe, Frank Meinecke, Kai Görgen, Sven Dähne, John-Dylan Haynes, Benjamin Blankertz, and Felix Bießmann. On the interpretation of weight vectors of linear models in multivariate neuroimaging. Neuroimage, 87:96–110, 2014.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Stefan Klöppel, Cynthia M Stonnington, Carlton Chu, Bogdan Draganski, Rachael I Scahill, Jonathan D Rohrer, Nick C Fox, Clifford R Jack, John Ashburner, and Richard SJ Frackowiak. Automatic classification of MR scans in Alzheimer's disease. Brain, 131(3):681–689, 2008.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Janaina Mourao-Miranda, Arun LW Bokde, Christine Born, Harald Hampel, and Martin Stetter. Classifying brain states and determining the discriminating activation patterns: Support vector machine on functional mri data. NeuroImage, 28(4):980â995, 2005.
Marko Robnik-Šikonja and Igor Kononenko. Explaining classifications for individual instances. Knowledge and Data Engineering, IEEE Transactions on, 20(5):589–600, 2008.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â252, 2015. doi: 10.1007/s11263-015-0816-y.
Shayan Shahand, Ammar Benabdelkader, Mohammad Mahdi Jaghoori, Mostapha al Mourabit, Jordi Huguet, Matthan WA Caan, Antoine HC Kampen, and Sílvia D Olabarriaga. A data-centric neuroscience gateway: design, implementation, and experiences. Concurrency and Computation: Practice and Experience, 27(2):489–506, 2015.
Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713, 2016.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1â9, 2015.
Ze Wang, Anna R Childress, Jiongjiong Wang, and John A Detre. Support vector machine learning-based fmri data group analysis. NeuroImage, 36(4):1139â1151, 2007.
Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.
Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision—ECCV 2014, pp. 818–833. Springer, 2014.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921â2929, 2016.
A RANDOM RESULTS
Figure 13: Results on 34 randomly chosen ImageNet images. Middle columns: original image; left columns: sensitivity maps (Simonyan et al., 2013) where the red pixels indicate high sensitivity, and white pixels mean no sensitivity (note that we show the absolute values of the partial derivatives, since the sign cannot be interpreted like in our method); right columns: results from our method. For both methods, we visualize the results with respect to the correct class, which is given above the image. In brackets we see how the classifier ranks this class, i.e., a (1) means it was correctly classified, whereas a (4) means that it was misclassified, and the correct class was ranked fourth. For our method, red areas show evidence for the correct class, and blue areas show evidence against the class (e.g., the scuba diver looks more like a tea pot to the classifier).
| {
"id": "1506.06579"
} |
1702.03044 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
are struggled in noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models have improved accuracy than the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 |
Published as a conference paper at ICLR 2017
# INCREMENTAL NETWORK QUANTIZATION: TOWARDS LOSSLESS CNNS WITH LOW-PRECISION WEIGHTS
Aojun Zhou*, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen Intel Labs China {aojun.zhou, anbang.yao, yiwen.guo, lin.x.xu, yurong.chen}@intel.com
# ABSTRACT
This paper presents incremental network quantization (INQ), a novel method targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing the zero value, and the remaining 4 bits represent at most 16 different values for the powers of two)1, our models have improved accuracy over the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against the 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs applicable on mobile or embedded devices. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
# 1 INTRODUCTION
Deep convolutional neural networks (CNNs) have demonstrated record breaking results on a variety of computer vision tasks such as image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015), face recognition (Taigman et al., 2014; Sun et al., 2014), semantic segmentation (Long et al., 2015; Chen et al., 2015a) and object detection (Girshick, 2015; Ren et al., 2015). Regardless of the availability of significantly improved training resources such as abundant annotated data, powerful computational platforms and diverse training frameworks, the promising results of deep CNNs are mainly attributed to the large number of learnable parameters, ranging from tens of millions to even hundreds of millions. Recent progress further shows clear evidence that CNNs could easily enjoy the accuracy gain from the increased network depth and width (He et al., 2016; Szegedy et al., 2015; 2016). However, this in turn lays heavy burdens on the memory and other computational resources.

*This work was done when Aojun Zhou was an intern at Intel Labs China, supervised by Anbang Yao who proposed the original idea and is responsible for correspondence. The first three authors contributed equally to the writing of the paper.
1 This notation applies to our method throughout the paper.
For instance, ResNet-152, a specific instance of the latest residual network architecture winning the ImageNet classification challenge in 2015, has a model size of about 230 MB and needs to perform about 11.3 billion FLOPs to classify a 224 × 224 image crop. Therefore, it is very challenging to deploy deep CNNs on devices with limited computation and power budgets.
Substantial efforts have been made towards the speed-up and compression of CNNs during training, feed-forward test or both of them. Among existing methods, the category of network quantization methods attracts great attention from researchers and developers. Some network quantization works try to compress pre-trained full-precision CNN models directly. Gong et al. (2014) address the storage problem of AlexNet (Krizhevsky et al., 2012) with vector quantization techniques. By replacing the weights in each of the three fully connected layers with respective floating-point centroid values obtained from clustering, they can get over 20× model compression at about 1% loss in top-5 recognition rate. HashedNet (Chen et al., 2015b) uses a hash function to randomly map pre-trained weights into hash buckets, and all the weights in the same hash bucket are constrained to share a single floating-point value. In HashedNet, only the fully connected layers of several shallow CNN models are considered. For better compression, Han et al. (2016) present the deep compression method which combines pruning (Han et al., 2015), vector quantization and Huffman coding, and reduce the model storage by 35× on AlexNet and 49× on VGG-16 (Simonyan & Zisserman, 2015). Vanhoucke et al. (2011) use an SSE 8-bit fixed-point implementation to improve the computation of neural networks on modern Intel x86 CPUs in feed-forward test, yielding 3× speed-up over an optimized floating-point baseline. Training CNNs by substituting the 32-bit floating-point representation with the 16-bit fixed-point representation has also been explored in Gupta et al. (2015). Other seminal works attempt to restrict CNNs to low-precision versions during the training phase. Soudry et al. (2014) propose expectation backpropagation (EBP) to estimate the posterior distribution of deterministic network weights. With EBP, the network weights can be constrained to +1 and -1 during feed-forward test in a probabilistic way. BinaryConnect (Courbariaux et al., 2015) further extends the idea behind EBP to binarize network weights during the training phase directly. It has two versions of network weights: floating-point and binary. The floating-point version is used as the reference for weight binarization. BinaryConnect achieves state-of-the-art accuracy using shallow CNNs for small datasets such as MNIST (LeCun et al., 1998) and CIFAR-10. Later on, a series of efforts have been invested to train CNNs with low-precision weights, low-precision activations and even low-precision gradients, including but not limited to BinaryNet (Courbariaux et al., 2016), XNOR-Net (Rastegari et al., 2016), ternary weight networks (TWN) (Li & Liu, 2016), DoReFa-Net (Zhou et al., 2016) and quantized neural networks (QNN) (Hubara et al., 2016).
Despite these tremendous advances, CNN quantization still remains an open problem due to two critical issues which have not been well resolved yet, especially under scenarios of using low-precision weights for quantization. The first issue is the non-negligible accuracy loss of CNN quantization methods, and the other issue is the increased number of training iterations needed for ensuring convergence. In this paper, we attempt to address these two issues by presenting a novel incremental network quantization (INQ) method.
In our INQ, there is no assumption on the CNN architecture, and its basic goal is to efficiently convert any pre-trained full-precision (i.e., 32-bit floating-point) CNN model into a low-precision version whose weights are constrained to be either powers of two or zero. The advantage of such low-precision models is that the original floating-point multiplication operations can be replaced by cheaper binary bit shift operations on dedicated hardware like FPGAs. We noticed that most existing network quantization methods adopt a global strategy in which all the weights are simultaneously converted to low-precision ones (that are usually in floating-point types). That is, they have not considered the different importance of network weights, leaving limited room to retain network accuracy. In sharp contrast to existing methods, our INQ handles the model accuracy drop from network quantization very carefully. To be more specific, it incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition uses a pruning-inspired measure (Han et al., 2015; Guo et al., 2016) to divide the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in our INQ. The weights in the first group are quantized to be either powers of two or zero by a variable-length encoding method, forming a low-precision base for the original model. The weights in the other group are re-trained while keeping the quantized weights fixed, compensating for the accuracy loss resulting from the quantization. Furthermore, these three operations are repeated on the latest re-trained weight group in an iterative manner until all the weights are quantized, acting as an incremental network quantization and accuracy enhancement procedure (as illustrated in Figure 1).
Figure 1: An overview of our incremental network quantization method. (a) Pre-trained full-precision model used as a reference. (b) Model update with three proposed operations: weight partition, group-wise quantization (green connections) and re-training (blue connections). (c) Final low-precision model with all the weights constrained to be either powers of two or zero. In the figure, operation (1) represents a single run of (b), and operation (2) denotes the procedure of repeating operation (1) on the latest re-trained weight group until all the non-zero weights are quantized. Our method does not lead to accuracy loss when using 5-bit, 4-bit and even 3-bit approximations in network quantization. For better visualization, here we just use a 3-layer fully connected network as an illustrative example, and the newly re-trained weights are divided into two disjoint groups of the same size at each run of operation (1) except the last run, which only performs quantization on the re-trained floating-point weights occupying 12.5% of the model weights.
The main insight of our INQ is that a compact combination of the proposed weight partition, group-wise quantization and re-training operations has the potential to get a lossless low-precision CNN model from any full-precision reference. We conduct extensive experiments on the ImageNet large scale classification task using almost all known deep CNN architectures to validate the effectiveness of our method. We show that: (1) For AlexNet, VGG-16, GoogleNet and ResNets with 5-bit quantization, INQ achieves improved accuracy in comparison with their respective full-precision baselines. The absolute top-1 accuracy gain ranges from 0.13% to 2.28%, and the absolute top-5 accuracy gain is in the range of 0.23% to 1.65%. (2) INQ has the property of easy convergence in training. In general, re-training with less than 8 epochs could consistently generate a lossless model with 5-bit weights in the experiments. (3) Taking ResNet-18 as an example, our quantized models with 4-bit, 3-bit and 2-bit ternary weights also have improved or very similar accuracy compared with its 32-bit floating-point baseline. (4) Taking AlexNet as an example, the combination of our network pruning and INQ outperforms the deep compression method (Han et al., 2016) with significant margins.
# 2 INCREMENTAL NETWORK QUANTIZATION
In this section, we clarify the insight of our INQ, describe its key components, and detail its implementation.
2.1 WEIGHT QUANTIZATION WITH VARIABLE-LENGTH ENCODING
Suppose a pre-trained full-precision (i.e., 32-bit floating-point) CNN model can be represented by {Wl : 1 ≤ l ≤ L}, where Wl denotes the weight set of the lth layer, and L denotes the number of learnable layers in the model. To simplify the explanation, we only consider convolutional layers and fully connected layers. For CNN models like AlexNet, VGG-16, GoogleNet and ResNets as tested in this paper, Wl can be a 4D tensor for a convolutional layer, or a 2D matrix for a fully connected layer. For simplicity, here the dimension difference is not considered in the expression. Given a pre-trained full-precision CNN model, the main goal of our INQ is to convert all 32-bit floating-point weights to be either powers of two or zero without loss of model accuracy. Besides, we also attempt to explore the limit of the expected bit-width under the premise of guaranteeing lossless network quantization. Here, we start with our basic network quantization method on how to convert Wl into a low-precision version Ŵl, where each of its entries is chosen from
Pl = {±2^n1, · · · , ±2^n2, 0},   (1)
where n1 and n2 are two integer numbers, and they satisfy n2 ≤ n1. Mathematically, n1 and n2 help to bound Pl in the sense that its non-zero elements are constrained to be in the range of either [−2^n1, −2^n2] or [2^n2, 2^n1]. That is, network weights with absolute values smaller than 2^n2 will be pruned away (i.e., set to zero) in the final low-precision model. Obviously, the problem is how to determine n1 and n2. In our INQ, the expected bit-width b for storing the indices in Pl is set beforehand, so the only hyper-parameter that must be determined is n1, because n2 can be naturally computed once b and n1 are available. Here, n1 is calculated by using a tricky yet practically effective formula as
n1 = floor(log2(4s/3)),   (2)

where floor(·) indicates the round-down operation and s is calculated by using

s = max(abs(Wl)),   (3)
where abs(·) is an element-wise operation and max(·) outputs the largest element of its input. In fact, Equation (2) helps to match the rounding power of 2 for s, and it could be easily implemented in practical programming. After n1 is obtained, n2 can be naturally determined as n2 = n1 + 1 − 2^(b−1)/2. For instance, if b = 3 and n1 = −1, it is easy to get n2 = −2. Once Pl is determined, we further use the ladder of powers to convert every entry of Wl into a low-precision one by using
Ŵl(i, j) = β sgn(Wl(i, j)),  if (α + β)/2 ≤ abs(Wl(i, j)) < 3β/2,
Ŵl(i, j) = 0,  otherwise,   (4)
where α and β are two adjacent elements in the sorted Pl, making the above equation a numerical rounding to the quantum values. It should be emphasized that the factor 4/3 in Equation (2) is set to make sure that all the elements in Pl correspond with the quantization rule defined in Equation (4). In other words, the factor 4/3 in Equation (2) highly correlates with the factor 3/2 in Equation (4).
Here, an important thing we want to clarify is the definition of the expected bit-width b. Taking 5-bit quantization as an example, since the zero value cannot be written as a power of two, we use 1 bit to represent the zero value, and the remaining 4 bits to represent at most 16 different values for the powers of two. That is, the number of candidate quantum values is at most 2^(b−1) + 1, so our quantization method actually adopts a variable-length encoding scheme. It is clear that the quantization described above is performed in a linear scale. An alternative solution is to perform the quantization in the log scale. Although it may also be effective, it should be a little bit more difficult in implementation and may cause some extra computational overhead in comparison to our method.
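To make the quantization rule concrete, the following short sketch (our own illustration rather than the authors' released code, with NumPy as an assumed dependency) spells out one possible implementation of Equations (1)-(4) for a single layer; the handling of the smallest bin reflects our reading of Equation (4) with α = 0 as the element of the sorted Pl adjacent to 2^n2:

```python
import numpy as np

def inq_quantize(W, b):
    """Round weights to powers of two or zero, per Equations (1)-(4)."""
    s = np.max(np.abs(W))
    n1 = int(np.floor(np.log2(4.0 * s / 3.0)))       # Equation (2)
    n2 = n1 + 1 - (2 ** (b - 1)) // 2                # n2 = n1 + 1 - 2^(b-1)/2
    Q = np.zeros_like(W)                             # default: pruned to zero
    for e in range(n2, n1 + 1):
        beta = 2.0 ** e                              # candidate quantum
        alpha = 0.0 if e == n2 else 2.0 ** (e - 1)   # adjacent smaller element
        hit = ((alpha + beta) / 2 <= np.abs(W)) & (np.abs(W) < 3 * beta / 2)
        Q[hit] = beta * np.sign(W[hit])              # Equation (4)
    return Q

# toy usage: 5-bit quantization of random weights
W = np.random.randn(4, 4) * 0.1
print(inq_quantize(W, b=5))
```

Note that Equation (2) guarantees the largest-magnitude weight falls inside the top bin, so no extra clamping is needed in this sketch.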
2.2 INCREMENTAL QUANTIZATION STRATEGY
We can naturally use the method described above to quantize any pre-trained full-precision CNN model. However, noticeable accuracy loss appeared in the experiments when using small bit-width values (e.g., 5-bit, 4-bit, 3-bit and 2-bit).
In the literature, there are many existing network quantization works such as HashedNet (Chen et al., 2015b), vector quantization (Gong et al., 2014), fixed-point representations (Vanhoucke et al., 2011; Gupta et al., 2015), BinaryConnect (Courbariaux et al., 2015), BinaryNet (Courbariaux et al., 2016), XNOR-Net (Rastegari et al., 2016), TWN (Li & Liu, 2016), DoReFa-Net (Zhou et al., 2016) and QNN (Hubara et al., 2016). Similar to our basic network quantization method, they also suffer from non-negligible accuracy loss on deep CNNs, especially when applied to the ImageNet large scale classification dataset. For all these methods, a common fact is that they adopt a global strategy in which all the weights are simultaneously converted into low-precision ones, which in turn causes accuracy loss. Compared with the methods focusing on pre-trained models, accuracy loss becomes worse for methods such as XNOR-Net, TWN, DoReFa-Net and QNN which intend to train low-precision CNNs from scratch.
Recall that our main goal is to achieve lossless low-precision quantization for any pre-trained full-precision CNN model with no assumption on its architecture. To this end, our INQ makes a special handling of the strategy for suppressing the resulting quantization loss in model accuracy.
Figure 2: Result illustrations. First row: results from the 1st iteration of the proposed three operations. The top left cube illustrates the weight partition operation generating two disjoint groups, the middle image illustrates the quantization operation on the first weight group (green cells), and the top right cube illustrates the re-training operation on the second weight group (light blue cells). Second row: results from the 2nd, 3rd and 4th iterations of the INQ. In the figure, the accumulated portion of the weights which have been quantized goes from 50%→75%→87.5%→100%.
We are partially inspired by the latest progress in network pruning (Han et al., 2015; Guo et al., 2016). In these methods, the accuracy loss from removing less important network weights of a pre-trained neural network model could be well compensated by following re-training steps. Therefore, we conjecture that the nature of changing network weight importance is critical to achieving lossless network quantization.
Based on this assumption, we present INQ, which incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition is to divide the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in our INQ. The weights in the first group are responsible for forming a low-precision base for the original model, thus they are quantized by using Equation (4). The weights in the second group adapt to compensate for the loss in model accuracy, thus they are the ones to be re-trained. Once the first run of the quantization and re-training operations is finished, all the three operations are further conducted on the second weight group in an iterative manner, until all the weights are converted to be either powers of two or zero, acting as an incremental network quantization and accuracy enhancement procedure. As a result, accuracy loss under low-precision CNN quantization can be well suppressed by our INQ. Illustrative results at iterative steps of our INQ are provided in Figure 2. For the lth layer, weight partition can be defined as
A(1)l ∪ A(2)l = {Wl(i, j)}, and A(1)l ∩ A(2)l = ∅,   (5)
where A(1)l denotes the first weight group that needs to be quantized, and A(2)l denotes the other weight group that needs to be re-trained. We leave the strategies for group partition to be chosen in the experiment section. Here, we define a binary matrix Tl to help distinguish the above two categories of weights. That is, Tl(i, j) = 0 means Wl(i, j) ∈ A(1)l, and Tl(i, j) = 1 means Wl(i, j) ∈ A(2)l.
2.3 INCREMENTAL NETWORK QUANTIZATION ALGORITHM
Now, we come to the training method. Taking the lth layer as an example, the basic optimization problem of making its weights to be either powers of two or zero can be expressed as

min_Wl E(Wl) = L(Wl) + λR(Wl)
s.t. Wl(i, j) ∈ Pl, 1 ≤ l ≤ L,   (6)
where L(Wl) is the network loss, R(Wl) is the regularization term, λ is a positive coefficient, and the constraint term indicates that each weight entry Wl(i, j) should be chosen from the set Pl consisting of a fixed number of values that are powers of two, plus zero. Directly solving the above optimization problem by training from scratch is challenging, since it can easily run into convergence problems.
By performing the weight partition and group-wise quantization operations beforehand, the optimization problem defined in (6) can be reshaped into an easier version. That is, we only need to optimize the following objective function

min_Wl E(Wl) = L(Wl) + λR(Wl)
s.t. Wl(i, j) ∈ Pl, if Tl(i, j) = 0, 1 ≤ l ≤ L,   (7)
where Pl is determined by the group-wise quantization operation, and the binary matrix Tl acts as a mask which is determined by the weight partition operation. Since Pl and Tl are known, the optimization problem (7) can be solved using the popular stochastic gradient descent (SGD) method. That is, in INQ, we can get the update scheme for the re-training as
Wl(i, j) ← Wl(i, j) − γ (∂E/∂Wl(i, j)) Tl(i, j),   (8)
where γ is a positive learning rate. Note that the binary matrix Tl forces zero update to the weights that have been quantized. That is, only the weights that still have floating-point values are updated, akin to the latest pruning methods (Han et al., 2015; Guo et al., 2016) in which only the weights that are not currently removed are re-trained to enhance network accuracy. The whole procedure of our INQ is summarized as Algorithm 1.
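In an array-based framework, Equation (8) amounts to a single masked update; a minimal sketch (ours, assuming the gradient ∂E/∂Wl is available as an array of the same shape as Wl):

```python
def inq_sgd_step(W, grad_W, T, lr):
    # Equation (8): entries with T(i, j) = 0 are already quantized and stay
    # frozen; only the remaining floating-point group is updated.
    return W - lr * grad_W * T
```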
We would like to highlight that the merits of our INQ are in three aspects: (1) Weight partition introduces importance-aware weight quantization. (2) Group-wise weight quantization introduces much less accuracy loss than simultaneously quantizing all the network weights, thus giving re-training larger room to recover model accuracy. (3) By integrating the operations of weight partition, group-wise quantization and re-training into a nested loop, our INQ has the potential to obtain a lossless low-precision CNN model from the pre-trained full-precision reference.
# Algorithm 1 Incremental network quantization for lossless CNNs with low-precision weights.
Input: X: the training data; {Wl : 1 ≤ l ≤ L}: the pre-trained full-precision CNN model; {σ1, σ2, · · · , σN}: the accumulated portions of weights quantized at iterative steps
Output: {Ŵl : 1 ≤ l ≤ L}: the final low-precision model with the weights constrained to be either powers of two or zero
1: Initialize A(1)l ← ∅, A(2)l ← {Wl(i, j)}, Tl ← 1, for 1 ≤ l ≤ L
2: for n = 1, 2, . . . , N do
3: Reset the base learning rate and the learning policy
4: According to σn, perform layer-wise weight partition and update A(1)l, A(2)l and Tl
5: Based on A(1)l, determine Pl layer-wisely
6: Quantize the weights in A(1)l by Equation (4) layer-wisely
7: Calculate the feed-forward loss, and update the weights in {A(2)l : 1 ≤ l ≤ L} by Equation (8)
8: end for
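Under the pruning-inspired partition used later in Section 3, Algorithm 1 can be sketched compactly as follows (our simplification; `inq_quantize` is the helper from the sketch in Section 2.1, and `retrain` is a placeholder for the SGD re-training phase, which must apply the masked update of Equation (8)):

```python
import numpy as np

def inq_train(W_layers, sigmas, b, retrain):
    T = [np.ones_like(W) for W in W_layers]          # 1 = still re-trainable
    for sigma in sigmas:                             # accumulated portions
        for l, W in enumerate(W_layers):
            k = int(sigma * W.size)                  # weights quantized so far
            thresh = np.sort(np.abs(W).ravel())[-k]  # layer-wise threshold
            newly = (np.abs(W) >= thresh) & (T[l] == 1)
            Q = inq_quantize(W, b)                   # powers of two or zero
            W[newly] = Q[newly]                      # group-wise quantization
            T[l][newly] = 0.0                        # freeze quantized group
        retrain(W_layers, T)                         # recover model accuracy
    return W_layers
```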
# 3 EXPERIMENTAL RESULTS
To analyze the performance of our INQ, we perform extensive experiments on the ImageNet large scale classification task, which is known as the most challenging image classification benchmark so far. The ImageNet dataset has about 1.2 million training images and 50 thousand validation images. Each image is annotated as one of 1000 object classes. We apply our INQ to AlexNet, VGG-16, GoogleNet, ResNet-18 and ResNet-50, covering almost all known deep CNN architectures. Using the center crops of validation images, we report the results with two standard measures: top-1 error rate and top-5 error rate. For fair comparison, all pre-trained full-precision (i.e., 32-bit floating-point) CNN models except ResNet-18 are taken from the Caffe model zoo2. Note that He et al. (2016) do not release their pre-trained ResNet-18 model to the public, so we use a publicly available re-implementation by Facebook3. Since our method is implemented with Caffe, we make use of an open source tool4 to convert the pre-trained ResNet-18 model from Torch to Caffe.
3.1 RESULTS ON IMAGENET
Table 1: Our INQ well converts diverse full-precision deep CNN models (including AlexNet, VGG-16, GoogleNet, ResNet-18 and ResNet-50) to 5-bit low-precision versions with consistently improved model accuracy.
| Network | Bit-width | Top-1 error | Top-5 error | Decrease in top-1/top-5 error |
|---|---|---|---|---|
| AlexNet ref | 32 | 42.76% | 19.77% | |
| AlexNet | 5 | 42.61% | 19.54% | 0.15%/0.23% |
| VGG-16 ref | 32 | 31.46% | 11.35% | |
| VGG-16 | 5 | 29.18% | 9.70% | 2.28%/1.65% |
| GoogleNet ref | 32 | 31.11% | 10.97% | |
| GoogleNet | 5 | 30.98% | 10.72% | 0.13%/0.25% |
| ResNet-18 ref | 32 | 31.73% | 11.31% | |
| ResNet-18 | 5 | 31.02% | 10.90% | 0.71%/0.41% |
| ResNet-50 ref | 32 | 26.78% | 8.76% | |
| ResNet-50 | 5 | 25.19% | 7.55% | 1.59%/1.21% |
Setting the expected bit-width to 5, the first set of experiments is performed to verify the efficacy of our INQ on different CNN architectures. Regarding weight partition, there are several candidate strategies, which we explored in our previous work on efficient network pruning (Guo et al., 2016). In Guo et al. (2016), we found random partition and pruning-inspired partition to be the two best choices compared with the others. Thus in this paper, we directly compare these two strategies for weight partition. In the random strategy, the weights in each layer of any pre-trained full-precision deep CNN model are randomly split into two disjoint groups. In the pruning-inspired strategy, the weights are divided into two disjoint groups by comparing their absolute values with layer-wise thresholds which are automatically determined by a given splitting ratio. Here we directly use the pruning-inspired strategy, and the experimental results in Section 3.2 will show why. After re-training with no more than 8 epochs over each pre-trained full-precision model, we obtain the results shown in Table 1. It can be concluded that the 5-bit CNN models generated by our INQ show consistently improved top-1 and top-5 recognition rates compared with their respective full-precision references. Parameter settings are described below.
AlexNet: AlexNet has 5 convolutional layers and 3 fully-connected layers. We set the accumulated portions of quantized weights at iterative steps as {0.3, 0.6, 0.8, 1}, the batch size as 256, the weight decay as 0.0005, and the momentum as 0.9.
VGG-16: Compared with AlexNet, VGG-16 has 13 convolutional layers and more parameters. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 32, the weight decay as 0.0005, and the momentum as 0.9.
2 https://github.com/BVLC/caffe/wiki/Model-Zoo
3 https://github.com/facebook/fb.resnet.torch/tree/master/pretrained
4 https://github.com/zhanghang1989/fb-caffe-exts
GoogleNet: Compared with AlexNet and VGG-16, GoogleNet is more difficult to quantize due to a smaller number of parameters and the increased network width. We set the accumulated portions of quantized weights at iterative steps as {0.2, 0.4, 0.6, 0.8, 1}, the batch size as 80, the weight decay as 0.0002, and the momentum as 0.9.
ResNet-18: Different from the above three networks, ResNets have batch normalization layers and relieve the vanishing gradient problem by using shortcut connections. We first test the 18-layer version for exploratory purposes and test the 50-layer version later on. The network architectures of ResNet-18 and ResNet-34 are very similar. The only difference is the number of filters in every convolutional layer. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 80, the weight decay as 0.0005, and the momentum as 0.9.
ResNet-50: Besides significantly increased network depth, ResNet-50 has a more complex network architecture in comparison to ResNet-18. However, regarding network architecture, ResNet-50 is very similar to ResNet-101 and ResNet-152. The only difference is the number of filters in every convolutional layer. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 32, the weight decay as 0.0005, and the momentum as 0.9.
3.2 ANALYSIS OF WEIGHT PARTITION STRATEGIES
In our INQ, the first operation is weight partition, whose result directly affects the following group-wise quantization and re-training operations. Therefore, the second set of experiments is conducted to analyze two candidate strategies for weight partition. As mentioned in the previous section, we use the pruning-inspired strategy for weight partition. Unlike the random strategy, in which all the weights have equal probability to fall into the two disjoint groups, the pruning-inspired strategy considers the weights with larger absolute values to be more important than the smaller ones for forming a low-precision base for the original CNN model. We use ResNet-18 as a test case to compare the performance of these two strategies. In the experiments, the parameter settings are completely the same as described in Section 3.1. We set 4 epochs for weight re-training. Table 2 summarizes the results of our INQ with 5-bit quantization. It can be seen that our INQ achieves a top-1 error rate of 32.11% and a top-5 error rate of 11.73% by using random partition. Comparatively, pruning-inspired partition brings 1.09% and 0.83% decreases in top-1 and top-5 error rates, respectively. Apparently, pruning-inspired partition is better than random partition, and this is the reason why we use it in this paper. For future work, weight partition based on quantization error could also be an option worth exploring.
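For reference, the two partition rules differ only in how the layer-wise selection mask is built; the following hypothetical helpers (our own sketch, not the authors' code, with `ratio` denoting the layer-wise splitting ratio) illustrate both:

```python
import numpy as np

def partition_random(W, ratio, rng):
    # random strategy: every weight is equally likely to be quantized now
    return rng.random(W.shape) < ratio

def partition_pruning_inspired(W, ratio):
    # pruning-inspired strategy: quantize the largest-magnitude weights first
    k = int(ratio * W.size)
    thresh = np.sort(np.abs(W).ravel())[-k]   # layer-wise threshold
    return np.abs(W) >= thresh
```

The returned boolean mask selects the group A(1)l to be quantized; the binary matrix Tl is its complement.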
Table 2: Comparison of two different strategies for weight partition on ResNet-18.
| Strategy | Bit-width | Top-1 error | Top-5 error |
|---|---|---|---|
| Random partition | 5 | 32.11% | 11.73% |
| Pruning-inspired partition | 5 | 31.02% | 10.90% |
3.3 THE TRADE-OFF BETWEEN EXPECTED BIT-WIDTH AND MODEL ACCURACY
The third set of experiments is performed to explore the limit of the expected bit-width under which our INQ can still achieve lossless network quantization. Similar to the second set of experiments, we also use ResNet-18 as a test case, and the parameter settings for the batch size, the weight decay and the momentum are completely the same. Finally, lower-precision models with 4-bit, 3-bit and even 2-bit ternary weights are generated for comparisons. As the expected bit-width goes down, the number of candidate quantum values decreases significantly, thus we shall increase the number of iterative steps accordingly to enhance the accuracy of the final low-precision model. Specifically, we set the accumulated portions of quantized weights at iterative steps as {0.3, 0.5, 0.8, 0.9, 0.95, 1}, {0.2, 0.4, 0.6, 0.7, 0.8, 0.9, 0.95, 1} and {0.2, 0.4, 0.6, 0.7, 0.8, 0.85, 0.9, 0.95, 0.975, 1} for the 4-bit, 3-bit and 2-bit ternary models, respectively. The required number of epochs also increases when the expected bit-width goes down, and it reaches 30 when training our 2-bit ternary model. Although our 4-bit model shows slightly decreased accuracy when compared with the 5-bit model, its accuracy is still better than that of the pre-trained full-precision model. Comparatively, even when the expected bit-width goes down to 3, our low-precision model shows only 0.19% and 0.33% losses in top-1 and top-5 recognition rates, respectively.
As for our 2-bit ternary model, although it incurs a 2.25% decrease in top-1 accuracy and a 1.56% decrease in top-5 accuracy in comparison to the pre-trained full-precision reference, its accuracy is considerably better than state-of-the-art results reported for the binary-weight network (BWN) (Rastegari et al., 2016) and the ternary weight network (TWN) (Li & Liu, 2016). Detailed results are summarized in Table 3 and Table 4.
Table 3: Our INQ generates extremely low-precision (4-bit and 3-bit) models with improved or very similar accuracy compared with the full-precision ResNet-18 model.
| Model | Bit-width | Top-1 error | Top-5 error |
|---|---|---|---|
| ResNet-18 ref | 32 | 31.73% | 11.31% |
| INQ | 5 | 31.02% | 10.90% |
| INQ | 4 | 31.11% | 10.99% |
| INQ | 3 | 31.92% | 11.64% |
| INQ | 2 (ternary) | 33.98% | 12.87% |
Table 4: Comparison of our 2-bit ternary model and some other binary or ternary models, including the BWN and the TWN approximations of ResNet-18.
| Method | Bit-width | Top-1 error | Top-5 error |
|---|---|---|---|
| BWN (Rastegari et al., 2016) | 1 | 39.20% | 17.00% |
| TWN (Li & Liu, 2016) | 2 (ternary) | 38.20% | 15.80% |
| INQ (ours) | 2 (ternary) | 33.98% | 12.87% |
3.4 LOW-BIT DEEP COMPRESSION
In the literature, the recently proposed deep compression method (Han et al., 2016) reports the best results so far on network compression without loss of model accuracy. Therefore, the last set of experiments is conducted to explore the potential of our INQ for much better deep compression. Note that Han et al. (2016) is a hybrid network compression solution combining three different techniques, namely network pruning (Han et al., 2015), vector quantization (Gong et al., 2014) and Huffman coding. Taking AlexNet as an example, network pruning gets 9× compression; however, this result is mainly obtained from the fully connected layers. Actually its compression performance on the convolutional layers is less than 3× (as can be seen in Table 4 of Han et al. (2016)). Besides, network pruning is realized by separately performing pruning and re-training in an iterative way, which is very time-consuming. It will cost at least several weeks to compress AlexNet. We solved this problem by our dynamic network surgery (DNS) method (Guo et al., 2016), which achieves about 7× speed-up in training and improves the performance of network pruning from 9× to 17.7×. In Han et al. (2016), after network pruning, vector quantization further improves the compression ratio from 9× to 27×, and Huffman coding finally boosts the compression ratio up to 35×. For fair comparison, we combine our proposed INQ and DNS, and compare the resulting method with Han et al. (2016). Detailed results are summarized in Table 5. When combining our proposed INQ and DNS, we achieve much better compression results compared with Han et al. (2016). Specifically, with 5-bit quantization, we can achieve 53× compression with slightly larger gains in both top-5 and top-1 recognition rates, yielding 51.43%/96.30% absolute improvement in compression performance compared with the full version/fair version (i.e., the combination of network pruning and vector quantization) of Han et al. (2016), respectively. Consistently better results have also been obtained for our 4-bit and 3-bit models.
Besides, we also perform a set of experiments on AlexNet to compare the performance of our INQ and vector quantization (Gong et al., 2014). For fair comparison, re-training is also used to enhance the performance of vector quantization, and we set the number of cluster centers for all of the 5 convolutional layers and 3 fully connected layers to 32 (i.e., 5-bit quantization). In the experiment, vector quantization incurs over 3% loss in model accuracy. When we change the number of cluster centers for the convolutional layers from 32 to 128, it gets an accuracy loss of 0.98%. This is consistent with the results reported in (Gong et al., 2014). Comparatively, vector quantization is mainly proposed to compress the parameters in the fully connected layers of a pre-trained full-precision CNN model, while our INQ addresses all network layers simultaneously and has no accuracy loss for 5-bit and 4-bit quantization.
Table 5: Comparison of the combination of our INQ and DNS, and deep compression method on AlexNet. Conv: Convolutional layer, FC: Fully connected layer, P: Pruning, Q: Quantization, H: Huffman coding.
| Method | Bit-width (Conv/FC) | Compression ratio | Decrease in top-1/top-5 error |
|---|---|---|---|
| Han et al. (2016) (P+Q) | 8/5 | 27× | 0.00%/0.03% |
| Han et al. (2016) (P+Q+H) | 8/5 | 35× | 0.00%/0.03% |
| Han et al. (2016) (P+Q+H) | 8/4 | - | -0.01%/0.00% |
| Our method (P+Q) | 5/5 | 53× | 0.08%/0.03% |
| Han et al. (2016) (P+Q+H) | 4/2 | - | -1.99%/-2.60% |
| Our method (P+Q) | 4/4 | 71× | -0.52%/-0.20% |
| Our method (P+Q) | 3/3 | 89× | -1.47%/-0.96% |
Therefore, it is evident that our INQ is much better than vector quantization. Last but not least, the final weights for vector quantization (Gong et al., 2014), network pruning (Han et al., 2015) and deep compression (Han et al., 2016) are still floating-point values, but the final weights for our INQ are in the form of either powers of two or zero. The direct advantage of our INQ is that the original floating-point multiplication operations can be replaced by cheaper binary bit shift operations on dedicated hardware like FPGAs.
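As a toy illustration of this advantage (our own sketch, not part of the paper's experiments), the product of an activation with a power-of-two weight reduces to an exponent shift, which `math.ldexp` exposes for floats and which maps to a binary shift in fixed-point hardware:

```python
import math

def mul_by_pow2_weight(x, sign, e):
    # equivalent to x * (sign * 2 ** e), computed as a sign flip
    # plus a shift of the floating-point exponent by e
    return math.ldexp(sign * x, e)

assert mul_by_pow2_weight(1.5, -1, -2) == 1.5 * -1 * 2 ** -2   # -0.375
```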
# 4 CONCLUSIONS
In this paper, we present INQ, a new network quantization method, to address the problem of how to convert any pre-trained full-precision (i.e., 32-bit floating-point) CNN model into a lossless low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which usually quantize all the network weights simultaneously, INQ is a more compact quantization framework. It incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition splits the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in INQ. The weights in the first group are directly quantized by a variable-length encoding method, forming a low-precision base for the original CNN model. The weights in the other group are re-trained while keeping all the quantized weights fixed, compensating for the accuracy loss from network quantization. More importantly, the operations of weight partition, group-wise quantization and re-training are repeated on the latest re-trained weight group in an iterative manner until all the weights are quantized, acting as an incremental network quantization and accuracy enhancement procedure. On the ImageNet large scale classification task, we conduct extensive experiments and show that our quantized CNN models with 5-bit, 4-bit, 3-bit and even 2-bit ternary weights have improved or at least comparable accuracy against their full-precision baselines, including AlexNet, VGG-16, GoogleNet and ResNets. As for future work, we plan to extend the incremental idea behind INQ from low-precision weights to low-precision activations and low-precision gradients (we have actually already made some good progress on it, as shown in our supplementary materials). We will also investigate computation and power efficiency by implementing our low-precision CNN models on hardware platforms.
# REFERENCES
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015a.
Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In ICML, 2015b.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, 2015.
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830v3, 2016.
Ross Girshick. Fast r-cnn. In ICCV, 2015.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115v1, 2014.
Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. In NIPS, 2016.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML, 2015.
Song Han, Jeff Pool, John Tran, and William J. Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015.
Song Han, Jeff Pool, John Tran, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061v1, 2016.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Fengfu Li and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711v1, 2016.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279v4, 2016.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In NIPS, 2014.
Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation from predicting 10,000 classes. In CVPR, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261v1, 2016.
Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. DeepFace: Closing the gap to human-level performance in face verification. In CVPR, 2014.
Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.
Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
# A APPENDIX 1: STATISTICAL ANALYSIS OF THE QUANTIZED WEIGHTS
Taking our 5-bit AlexNet model as an example, we analyze the distribution of the quantized weights. Detailed statistical results are summarized in Table 6. We can find: (1) in the 1st and 2nd convolutional layers, the values of {−2^−6, −2^−5, −2^−4, 2^−6, 2^−5, 2^−4} and {−2^−8, −2^−7, −2^−6, −2^−5, 0, 2^−8, 2^−7, 2^−6, 2^−5} occupy over 60% and 94% of all quantized weights, respectively; (2) the distributions of the quantized weights in the 3rd, 4th and 5th convolutional layers are similar to that of the 2nd convolutional layer, and more weights are quantized into zero in the 2nd, 3rd, 4th and 5th convolutional layers compared with the 1st convolutional layer; (3) in the 1st fully connected layer, the values of {−2^−10, −2^−9, −2^−8, −2^−7, 0, 2^−10, 2^−9, 2^−8, 2^−7} occupy about 98% of all quantized weights, and similar results can be seen for the 2nd fully connected layer; (4) generally, the distributions of the quantized weights in the convolutional layers are usually more scattered compared with the fully connected layers. This may be partially the reason why it is much easier to get good compression performance on fully connected layers in comparison to convolutional layers, when using methods such as network hashing (Chen et al., 2015b) and vector quantization (Gong et al., 2014); (5) for the 5-bit AlexNet model, the required bit-width for each layer is actually 4 but not 5.
Table 6: A statistical distribution of the quantized weights in our 5-bit AlexNet model.
(Per-layer percentages of each quantized weight value across the convolutional and fully connected layers of AlexNet; the percentages in each column sum to 100%, and the required bit-width for each layer is 4.)
# B APPENDIX 2: LOSSLESS CNNS WITH LOW-PRECISION WEIGHTS AND LOW-PRECISION ACTIVATIONS
Table 7: Comparison of our VGG-16 model with 5-bit weights and 4-bit activations, and the pre-trained reference with 32-bit floating-point weights and 32-bit floating-point activations.
| Network | Bit-width for weight/activation | Top-1 error | Top-5 error | Decrease in top-1/top-5 error |
|---|---|---|---|---|
| VGG-16 ref | 32/32 | 31.46% | 11.35% | |
| VGG-16 | 5/4 | 29.82% | 10.19% | 1.64%/1.16% |
Recently, we have made some good progress on developing our INQ for lossless CNNs with both low-precision weights and low-precision activations. According to the results summarized in Table 7, it can be seen that our VGG-16 model with 5-bit weights and 4-bit activations shows improved top-5 and top-1 recognition rates in comparison to the pre-trained reference with 32-bit floating-point weights and 32-bit floating-point activations. To the best of our knowledge, these should be the best results reported on the VGG-16 architecture so far.
1702.01806 | Beam Search Strategies for Neural Machine Translation | The basic concept in Neural Machine Translation (NMT) is to train a large
Neural Network that maximizes the translation performance on a given parallel
corpus. NMT is then using a simple left-to-right beam-search decoder to
generate new translations that approximately maximize the trained conditional
probability. The current beam search strategy generates the target sentence
word by word from left-to-right while keeping a fixed amount of active
candidates at each time step. First, this simple search is less adaptive as it
also expands candidates whose scores are much worse than the current best.
Secondly, it does not expand hypotheses if they are not within the best scoring
candidates, even if their scores are close to the best one. The latter one can
be avoided by increasing the beam size until no performance improvement can be
observed. While you can reach better performance, this has the drawback of a
slower decoding speed. In this paper, we concentrate on speeding up the decoder
by applying a more flexible beam search strategy whose candidate size may vary
at each time step depending on the candidate scores. We speed up the original
decoder by up to 43% for the two language pairs German-English and
Chinese-English without losing any translation quality. | http://arxiv.org/pdf/1702.01806 | Markus Freitag, Yaser Al-Onaizan | cs.CL | First Workshop on Neural Machine Translation, 2017 | Proceedings of the First Workshop on Neural Machine Translation,
2017 | cs.CL | 20170206 | 20170614 | 7 1 0 2
2017
n u J 4 1 ] L C . s c [
2 v 6 0 8 1 0 . 2 0 7 1 : v i X r a
# Beam Search Strategies for Neural Machine Translation
Markus Freitag and Yaser Al-Onaizan IBM T.J. Watson Research Center 1101 Kitchawan Rd, Yorktown Heights, NY 10598 {freitagm,onaizan}@us.ibm.com
# Abstract
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT then uses a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to-right while keeping a fixed number (beam) of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the drawback of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German→English and Chinese→English without losing any translation quality.
# 1 Introduction

Due to the fact that Neural Machine Translation (NMT) is reaching comparable or even better performance compared to the traditional statistical machine translation (SMT) models (Jean et al., 2015; Luong et al., 2015), it has become very popular in the recent years (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014). With the recent success of NMT, attention has shifted towards making it more practical. One of the challenges is the search strategy for extracting the best translation for a given source sentence. In NMT, new sentences are translated by a simple beam search decoder that finds a translation that approximately maximizes the conditional probability of a trained NMT model. The beam search strategy generates the translation word by word from left-to-right while keeping a fixed number (beam) of active candidates at each time step. By increasing the beam size, the translation performance can increase at the expense of significantly reducing the decoder speed. Typically, there is a saturation point at which the translation quality does not improve any more by further increasing the beam. The motivation of this work is twofold. First, we prune the search graph and thus speed up the decoding process without losing any translation quality. Secondly, we observed that the best scoring candidates often share the same history and often come from the same partial hypothesis. We limit the number of candidates coming from the same partial hypothesis to introduce more diversity without reducing the decoding speed by just using a higher beam.

# 2 Related Work
The original beam search for sequence to sequence models has been introduced and described by (Graves, 2012; Boulanger-Lewandowski et al., 2013), and by (Sutskever et al., 2014) for neural machine translation. (Hu et al., 2015; Mi et al., 2016) improved the beam search with a constrained softmax function which only considered a limited
word set of translation candidates to reduce the computation complexity. This has the advantage that they normalize only a small set of candidates and thus improve the decoding speed. (Wu et al., 2016) only consider tokens that have local scores that are not more than beamsize below the best token during their search. Further, the authors prune all partial hypotheses whose scores are beamsize lower than the best final hypothesis (if one has already been generated). In this work, we investigate different absolute and relative pruning schemes which have successfully been applied in statistical machine translation, e.g. for phrase table pruning (Zens et al., 2012).
# 3 Original Beam Search
The original beam-search strategy finds a translation that approximately maximizes the conditional probability given by a specific model. It builds the translation from left-to-right and keeps a fixed number (beam) of translation candidates with the highest log-probability at each time step. For each end-of-sequence symbol that is selected among the highest scoring candidates the beam is reduced by one and the translation is stored in a final candidate list. When the beam is zero, it stops the search and picks the translation with the highest log-probability (normalized by the number of target words) out of the final candidate list.
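For concreteness, a generic sketch of this procedure follows (our own illustration; `step_fn`, `bos`, `eos` and `max_len` are assumed interfaces rather than the authors' implementation, and `step_fn(prefix)` is assumed to return the best `(log_prob, token)` continuations of a partial translation):

```python
import heapq

def beam_search(step_fn, bos, eos, beam_size, max_len):
    beams = [(0.0, [bos])]                      # (log-probability, prefix)
    finished = []
    while beams and beam_size > 0 and len(beams[0][1]) < max_len:
        cands = [(score + lp, prefix + [tok])
                 for score, prefix in beams
                 for lp, tok in step_fn(prefix)]
        beams = heapq.nlargest(beam_size, cands, key=lambda c: c[0])
        kept = []
        for score, prefix in beams:
            if prefix[-1] == eos:               # store final hypothesis and
                finished.append((score, prefix))
                beam_size -= 1                  # shrink the beam by one
            else:
                kept.append((score, prefix))
        beams = kept
    # best translation, log-probability normalized by target length
    return max(finished, key=lambda c: c[0] / len(c[1]))[1] if finished else []
```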
# 4 Search Strategies
In this section, we describe the different strategies we experimented with. In all our extensions, we first reduce the candidate list to the current beam size and apply on top of this one or several of the following pruning schemes.
Relative Threshold Pruning. The relative threshold pruning method discards those candidates that are far worse than the best active candidate. Given a pruning threshold rp and an active candidate list C, a candidate cand ∈ C is discarded if:

score(cand) ≤ rp ∗ max_{c∈C} {score(c)}   (1)
Absolute Threshold Pruning. Instead of taking the relative difference of the scores into account, we just discard those candidates that are worse by a specific threshold than the best active candidate. Given a pruning threshold ap and an active candidate list C, a candidate cand ∈ C is discarded if:

score(cand) ≤ max_{c∈C} {score(c)} − ap   (2)
Relative Local Threshold Pruning. In this pruning approach, we only consider the score scorew of the last generated word and not the total score, which also includes the scores of the previously generated words. Given a pruning threshold rpl and an active candidate list C, a candidate cand ∈ C is discarded if:

scorew(cand) ≤ rpl ∗ max_{c∈C} {scorew(c)}   (3)

Maximum Candidates per Node. We observed that at each time step during the decoding process, most of the partial hypotheses share the same predecessor words. To introduce more diversity, we allow only a fixed number of candidates with the same history at each time step. Given a maximum candidate threshold mc and an active candidate list C, a candidate cand ∈ C is discarded if already mc better scoring partial hypotheses with the same history are in the candidate list.
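A literal transcription of the four rules applied to one time step might look as follows (our own sketch; each candidate is assumed to carry its total score, the score of its last word, and an identifier of the partial hypothesis it was expanded from):

```python
def prune_candidates(cands, rp=None, ap=None, rpl=None, mc=None):
    best = max(score for score, _, _ in cands)
    best_w = max(word_score for _, word_score, _ in cands)
    kept, per_hist = [], {}
    for score, word_score, hist in sorted(cands, key=lambda c: -c[0]):
        if rp is not None and score <= rp * best:            # Equation (1)
            continue
        if ap is not None and score <= best - ap:            # Equation (2)
            continue
        if rpl is not None and word_score <= rpl * best_w:   # Equation (3)
            continue
        if mc is not None and per_hist.get(hist, 0) >= mc:   # max cands/node
            continue
        per_hist[hist] = per_hist.get(hist, 0) + 1
        kept.append((score, word_score, hist))
    return kept
```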
# 5 Experiments
For the German→English translation task, we train an NMT system based on the WMT 2016 training data (Bojar et al., 2016) (3.9M parallel sentences). For the Chinese→English experiments, we use an NMT system trained on 11 million sentences from the BOLT project.
In all our experiments, we use our in-house attention-based NMT implementation, which is similar to (Bahdanau et al., 2014). For German→English, we use sub-word units extracted by byte pair encoding (Sennrich et al., 2015) instead of words, which shrinks the vocabulary to 40k sub-word symbols for both source and target. For Chinese→English, we limit our vocabularies to the top 300K most frequent words for both source and target language. Words not in these vocabularies are converted into an unknown token. During translation, we use the alignments (from the attention mechanism) to replace the unknown tokens either with potential targets (obtained from an IBM Model-1 trained on the parallel data) or with the source word itself (if no target was found) (Mi et al., 2016). We use an embedding dimension of 620 and fix the RNN GRU layers to be of 1000 cells each.
Figure 1: German→English: Original beam-search strategy with different beam sizes on newstest2014 (BLEU and average fan out per sentence).
Figure 2: German→English: Different values of relative pruning measured on newstest2014 (beam size 5; BLEU and average fan out per sentence).
For the training procedure, we use SGD (Bishop, 1995) to update model parameters with a mini-batch size of 64. The training data is shuffled after each epoch.
We measure the decoding speed by two numbers. First, we compare the actual speed relative to the same setup without any pruning. Secondly, we measure the average fan out per time step. For each time step, the fan out is defined as the number of candidates we expand. The fan out has an upper bound of the size of the beam, but can be decreased either due to early stopping (we reduce the beam every time we predict an end-of-sentence symbol) or by the proposed pruning schemes. For each pruning technique, we run the experiments with different pruning thresholds and chose the largest threshold that did not degrade the translation performance based on a selection set.
In Figure 1, you can see the German→English translation performance and the average fan out per sentence for different beam sizes.
Based on this experiment, we decided to run our pruning experiments for beam sizes 5 and 14. The German→English results can be found in Table 1. By using the combination of all pruning techniques, we can speed up the decoding process by 13% for beam size 5 and by 43% for beam size 14 without any drop in performance. The relative pruning technique is the best working one for beam size 5, whereas the absolute pruning technique works best for beam size 14. In Figure 2, the decoding speed with different relative pruning thresholds for beam size 5 is illustrated. Setting the threshold higher than 0.6 hurts the translation performance. A nice side effect is that it has become possible to decode without any fixed beam size when we apply pruning. Nevertheless, the decoding speed drops while the translation performance did not change. Further, we looked at the number of search errors introduced by our pruning schemes (number of times we prune the best scoring hypothesis). 5% of the sentences change due to search errors for beam size 5 and 9% of the sentences change for beam size 14 when using all four pruning techniques together.
The Chinese→English translation results can be found in Table 2. We can speed up the decoding process by 10% for beam size 5 and by 24% for beam size 14 without loss in translation quality. In addition, we measured the number of search errors introduced by pruning the search. Only 4% of the sentences change for beam size 5, whereas 22% of the sentences change for beam size 14.
# 6 Conclusion
The original beam search decoder used in Neural Machine Translation is very simple. It generates translations from left-to-right while looking at a fixed number (beam) of candidates from the last time step only. By setting the beam size large enough, we ensure that the best translation performance can be reached, with the drawback that many candidates whose scores are far away from the best are also explored. In this paper, we introduced several pruning techniques which prune candidates whose scores are far away from the best one. By applying a combination of absolute and relative pruning schemes, we speed up the decoder by up to 43% without losing any translation quality. Putting more diversity into the decoder did not improve the translation quality.
| pruning | beam size | speed up | avg fan out per sent | tot fan out per sent | newstest2014 BLEU | newstest2014 TER | newstest2015 BLEU | newstest2015 TER |
|---|---|---|---|---|---|---|---|---|
| no pruning | 1 | - | 1.00 | 25 | 25.5 | 55.4 | 26.1 | 56.8 |
| no pruning | 5 | - | 4.54 | 122 | 27.3 | 53.7 | 27.4 | 54.6 |
| rp=0.6 | 5 | 6% | 3.71 | 109 | 27.3 | 53.8 | 27.3 | 54.7 |
| ap=2.5 | 5 | 5% | 4.11 | 116 | 27.3 | 53.7 | 27.4 | 54.6 |
| rpl=0.02 | 5 | 5% | 4.25 | 118 | 27.3 | 53.8 | 27.4 | 54.7 |
| mc=3 | 5 | 0% | 4.54 | 126 | 27.4 | 53.8 | 27.5 | 54.6 |
| rp=0.6,ap=2.5,rpl=0.02,mc=3 | 5 | 13% | 3.64 | 101 | 27.3 | 53.8 | 27.3 | 54.6 |
| no pruning | 14 | - | 12.19 | 363 | 27.6 | 53.5 | 27.6 | 54.3 |
| rp=0.3 | 14 | 10% | 10.38 | 315 | 27.6 | 53.4 | 27.6 | 54.3 |
| ap=2.5 | 14 | 29% | 9.49 | 279 | 27.6 | 53.5 | 27.6 | 54.3 |
| rpl=0.3 | 14 | 24% | 10.27 | 306 | 27.6 | 53.4 | 27.7 | 54.4 |
| mc=3 | 14 | 1% | 12.21 | 347 | 27.6 | 53.4 | 27.7 | 54.4 |
| rp=0.3,ap=2.5,rpl=0.3,mc=3 | 14 | 43% | 8.44 | 260 | 27.6 | 53.4 | 27.6 | 54.5 |
| rp=0.3,ap=2.5,rpl=0.3,mc=3 | - | - | 28.46 | 979 | 27.6 | 53.3 | 27.6 | 54.4 |
Table 1: Results German→English: relative pruning (rp), absolute pruning (ap), relative local pruning (rpl) and maximum candidates per node (mc). Average fan out is the average number of candidates we keep at each time step during decoding.
pruning                     | beam size | speed up | avg fan out per sent | tot fan out per sent | MT08 nw BLEU | MT08 nw TER | MT08 wb BLEU | MT08 wb TER
no pruning                  | 1  | -   | 1.00  | 29   | 27.3 | 61.7 | 26.0 | 60.3
no pruning                  | 5  | -   | 4.36  | 137  | 34.4 | 57.3 | 30.6 | 58.2
rp=0.2                      | 5  | 1%  | 4.32  | 134  | 34.4 | 57.3 | 30.6 | 58.2
ap=5                        | 5  | 4%  | 4.26  | 132  | 34.3 | 57.3 | 30.6 | 58.2
rpl=0.01                    | 5  | 1%  | 4.35  | 135  | 34.4 | 57.5 | 30.6 | 58.3
mc=3                        | 5  | 0%  | 4.37  | 139  | 34.4 | 57.4 | 30.7 | 58.2
rp=0.2,ap=5,rpl=0.01,mc=3   | 5  | 10% | 3.92  | 121  | 34.3 | 57.3 | 30.6 | 58.2
no pruning                  | 14 | -   | 11.96 | 376  | 35.3 | 57.1 | 31.2 | 57.8
rp=0.2                      | 14 | 3%  | 11.62 | 362  | 35.2 | 57.2 | 31.2 | 57.8
ap=2.5                      | 14 | 14% | 10.15 | 321  | 35.2 | 56.9 | 31.1 | 57.9
rpl=0.3                     | 14 | 10% | 10.93 | 334  | 35.3 | 57.2 | 31.1 | 57.9
mc=3                        | 14 | 0%  | 11.98 | 378  | 35.3 | 56.9 | 31.1 | 57.8
rp=0.2,ap=2.5,rpl=0.3,mc=3  | 14 | 24% | 8.62  | 306  | 35.3 | 56.9 | 31.1 | 57.8
rp=0.2,ap=2.5,rpl=0.3,mc=3  | -  | -   | 38.76 | 1411 | 35.2 | 57.3 | 31.1 | 57.9
Table 2: Results Chinese→English: relative pruning (rp), absolute pruning (ap), relative local pruning (rpl) and maximum candidates per node (mc).
# References
D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. ArXiv e-prints.
Christopher M Bishop. 1995. Neural networks for pattern recognition. Oxford University Press.
Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation (WMT16). Proceedings of WMT.
Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. 2013. Audio chord recognition with recurrent neural networks. In ISMIR. Citeseer, pages 335–340.
Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711.
Xiaoguang Hu, Wei Li, Xiang Lan, Hua Wu, and Haifeng Wang. 2015. Improved beam search with constrained softmax for NMT. Proceedings of MT Summit XV, page 297.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL. Beijing, China, pages 1–10.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle.
Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL. Beijing, China, pages 11–19.
Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine translation. arXiv preprint arXiv:1605.03209.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014), December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Richard Zens, Daisy Stanton, and Peng Xu. 2012. A systematic comparison of phrase table pruning techniques. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 972–983.
"id": "1605.03209"
} |
1701.08718 | Memory Augmented Neural Networks with Wormhole Connections | Recent empirical results on long-term dependency tasks have shown that neural
networks augmented with an external memory can learn the long-term dependency
tasks more easily and achieve better generalization than vanilla recurrent
neural networks (RNN). We suggest that memory augmented neural networks can
reduce the effects of vanishing gradients by creating shortcut (or wormhole)
connections. Based on this observation, we propose a novel memory augmented
neural network model called TARDIS (Temporal Automatic Relation Discovery in
Sequences). The controller of TARDIS can store a selective set of embeddings of
its own previous hidden states into an external memory and revisit them as and
when needed. For TARDIS, memory acts as a storage for wormhole connections to
the past to propagate the gradients more effectively and it helps to learn the
temporal dependencies. The memory structure of TARDIS has similarities to both
Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but
both read and write operations of TARDIS are simpler and more efficient. We use
discrete addressing for read/write operations which helps to substantially to
reduce the vanishing gradient problem with very long sequences. Read and write
operations in TARDIS are tied with a heuristic once the memory becomes full,
and this makes the learning problem simpler when compared to NTM or D-NTM type
of architectures. We provide a detailed analysis on the gradient propagation in
general for MANNs. We evaluate our models on different long-term dependency
tasks and report competitive results in all of them. | http://arxiv.org/pdf/1701.08718 | Caglar Gulcehre, Sarath Chandar, Yoshua Bengio | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20170130 | 20170130
# Memory Augmented Neural Networks with Wormhole Connections
# Caglar Gulcehre Montreal Institute for Learning Algorithms Universite de Montreal Montreal, Canada
gulcehrc@iro.umontreal.ca
# Sarath Chandar Montreal Institute for Learning Algorithms Universite de Montreal Montreal, Canada
apsarathchandar@gmail.com
# Yoshua Bengio Montreal Institute for Learning Algorithms Universite de Montreal Montreal, Canada
yoshua.bengio@umontreal.ca
# Abstract
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the eï¬ects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more eï¬ectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more eï¬cient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on diï¬erent long-term dependency tasks and report competitive results in all of them.
# 1. Introduction
Recurrent Neural Networks (RNNs) are neural network architectures that are designed to handle temporal dependencies in sequential prediction problems. However it is well known that RNNs suffer from the issue of vanishing gradients as the length of the sequence and the dependencies increases (Hochreiter, 1991; Bengio et al., 1994). Long Short Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997) were proposed as an alternative architecture which can handle long range dependencies better than a vanilla RNN. A simplified version of the LSTM unit called Gated Recurrent Unit (GRU), proposed in (Cho et al., 2014), has proven to be successful in a number of applications (Bahdanau et al., 2015; Xu et al., 2015; Trischler et al., 2016; Kaiser and Sutskever, 2015; Serban et al., 2016). Even though LSTMs and GRUs attempt to solve the vanishing gradient problem, the memory in both architectures is stored in a single hidden vector as it is done in an RNN, and hence accessing information too far in the past can still be difficult. In other words, LSTM and GRU models have a limited ability to perform a search through their past memories when they need to access relevant information for making a prediction. Extending the capabilities of neural networks with a memory component has been explored in the literature on different applications with different architectures (Weston et al., 2015; Graves et al., 2014; Joulin and Mikolov, 2015; Grefenstette et al., 2015; Sukhbaatar et al., 2015; Bordes et al., 2015; Chandar et al., 2016; Gulcehre et al., 2016; Graves et al., 2016; Rae et al., 2016).
Memory augmented neural networks (MANN) such as neural Turing machines (NTM) (Graves et al., 2014; Rae et al., 2016), dynamic NTM (D-NTM) (Gulcehre et al., 2016), and Diï¬erentiable Neural Computers (DNC) (Graves et al., 2016) use an external memory (usually a matrix) to store information and the MANNâs controller can learn to both read from and write into the external memory. As we show here, it is in general possible to use particular MANNs to explicitly store the previous hidden states of an RNN in the memory and that will provide shortcut connections through time, called here wormhole connections, to look into the history of the states of the RNN controller. Learning to read and write into an external memory by using neural networks gives the model more freedom or ï¬exibility to retrieve information from its past, forget or store new information into the memory. However, if the addressing mechanism for read and/or write operations are continuous (like in the NTM and continuous D-NTM), then the access may be too diï¬use, especially early on during training. This can hurt especially the writing operation, since a diï¬used write operation will overwrite a large fraction of the memory at each step, yielding fast vanishing of the memories (and gradients). On the other hand, discrete addressing, as used in the discrete D-NTM, should be able to perform this search through the past, but prevents us from using straight backpropagation for learning how to choose the address.
We investigate the flow of the gradients and how the wormhole connections introduced by the controller affect it. Our results show that the wormhole connections created by the controller of the MANN can significantly reduce the effects of vanishing gradients by shortening the paths that the signal needs to travel between the dependencies. We also discuss how MANNs can generalize to sequences longer than the ones seen during training.
In a discrete D-NTM, the controller must learn to read from and write into the external memory by itself and, additionally, it should also learn the reader/writer synchronization. This can make learning more challenging. In spite of this difficulty, Gulcehre et al. (2016) reported that the discrete D-NTM can learn faster than the continuous D-NTM on some of the bAbI tasks. We provide a formal analysis of gradient flow in MANNs based on discrete addressing and justify this result. In this paper, we also propose a new MANN based on discrete addressing called TARDIS (Temporal Automatic Relation Discovery in Sequences). In TARDIS, memory access is based on tying the write and read heads of the model after the memory is filled up. While the memory is not full, the write head stores information in the memory in sequential order.
The main characteristics of TARDIS are as follows. TARDIS is a simple memory augmented neural network model which can represent long-term dependencies efficiently using an external memory of small size. TARDIS represents the dependencies between the hidden states inside the memory. We show both theoretically and experimentally that TARDIS fixes, to a large extent, the problems related to long-term dependencies. Our model can also store sub-sequences or sequence chunks into the memory; as a consequence, the controller can learn to represent high-level temporal abstractions as well. TARDIS performs well on several structured output prediction tasks, as verified in our experiments.
The idea of using external memory with attention can be justiï¬ed with the concept of mental-time travel which humans do occasionally to solve daily tasks. In particular, in the cognitive science literature, the concept of chronesthesia is known to be a form of consciousness which allows human to think about time subjectively and perform mental time-travel (Tulving, 2002). TARDIS is inspired by this ability of humans which allows one to look up past memories and plan for the future using the episodic memory.
# 2. TARDIS: A Memory Augmented Neural Network
Neural network architectures with an external memory represent the memory in a matrix form, such that at each time step t the model can both read from and write to the external memory. The whole content of the external memory can be considered as a generalization of hidden state vector in a recurrent neural network. Instead of storing all the information into a single hidden state vector, our model can store them in a matrix which has a higher capacity and with more targeted ability to substantially change or use only a small subset of the memory at each time step. The neural Turing machine (NTM) (Graves et al., 2014) is such an example of a MANN, with both reading and writing into the memory.
# 2.1 Model Outline
In this subsection, we describe the basic structure of TARDIS¹ (Temporal Automatic Relation Discovery In Sequences). TARDIS is a MANN with an external memory matrix M_t ∈ R^{k×q}, where k is the number of memory cells and q is the dimensionality of each cell. The model has an RNN controller which can read from and write to the external memory at every time step. To read from the memory, the controller generates the read weights w^r_t ∈ R^{k×1}, and the reading operation is typically achieved by computing the dot product between the read weights w^r_t and the memory M_t, resulting in the content vector r_t ∈ R^{q×1}:
r_t = (M_t)^T w^r_t,   (1)
TARDIS uses discrete addressing, so w^r_t is a one-hot vector and the dot product selects one of the cells in the memory matrix (Zaremba and Sutskever, 2015; Gulcehre et al., 2016). The controller generates the write weights w^w_t ∈ R^{1×k} to write into the memory, which is also a one-hot vector with discrete addressing. We will omit biases from our equations
1. The name of the model is inspired by the time machine in the popular TV series Doctor Who.
for simplicity in the rest of the paper. Let i be the index of the non-zero entry in the one-hot vector w^w_t; the controller then writes a linear projection of the current hidden state to the memory location M_t[i]:
M_t[i] = W_m h_t,   (2)
where W_m ∈ R^{d_m×d_h} is the projection matrix that projects the d_h-dimensional hidden state vector to a d_m-dimensional micro-state vector, with d_h > d_m.
At every time step, the hidden state ht of the controller is also conditioned on the content rt read from the memory. The wormhole connections are created by conditioning ht on rt:
h_t = φ(x_t, h_{t−1}, r_t).   (3)
As each cell in the memory is a linear projection of one of the previous hidden states, conditioning the controller's hidden state on the content read from the memory can be interpreted as a way of creating shortcut connections across time (from the time t′ at which h_{t′} was written to the time t when it was read through r_t), which can help the flow of gradients across time. This is possible because of the discrete addressing used for read and write operations.
However, the main challenge for the model is to learn proper read and write mechanisms so that it can write the hidden states of the previous time steps that will be useful for future predictions and read them at the right time step. We call this the reader/writer synchronization problem. Instead of designing complicated addressing mechanisms to mitigate the diï¬culty of learning how to properly address the external memory, TARDIS side-steps the reader/writer synchronization problem by using the following heuristics. For the ï¬rst k time steps, our model writes the micro-states into the k cells of the memory in a sequential order. When the memory becomes full, the most eï¬ective strategy in terms of preserving the information stored in the memory would be to replace the memory cell that has been read with the micro-state generated from the hidden state of the controller after it is conditioned on the memory cell that has been read. If the model needs to perfectly retain the memory cell that it has just overwritten, the controller can in principle learn to do that by copying its read input to its write output (into the same memory cell). The pseudocode and the details of the memory update algorithm for TARDIS is presented in Algorithm 1.
There are two missing pieces in Algorithm 1: How to generate the read weights? What is the structure of the controller function Ï? We will answer these two questions in detail in next two sub-sections.
# 2.2 Addressing Mechanism

Similar to D-NTM, the memory matrix M_t of TARDIS has a disjoint address section A_t ∈ R^{k×a} and content section C_t ∈ R^{k×c}, with M_t = [A_t; C_t] and M_t ∈ R^{k×q} for q = c + a. However, unlike D-NTM, the address vectors are fixed to random sparse vectors. The controller reads both the address and the content parts of the memory, but it only writes into the content section of the memory.

The read weights w̃^r_t are generated by an MLP which uses the information coming from h_{t−1}, x_t, M_t and the usage vector u_t (described below). The MLP is parametrized as follows:
Algorithm 1 Pseudocode for the controller and memory update mechanism of TARDIS.
Initialize h_0 and M_0.
for t ∈ {1, ..., T} do
    Compute the read weights w̃^r_t ← read(h_{t−1}, M_t, x_t)
    Sample from (or discretize) w̃^r_t and obtain the one-hot w^r_t
    Read from the memory: r_t ← (M_t)^T w^r_t
    Compute the new controller hidden state: h_t ← φ(x_t, h_{t−1}, r_t)
    if t ≤ k then
        Write into the memory: M_t[t] ← W_m h_t
    else
        Select the memory location to write into: j ← arg max_j w^r_t[j]
        Write into the memory: M_t[j] ← W_m h_t
    end if
end for
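For concreteness, the following is a minimal Python sketch of one such step. It assumes a (k, q) memory where, for simplicity, the whole cell is treated as content (glossing over the fixed address part); `read_logits` and `controller` are hypothetical stand-ins for the read MLP of Section 2.2 and the LSTM controller of Section 2.3:

```python
import numpy as np

def tardis_step(x_t, h_prev, M, t, k, read_logits, controller, W_m):
    """One TARDIS time step mirroring Algorithm 1 (shapes are assumptions)."""
    gamma = read_logits(h_prev, M, x_t)      # (k,) unnormalized read scores
    j = int(np.argmax(gamma))                # discretize: one-hot read address
    w_r = np.zeros(k)
    w_r[j] = 1.0
    r_t = M.T @ w_r                          # r_t = (M_t)^T w_t^r (Equation 1)
    h_t = controller(x_t, h_prev, r_t)       # h_t = phi(x_t, h_{t-1}, r_t)
    if t < k:
        M[t] = W_m @ h_t                     # fill the memory sequentially
    else:
        M[j] = W_m @ h_t                     # overwrite the cell that was read
    return h_t, r_t, M
```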
γ_t[i] = a^T tanh(W^γ_h h_{t−1} + W^γ_x x_t + W^γ_m M_t[i] + W^γ_u u_t),   (4)
w̃^r_t = softmax(γ_t),   (5)

where {a, W^γ_h, W^γ_x, W^γ_m, W^γ_u} are learnable parameters. w^r_t is a one-hot vector obtained either by sampling from w̃^r_t or by taking the argmax over w̃^r_t.
u_t is the usage vector, which denotes the frequency of accesses to each cell in the memory. u_t is computed by summing the discrete address vectors w^r_t and normalizing them:
u_t = norm(Σ_{i=1}^{t−1} w^r_i).   (6)
The norm(·) applied in Equation 6 is a simple feature-wise centering and divisive variance normalization; this normalization step makes training with the usage vectors easier. The usage vector can help the attention mechanism choose between the different memory cells based on the frequency of accesses to each cell. For example, if a memory cell is very rarely accessed by the controller, at the next time step the controller can learn to assign more weight to that cell by looking at the usage vector. In this way, the controller can learn an LRU access mechanism (Santoro et al., 2016; Gulcehre et al., 2016).
Further, in order to prevent the model from learning deficient addressing mechanisms (e.g., repeatedly reading the same memory cell, which would not increase the effective capacity of the memory), we decrease the probability of the last read memory location by subtracting 100 from the corresponding logit of w̃^r_t.
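A minimal sketch of Equations 4–6 with the re-read penalty follows; the parameter names and shapes are assumptions, and the per-cell loop is kept for readability rather than efficiency:

```python
import numpy as np

def read_weights(h_prev, x_t, M, read_counts, last_read, params):
    """Sketch of Equations 4-6: attention logits over memory cells.

    params holds the assumed learnable tensors (a, W_h, W_x, W_m, W_u);
    read_counts accumulates past one-hot read vectors (Equation 6).
    """
    a, W_h, W_x, W_m, W_u = params
    u = read_counts - read_counts.mean()          # feature-wise centering ...
    u = u / (u.std() + 1e-6)                      # ... and divisive normalization
    gamma = np.array([a @ np.tanh(W_h @ h_prev + W_x @ x_t
                                  + W_m @ M[i] + W_u @ u)
                      for i in range(len(M))])
    if last_read is not None:
        gamma[last_read] -= 100.0                 # discourage re-reading the same cell
    probs = np.exp(gamma - gamma.max())
    return probs / probs.sum()                    # softmax (Equation 5)
```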
# 2.3 TARDIS Controller
We use an LSTM controller, and its gates are modiï¬ed to take into account the content rt of the cell read from the memory:
[f_t, i_t, o_t]^T = sigm(W_h h_{t−1} + W_x x_t + W_r r_t),   (7)
where f_t, i_t, and o_t are the forget gate, input gate, and output gate respectively. α_t and β_t are scalar RESET gates which control the magnitude of the information flowing from the memory and the previous hidden states into the LSTM cell c_t. By controlling the flow of information into the LSTM cell, these gates allow the model to store sub-sequences or chunks of sequences into the memory instead of the entire context.
We use the Gumbel sigmoid (Maddison et al., 2016; Jang et al., 2016) for α_t and β_t due to its near-binary behavior.
(α_t, β_t) = gumbel-sigmoid(W^{αβ}_h h_{t−1} + W^{αβ}_x x_t + W^{αβ}_r r_t).   (8)
As in Equation 8, we empirically find the gumbel-sigmoid to be easier to train than the regular sigmoid. The temperature of the Gumbel-sigmoid is fixed to 0.3 in all our experiments.
The cell c_t of the LSTM controller is computed according to Equation 9, with the α_t and β_t RESET gates:
c̃_t = tanh(β_t W^g_h h_{t−1} + W^g_x x_t + α_t W^g_r r_t),
c_t = f_t c_{t−1} + i_t c̃_t.   (9)
The hidden state of the LSTM controller is computed as follows:
ht = ot tanh(ct). (10)
In Figure 1, we illustrate the interaction between the controller and the memory with various heads and components of the controller.
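The following Python sketch puts Equations 7–10 together for one controller update. The weight names, the single-Gumbel-noise form of the gumbel-sigmoid, and the packing of the three gates into one matrix are all assumptions made for illustration:

```python
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def tardis_lstm_cell(x_t, h_prev, c_prev, r_t, W, temp=0.3, rng=None):
    """Sketch of Equations 7-10; W is a dict of assumed weight matrices."""
    pre = W["h"] @ h_prev + W["x"] @ x_t + W["r"] @ r_t   # stacked gate pre-activations
    d = len(c_prev)
    f_t, i_t, o_t = sigm(pre[:d]), sigm(pre[d:2*d]), sigm(pre[2*d:3*d])
    # scalar RESET gates via a Gumbel-sigmoid variant (Equation 8), temp = 0.3
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(size=2)))             # Gumbel noise
    ab = W["ab_h"] @ h_prev + W["ab_x"] @ x_t + W["ab_r"] @ r_t
    alpha, beta = sigm((ab + g) / temp)
    c_tilde = np.tanh(beta * (W["g_h"] @ h_prev) + W["g_x"] @ x_t
                      + alpha * (W["g_r"] @ r_t))         # Equation 9
    c_t = f_t * c_prev + i_t * c_tilde
    h_t = o_t * np.tanh(c_t)                              # Equation 10
    return h_t, c_t
```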
# 2.4 Micro-states and Long-term Dependencies
A micro-state of the LSTM at a particular time step is a summary of the information that has been stored in the LSTM controller up to that point. By attending over the cells of the memory which contain previous micro-states of the LSTM, the model can explicitly learn to restore information from its own past.
The controller can learn to represent high-level temporal abstractions by creating wormhole connections through the memory, as illustrated in Figure 2. In this example, the model takes the token x0 at the first timestep and stores its representation in the first memory cell, with address a0. At the second timestep, the controller takes x1 as input and writes into the second memory cell, with address a1. Furthermore, the β1 gate blocks the connection from h1 to h2. At the third timestep, the controller starts reading. It receives x2 as input and reads the first memory cell, where the micro-state of h0 was stored. After reading, it computes the hidden state h2 and writes the micro-state of h2 into the first memory cell. The length of the path passing through the micro-states of h0 and h2 is 1; the wormhole connection from h2 to h0 thus skips a timestep.

Figure 1: At each time step the controller takes x_t, the memory cell that has been read r_t, and the hidden state of the previous timestep h_{t−1}. Then it generates α_t, which controls the contribution of r_t to the internal dynamics of the new controller state h_t (we omit β_t in this visualization). Once the memory M_t becomes full, the controller generates discrete addressing weights w^r_t which are used to both read from and write into the memory. To predict the target y_t, the model has to use both h_t and r_t. (Legend: MLP output; read/write output; observed input; output prediction; controller; general, multiplicative, and affine connections.)
A regular single-layer RNN, when considering only the connections through its recurrent states (the temporal axis), has a fixed graphical representation: a linear chain. TARDIS is more flexible in this regard: it can learn directed graphs with more diverse structures using the wormhole connections and the RESET gates. The directed graph that TARDIS can learn through its recurrent states has degree at most 4 at each vertex (at most 2 incoming and 2 outgoing edges), and it depends on the number of cells (k) that can be stored in the memory.
In this work, we focus on a variation of TARDIS, where the controller maintains a ï¬xed-size external memory. However as in (Cheng et al., 2016), it is possible to use a memory that grows with respect to the length of its input sequences, but that would not scale and can be more diï¬cult to train with discrete addressing.
Figure 2: TARDIS's controller can learn to represent the dependencies among the input tokens by choosing which cells to read and write, creating wormhole connections. x_t represents the input to the controller at timestep t and h_t is the hidden state of the controller RNN.
# 3. Training TARDIS
In this section, we explain how to train TARDIS as a language model. We use language modeling as an example application, but we would like to highlight that TARDIS can also be applied to any complex sequence-to-sequence learning task.
Consider N training examples where each example is a sequence of length T. At every time-step t, the model receives the input x_t ∈ {0, 1}^{|V|}, a one-hot vector of size equal to the size of the vocabulary |V|, and should produce the output y_t ∈ {0, 1}^{|V|}, which is also a one-hot vector of size |V|.
The output of the model for i-th example and t-th time-step is computed as follows:
o^{(i)}_t = softmax(W^o g(h^{(i)}_t, r^{(i)}_t)),   (11)
where W^o is a learnable parameter matrix and g(h_t, r_t) is a single-layer MLP which combines both h_t and r_t, as in the deep fusion of Pascanu et al. (2013a). The task loss is the categorical cross-entropy between the targets and the model outputs. The superscript i denotes that the variable is the output for the i-th sample in the training set.
L_model(θ) = −(1/N) Σ_{i=1}^{N} Σ_{t=1}^{T} Σ_{k=1}^{|V|} y^{(i)}_t[k] log(o^{(i)}_t[k]).   (12)
However, the discrete decisions taken for memory access at every time-step make the model non-differentiable, and hence we need to rely on approximate methods for computing gradients with respect to the discrete address vectors. In this paper we explore two such approaches: REINFORCE (Williams, 1992) and the straight-through estimator (Bengio et al., 2013).
# 3.1 Using REINFORCE
REINFORCE is a likelihood-ratio method which provides a convenient and simple way of estimating the gradients of stochastic actions. In this paper we focus on the application of REINFORCE to sequential prediction tasks, such as language modelling. For example i, let R(w^{r(i)}_j) be the reward obtained for the read action taken at timestep j. We are interested in maximizing the expected return for the whole episode, as defined below:
J(θ) = E[Σ_{j} R(w^r_j)].   (13)
Ideally we would like to compute the gradients of Equation 13 exactly; however, computing the gradient of the expectation may not be feasible. We therefore use a Monte-Carlo approximation and compute the gradients with REINFORCE, which for a sequential prediction task can be written as in Equation 14:
∇_θ J(θ) = (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} (R(w^{r(i)}_t) − b_t) ∇_θ log p(w^{r(i)}_t),   (14)
where b_j is the reward baseline. However, we can further assume that future actions do not depend on past rewards in the episode/trajectory, and further reduce the variance of REINFORCE as in Equation 15:
∇_θ J(θ) = (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} [Σ_{j=t}^{T} (R(w^{r(i)}_j) − b_j)] ∇_θ log p(w^{r(i)}_t).   (15)
In our preliminary experiments, we found that training the model is easier with discounted returns instead of the centered undiscounted return:
∇_θ J(θ) = (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} [Σ_{j=t}^{T} γ^{j−t} (R(w^{r(i)}_j) − b_j)] ∇_θ log p(w^{r(i)}_t).   (16)
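As a minimal sketch, the per-timestep coefficient multiplying each ∇_θ log p(w^r_t) term in Equation 16 can be computed as follows; the discount factor value is an assumption, since the paper does not state it here:

```python
import numpy as np

def discounted_centered_returns(rewards, baselines, gamma=0.97):
    """Coefficients multiplying each grad log p(w_t^r) term in Equation 16.

    gamma is an assumed discount; rewards and baselines are per-timestep arrays.
    """
    T = len(rewards)
    coeffs = np.zeros(T)
    for t in range(T):
        coeffs[t] = sum(gamma ** (j - t) * (rewards[j] - baselines[j])
                        for j in range(t, T))
    return coeffs

# the REINFORCE gradient is then sum_t coeffs[t] * grad_theta log p(w_t^r)
```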
Training REINFORCE with an Auxiliary Cost. Training models with REINFORCE can be difficult due to the variance it introduces into the gradients. In recent years, researchers have developed several tricks to mitigate the effect of this high variance. As proposed by Mnih and Gregor (2014), we also use variance normalization on the REINFORCE gradients.
A natural choice of reward R(w^{r(i)}_j) is the log-likelihood of the prediction at that timestep. Our initial experiments showed that REINFORCE with this reward structure often tends to under-utilize the memory and mainly rely on the internal memory of the LSTM controller. In particular, at the beginning of training the model can decrease the loss by relying on the controller's own memory alone, and this can cause REINFORCE to increase the log-likelihood of random actions.
In order to deal with this issue, instead of using the log-likelihood of the model as the reward, we introduce an auxiliary cost to use as the reward R′, which is computed based on predictions
that are based only on the memory cell r_t read by the controller, and not on the hidden state of the controller:
R′(w^{r(i)}_j) = Σ_k y^{(i)}_j[k] log(softmax(W^o r^{(i)}_j + W^x x^{(i)}_j))[k],   (17)
where W^x ∈ R^{d_o×d_x}, d_o is the dimensionality of the output and d_x is the dimensionality of the input of the model (for language modelling, both d_o and d_x would be |V|). We do not backpropagate through r^{(i)}_j when computing this auxiliary reward.
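A small sketch of this memory-only reward, assuming the readout matrices above and a numerically stable log-softmax:

```python
import numpy as np

def auxiliary_reward(y_onehot, r_j, x_j, W_o, W_x):
    """Sketch of Equation 17: reward from a memory-only prediction.

    W_o, W_x are the assumed readout matrices; no gradient flows into r_j.
    """
    logits = W_o @ r_j + W_x @ x_j
    m = logits.max()
    logp = logits - (m + np.log(np.exp(logits - m).sum()))  # log-softmax
    return float(y_onehot @ logp)  # log-likelihood of the target under r_j only
```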
# 3.2 Using Gumbel Softmax
Training with REINFORCE can be challenging due to the high variance of the gradients; Gumbel-softmax with a straight-through estimator provides a good alternative for tackling the variance issue. Unlike Maddison et al. (2016) and Jang et al. (2016), instead of annealing the temperature or fixing it, our model learns the inverse temperature with an MLP τ(h_t), which has a single scalar output conditioned on the hidden state of the controller:
τ(h_t) = softplus(w^τ h_t + b^τ) + 1,   (18)
gumbel-softmax(γ_t[i]) = softmax((γ_t[i] + ξ) τ(h_t)),   (19)

where ξ is sampled Gumbel noise.
We replace the softmax in Equation 5 with the gumbel-softmax defined above. During training we discretize w̃^r_t to obtain w^r_t for the forward pass, but use the continuous w̃^r_t for gradient computation, and hence rely on the straight-through estimator.
Learning the temperature of the Gumbel-Softmax reduces the burden of performing extensive hyper-parameter search for the temperature.
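A minimal sketch of Equations 18–19 with a straight-through discretization follows; the parameter names are assumptions, and in an autodiff framework the gradient would be routed through `soft` while `hard` is used in the forward pass:

```python
import numpy as np

def gumbel_softmax_st(gamma, h_t, w_tau, b_tau, rng):
    """Sketch of Equations 18-19 with straight-through discretization."""
    tau_inv = np.log1p(np.exp(w_tau @ h_t + b_tau)) + 1.0  # softplus(.) + 1
    g = -np.log(-np.log(rng.uniform(size=gamma.shape)))    # Gumbel noise
    z = (gamma + g) * tau_inv                               # scaled noisy logits
    soft = np.exp(z - z.max()); soft /= soft.sum()          # relaxed weights
    hard = np.zeros_like(soft); hard[soft.argmax()] = 1.0   # one-hot forward pass
    # straight-through: use `hard` forward, backprop through `soft`
    return hard, soft
```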
# 4. Related Work
The Neural Turing Machine (NTM) (Graves et al., 2014) is the class of architectures most closely related to our model. NTMs have proven successful in generalizing to sequences longer than those seen during training, and have been shown to be more effective at solving algorithmic tasks than gated models such as LSTMs. However, the NTM can have limitations due to some of its design choices. Because the controller lacks precise knowledge of the memory contents, the contents of the memory can overlap. These memory augmented models are also known to be complicated, which leads to difficulties in implementing and training them. Moreover, the controller has no information about the sequence of operations or statistics such as the frequency of read and write accesses to the memory. TARDIS tries to address these issues.
Gulcehre et al. (2016) proposed a variant of NTM called dynamic NTM (D-NTM) which had learnable location based addressing. D-NTM can be used with both continuous addressing and discrete addressing. Discrete D-NTM is related to TARDIS in the sense that both models use discrete addressing for all the memory operations. However, discrete D-NTM expects the controller to learn to read/write and also learn reader/writer synchronization.
TARDIS does not have this synchronization problem since the reader and the writer are tied. Rae et al. (2016) proposed a sparse access memory (SAM) mechanism for NTMs which can be seen as a hybrid of continuous and discrete addressing; SAM uses continuous addressing over a selected set of top-K relevant memory cells. Recently, Graves et al. (2016) proposed the differentiable neural computer (DNC), a successor of the NTM.
Rocktäschel et al. (2015) and (Cheng et al., 2016) proposed models that generate weights to attend over the previous hidden states of the RNN. However, since those models attend over the whole context, the computation of the attention can be ineï¬cient.
Grefenstette et al. (2015) proposed a model that can store information in a data structure, such as a stack, dequeue or queue, in a differentiable manner.
Grave et al. (2016) proposed a cache-based memory representation which stores the last k states of the RNN in the memory; similar to traditional cache-based models, the model learns to choose a state from the memory for the prediction in language modeling tasks (Kuhn and De Mori, 1990).
# 5. Gradient Flow through the External Memory
In this section, we analyze the ï¬ow of the gradients through the external memory and will also investigate its eï¬ciency in terms of dealing with the vanishing gradients problem (Hochreiter, 1991; Bengio et al., 1994). First, we describe the vanishing gradient problem in an RNN and then describe how an external memory model can deal with it. For the sake of simplicity, we will focus on vanilla RNNs during the entire analysis, but the same analysis can be extended to LSTMs. In our analysis, we also assume that the weights for the read/write heads are discrete.
We will show that the rate at which gradients vanish through time for a memory-augmented recurrent neural network is much smaller than that of a regular vanilla recurrent neural network.
Consider an RNN which at each timestep t takes an input x_t ∈ R^d and produces an output y_t ∈ R^o. The hidden state of the RNN can be written as
z_t = W h_{t−1} + U x_t,   (20)
h_t = f(z_t),   (21)
where W and U are the recurrent and the input weights of the RNN respectively, and f(·) is a non-linear activation function. Let L = Σ_t L_t be the loss function that the RNN is trying to minimize. Given an input sequence of length T, we can write the derivative of the loss L with respect to the parameters θ as
∂L/∂θ = Σ_{1≤t1≤T} ∂L_{t1}/∂θ = Σ_{1≤t1≤T} Σ_{1≤t0≤t1} (∂L_{t1}/∂h_{t1}) (∂h_{t1}/∂h_{t0}) (∂h_{t0}/∂θ).   (22)
The multiplication of many Jacobians of the form ∂h_t/∂h_{t−1} to obtain ∂h_{t1}/∂h_{t0} is the main cause of the vanishing and exploding gradients (Pascanu et al., 2013b):
∂h_{t1}/∂h_{t0} = Π_{t0<t≤t1} ∂h_t/∂h_{t−1} = Π_{t0<t≤t1} diag[f′(z_t)] W.   (23)
Let us assume that the singular values of a matrix M are ordered as σ_1(M) ≥ σ_2(M) ≥ · · · ≥ σ_n(M). Let α be an upper bound on the singular values of W, i.e. α ≥ σ_1(W); then the norm of the Jacobian satisfies (Zilly et al., 2016):
||∂h_t/∂h_{t−1}|| ≤ ||W|| ||diag[f′(z_t)]|| ≤ α σ_1(diag[f′(z_t)]).   (24)
Pascanu et al. (2013b) showed that for ||∂h_t/∂h_{t−1}|| ≤ σ_1(∂h_t/∂h_{t−1}) ≤ η < 1, the following inequality holds:
||Π_{t0≤t≤t1} ∂h_t/∂h_{t−1}|| ≤ Π_{t0≤t≤t1} η ≤ η^{t1−t0}.   (25)
Since η < 1, the norm of the product of Jacobians shrinks exponentially in t1 − t0, and the norm of the gradients will vanish exponentially fast.
Now consider the MANN where the contents of the memory are linear projections of the previous hidden states as described in Equation 2. Let us assume that both reading and writing operation use discrete addressing. Let the content read from the memory at time step t correspond to some memory location i:
r_t = M_t[i] = A h_{i_t},   (26)
where h_{i_t} is the hidden state of the controller at some previous timestep i_t. Now the hidden state of the controller in the external memory model can be written as
z_t = W h_{t−1} + V r_t + U x_t,   h_t = f(z_t).   (27)
If the controller reads Mt[i] at time step t and its memory content is Ahit as described above, then the Jacobians associated with Equation 27 can be computed as follows:
∂h_{t1}/∂h_{t0} = Π_{t0<t≤t1} ∂h_t/∂h_{t−1}
= Π_{t0<t≤t1} diag[f′(z_t)] W + Σ_{k=t0}^{t1−1} (Π_{k<t*≤t1} diag[f′(z_{t*})] W) diag[f′(z_k)] V A (∂h_{i_k}/∂h_{t0}) + diag[f′(z_{t1})] V A (∂h_{i_{t1}}/∂h_{t0})   (28)
= Q_{t1t0} + R_{t1t0},   (29)
where Q_{t1t0} and R_{t1t0} are defined as below:
Q_{t1t0} = Π_{t0<t≤t1} diag[f′(z_t)] W,   (30)
R_{t1t0} = Σ_{k=t0}^{t1−1} (Π_{k<t*≤t1} diag[f′(z_{t*})] W) diag[f′(z_k)] V A (∂h_{i_k}/∂h_{t0}) + diag[f′(z_{t1})] V A (∂h_{i_{t1}}/∂h_{t0}).   (31)
As shown in Equation 29, the Jacobians of the MANN can be rewritten as the sum of two matrices, Q_{t1t0} and R_{t1t0}. The gradients flowing through R_{t1t0} do not necessarily vanish through time, because R_{t1t0} is a sum of Jacobians computed over shorter paths.
The norm of the Jacobian can be lower-bounded as follows by using the Minkowski inequality:
||∂h_{t1}/∂h_{t0}|| = ||Π_{t0<t≤t1} ∂h_t/∂h_{t−1}||   (32)
= ||Q_{t1t0} + R_{t1t0}|| ≥ ||R_{t1t0}|| − ||Q_{t1t0}||.   (33)
Assuming that the length of the dependency is very long, ||Q_{t1t0}|| will vanish to 0. Then we have:
||Q_{t1t0} + R_{t1t0}|| ≥ ||R_{t1t0}||.   (34)
As one can see, the rate at which the gradients vanish through time depends on the length of the paths passing along R_{t1t0}, which is typically smaller than the length of the paths passing through Q_{t1t0}. Thus the gradients vanish at a lower rate than in an RNN. In particular, the rate depends strictly on the length of the shortest paths from t1 to t0, because for long enough dependencies the gradients through the longer paths will still vanish.
We can also derive an upper bound for the norm of the Jacobian as follows:
||∂h_{t1}/∂h_{t0}|| = ||Π_{t0<t≤t1} ∂h_t/∂h_{t−1}||   (35)
= ||Q_{t1t0} + R_{t1t0}|| ≤ σ_1(Q_{t1t0} + R_{t1t0}).   (36)
Using the result from Loyka (2015), we can lower-bound σ_1(Q_{t1t0} + R_{t1t0}) as follows:
σ_1(Q_{t1t0} + R_{t1t0}) ≥ |σ_1(Q_{t1t0}) − σ_1(R_{t1t0})|.   (37)
For long sequences we know that σ_1(Q_{t1t0}) will go to 0 (see Equation 25). Hence,
σ_1(Q_{t1t0} + R_{t1t0}) ≥ σ_1(R_{t1t0}).   (38)
The rate at which σ_1(R_{t1t0}) reaches zero is strictly smaller than the rate at which σ_1(Q_{t1t0}) reaches zero, and with ideal memory access it will not reach zero at all. Hence, unlike for vanilla RNNs, Equation 38 states that the upper bound on the norm of the Jacobian will not go to zero for a MANN with ideal memory access.
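This effect is easy to check numerically. The following sketch multiplies random contractive Jacobian factors (stand-ins for diag[f′(z_t)]W) and compares σ_1 of the product with and without a wormhole-style shortcut term; the scales and dimensions are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps = 32, 60
W = rng.normal(scale=0.8 / np.sqrt(d), size=(d, d))   # contractive recurrent weights
VA = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))  # stand-in for the V A term

Q = np.eye(d)
for _ in range(steps):
    D = np.diag(rng.uniform(0.0, 1.0, size=d))        # stand-in for diag[f'(z_t)]
    Q = D @ W @ Q                                     # product term Q_{t1 t0}

print("sigma_1(Q)      :", np.linalg.svd(Q, compute_uv=False)[0])       # ~0: vanished
print("sigma_1(Q + VA) :", np.linalg.svd(Q + VA, compute_uv=False)[0])  # stays O(1)
```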
Theorem 1 Consider a memory augmented neural network with T memory cells for a sequence of length T, where each hidden state of the controller is stored in a different cell of the memory. If the prediction at time step t1 has only a long-term dependency on t0, is independent of the tokens appearing before t0, and the memory reading mechanism is perfect, then the model will not suffer from vanishing gradients when we back-propagate from t1 to t0.²
Proof: If the input sequence has its longest dependency from t1 to t0, we are only interested in the gradients propagating from t1 to t0, i.e. the Jacobian ∂h_{t1}/∂h_{t0}. If the controller learns a perfect reading mechanism, then at time step t1 it reads the memory cell in which the hidden state of the RNN at time step t0 is stored. Thus, following the Jacobians defined in Equation 29, we can rewrite the Jacobian as
∂h_{t1}/∂h_{t0} = Π_{t0<t≤t1} diag[f′(z_t)] W + Σ_{k=t0}^{t1−1} (Π_{k<t*≤t1} diag[f′(z_{t*})] W) diag[f′(z_k)] V A (∂h_{i_k}/∂h_{t0}) + diag[f′(z_{t1})] V A.   (39)
In Equation 39, the first two terms might vanish as t; â to grows. However, the singular values of the third term do not change as t; â to grows. As a result, the gradients propagated from t, to to will not necessarily vanish through time. However, in order to obtain stable dynamics for the network, the initialization of the matrices, V and A is important.
This analysis highlights the fact that an external memory model with an optimal read/write mechanism can handle long-range dependencies much better than an RNN. However, this is applicable only when we use discrete addressing for the read/write operations. Both NTM and D-NTM still have to learn how to read and write from scratch, which is a challenging optimization problem. For TARDIS, tying the read and write operations makes learning much simpler for the model. In particular, the result of Theorem 1 points to the importance of designing better attention mechanisms over the memory.
The controller of a MANN may not be able to learn to use the memory efficiently. For example, some cells of the memory may remain empty or may never be read, and the controller can overwrite memory cells which have not been read, so the information stored in those overwritten cells can be lost completely. TARDIS avoids most of these issues by construction.
# 6. On the Length of the Paths Through the Wormhole Connections
As we have discussed in Section 5, the rate at which the gradients vanish for a MANN depends on the length of the paths passing along the wormhole connections. In this section
2. Let us note that, unlike a Markovian n-gram assumption, here we assume that at each time step n can be different.
we analyse those lengths in depth for untrained models, i.e. models that assign uniform probability to reading or writing each memory cell. This gives a better idea of how each untrained model uses the memory at the beginning of training.
A wormhole connection is created in TARDIS by reading a memory cell and writing into the same cell. For example, in Figure 2, while the actual path from h4 to h0 has length 4, memory cell a0 creates a shorter path of length 2 (h0 → h2 → h4). We denote the length of the actual path by T and the length of the shorter path created by wormhole connections by T_mem.
Consider a TARDIS model which has k cells in its memory. If TARDIS accesses each memory cell uniformly at random, then the probability of accessing any cell i is p[i] = 1/k. The expected length of the shorter path created by wormhole connections (T_mem) is proportional to the number of reads and writes into a memory cell. For TARDIS with a reader choosing a memory cell uniformly at random, this is T_mem = Σ_{t=1}^{T} p[i] = T/k − 1 at the end of the sequence. We verify this result by simulating the read and write heads of TARDIS, as in Figure 3 (a).
Figure 3: Expected path length in the memory cells for a sequence of length 200 and a memory of size 50, averaged over 100 simulations. (a) shows the results for TARDIS and (b) shows the simulation for a MANN with uniformly random read and write heads.
Now consider a MANN with separate read and write heads, each accessing the memory discretely and uniformly at random; let us call it uMANN. We will compute the expected length of the shorter path created by wormhole connections (T_mem) for uMANN. w^r_t and w^w_t are the read and write head weights, each sampled from a multinomial distribution with uniform probability over the memory cells. Let j_t be the index of the memory cell read at timestep t. For any memory cell i, len(·), defined below, is a recursive function that computes the length of the path created by wormhole connections in that cell:
len(M_t[i], i, j_t) = len(M_{t−1}[j_t], i, j_t) + 1   if w^w_t[i] = 1,
len(M_t[i], i, j_t) = len(M_{t−1}[i], i, j_t)          if w^w_t[i] = 0.   (40)
At the end of the sequence, E_{i,j_t}[len(M_t[i], i, j_t)] will be T/k − 1 by induction, for every memory cell. However, the proof assumes that when t is less than or equal to k,
the length of all paths stored in the memory len(Mt[i], i, jt) should be 0. We have run simulations to compute the expected path length in a memory cell of uMANN as in Figure 3 (b).
This analysis shows that while TARDIS with uniform read head maintains the same expected length of the shorter path created by wormhole connections as uMANN, it completely avoids the reader/writer synchronization problem.
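The following sketch reproduces the Figure 3 style Monte-Carlo estimate for a TARDIS-like reader with uniform reads and tied writes; the sequence length, memory size, and trial count mirror the setup described in the caption:

```python
import numpy as np

def expected_wormhole_path_length(T=200, k=50, trials=100, seed=0):
    """Monte-Carlo estimate of T_mem for a TARDIS-like uniform reader."""
    rng = np.random.default_rng(seed)
    lengths = np.zeros((trials, k))
    for s in range(trials):
        for t in range(k, T):              # memory is full after the first k steps
            j = rng.integers(k)            # uniform read; the write is tied to it
            lengths[s, j] += 1             # the wormhole path in cell j grows by 1
    return lengths.mean()                  # close to T/k - 1 for large T

print(expected_wormhole_path_length())     # roughly 200/50 - 1 = 3
```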
In expectation, σ_1(R_{t1t0}) will decay proportionally to T_mem, whereas σ_1(Q_{t1t0}) will decay proportionally³ to T. With ideal memory access, the rate at which σ_1(R_{t1t0}) reaches zero is strictly smaller than the rate at which σ_1(Q_{t1t0}) reaches zero. Hence, as per Equation 38, the upper bound on the norm of the Jacobian vanishes at a much smaller rate. However, this result assumes that the dependencies on which the prediction relies are accessible through the memory cell read by the controller.
Figure 4: Assuming that the prediction at t1 depends on t0, a wormhole connection can shorten the path by creating a connection from t1 − m to t0 + n. A wormhole connection may not directly connect t1 to t0, but it can create shorter paths through which the gradients can flow without vanishing. In this figure, we consider the case where a wormhole connection is created from t1 − m to t0 + n; this connection skips all the tokens between t1 − m and t0 + n.
In the more general case, consider a MANN with k ≥ T. The writer simply fills the memory cells in sequential order and the reader chooses a memory cell uniformly at random; let us call this model urMANN. Assume there is a dependency between two timesteps t0 and t1, as shown in Figure 4. If t0 is taken uniformly between 0 and t1 − 1, then with probability 0.5 the read address invoked at time t1 will be greater than or equal to t0 (by symmetry). In that case, the expected shortest path length through that wormhole connection would be (t1 − t0)/2, which still does not scale well. If the reader is very well trained, it could pick exactly t0 and the path length would be 1. Let us consider all the paths of length at most k + 1 of the form in Figure 4, with n ≤ k/2 and m ≤ k/2. Then the shortest path from t0 to t1 has length n + m + 1 ≤ k + 1, using a wormhole connection that connects the state at t0 + n with the state at t1 − m. There are O(k²) such paths that can be realized, but we leave the distribution of the length of the shortest path as an open question. However, the probability of hitting a very short path (of length at most k + 1) increases exponentially with k. Let the probability of the read at t1 − m hitting the interval (t0, t0 + k/2) be p. Then the probability
3. Exponentially, when Equation 25 holds.
that the shorter paths over the last k reads hit that interval is 1 − (1 − p)^{k/2}, where p is on the order of k/t1. In other words, the probability of not hitting that interval approaches 0 exponentially in k.
Figure 4 illustrates how wormhole connections can create shorter paths. In Figure 5 (b), we show that the expected length of the path travelled outside the wormhole connections, obtained from the simulations, decreases as the size of the memory increases; in particular, for urMANN and TARDIS the trend is very close to exponential. As shown in Figure 5 (a), this also influences the total length of the paths travelled from timestep 50 to 5. Writing into the memory with weights sampled uniformly over all memory cells does not use the memory as efficiently as the other approaches we compare to; in particular, fixing the writing mechanism seems to be useful.
Even if the reader does not manage to learn where to read, there are many "short paths" which can considerably reduce the eï¬ect of vanishing gradients.
Figure 5: We ran simulations for TARDIS, a MANN with uniformly random read and write mechanisms (uMANN), and a MANN with a uniformly random read head and a write head fixed with a heuristic (urMANN). In our simulations, we assume that there is a dependency from timestep 50 to timestep 5, and we run 200 simulations for each model and memory size. In plot (a) we show the expected length of the shortest path from timestep 50 to 5: as the memory gets larger, for all models, the length of the shortest path decreases dramatically. In plot (b) we show the expected length of the shortest path travelled outside the wormhole connections for different memory sizes. TARDIS seems to use the memory more efficiently than the other models, in particular when the memory is small, by creating shorter paths.
# 7. On Generalization over the Longer Sequences
Graves et al. (2014) showed that LSTMs cannot generalize well to sequences longer than the ones seen during training, whereas a MANN such as an NTM or a D-NTM has been shown to generalize to longer sequences on a set of toy tasks.
We believe that the main reason why LSTMs typically do not generalize to sequences longer than the ones seen during training is that the hidden
state of an LSTM network utilizes an unbounded history of the input sequence, and as a result its parameters are optimized under the maximum likelihood criterion to fit sequences of the lengths found in the training examples. An n-gram language model or an HMM does not suffer from this issue: an n-gram LM uses an input context with a fixed window size, and an HMM has the Markov property in its latent space. As argued below, we claim that a MANN can, while being trained, also learn the ability to generalize to sequences longer than those in the training set, by modifying the contents of the memory and reading from it.
A regular RNN minimizes the negative log-likelihood objective for the targets y_t by using the unbounded history represented by its hidden state, modeling the parametrized conditional distribution p(y_t|h_t; θ), whereas a MANN learns p(y_t|h_t, r_t; θ). If we assume that r_t represents all the dependencies in the input sequence that y_t relies on, we have p(y_t|h_t, r_t; θ) ≈ p(y_t|r_t, x_t; θ), where r_t captures the dependencies within a limited context window that contains only paths shorter than the sequences seen in the training set. Due to this property, we claim that MANNs such as NTM, D-NTM or TARDIS can generalize to longer sequences more easily. In our experiments on Penn TreeBank, we show that for a TARDIS language model trained to minimize the negative log-likelihood of p(y_t|h_t, r_t; θ), evaluating p(y_t|h_t, r_t; θ) and p(y_t|r_t, x_t; θ) on the test set yields very close results. The fact that the best results on the bAbI dataset in Gulcehre et al. (2016) were obtained with a feedforward controller, and similarly that Graves et al. (2014) used a feedforward controller to solve some of the toy tasks, also supports our hypothesis. As a result, what has been written into the memory and what has been read become very important for generalizing to longer sequences.
# 8. Experiments
# 8.1 Character-level Language Modeling on PTB
As a preliminary study of the performance of our model, we consider character-level language modelling. We evaluate our models on the Penn TreeBank (PTB) corpus (Marcus et al., 1993) using the train, valid and test splits of Mikolov et al. (2012). On this task we use layer normalization (Ba et al., 2016) and recurrent dropout (Semeniuta et al., 2016), as these are also used by the state-of-the-art results on this task; using them improves performance significantly and reduces overfitting. We train our models with Adam (Kingma and Ba, 2014) over sequences of length 150. We show our results in Table 1.
In addition to the regular char-LM experiments, we ran an experiment to confirm our hypothesis about the ability of MANNs to generalize to sequences longer than those seen during training. We trained a language model which learns p(y_t|h_t, r_t; θ) by using a softmax layer as described in Equation 11. To measure the performance of p(y_t|r_t, x_t; θ) on the test set, we used the softmax layer from the auxiliary cost defined for REINFORCE, as in Equation 17, for a model trained with REINFORCE and the auxiliary cost. As shown in Table 1, the model's performance using p(y_t|h_t, r_t; θ) is 1.26 BPC, while using p(y_t|r_t, x_t; θ) it becomes 1.28 BPC. This gap is small enough to support our assumption that p(y_t|h_t, r_t; θ) ≈ p(y_t|r_t, x_t; θ).
Model
CW-RNN (Koutnik et al., 2014)
HF-MRNN (Sutskever et al., 2011)
ME n-gram (Mikolov et al., 2012)
BatchNorm LSTM (Cooijmans et al., 2016)
Zoneout RNN (Krueger et al., 2016)
LayerNorm LSTM (Ha et al., 2016)
LayerNorm HyperNetworks (Ha et al., 2016)
LayerNorm HM-LSTM & Step Fn. & Slope Annealing (Chung et al., 2016)
Our LSTM + Layer Norm + Dropout
TARDIS + REINFORCE + R
TARDIS + REINFORCE + Auxiliary Cost
TARDIS + REINFORCE + Auxiliary Cost + R
TARDIS + Gumbel Softmax + ST + R
Table 1: Character-level language modelling results on the Penn TreeBank dataset. TARDIS with Gumbel softmax and the straight-through (ST) estimator performs better than REINFORCE and is competitive with the state of the art on this task. "+ R" denotes the use of the RESET gates α and β.
# 8.2 Sequential Stroke Multi-digit MNIST task
In this subsection, we introduce a new pen-stroke based sequential multi-digit MNIST prediction task as a benchmark for long term dependency modelling. We also benchmark the performance of LSTM and TARDIS in this challenging task.
# 8.2.1 Task and Dataset
Recently, de Jong (2016) introduced an MNIST pen stroke classification task along with a dataset of pen stroke sequences representing the skeletons of the digits in the MNIST dataset. Each MNIST digit image I is represented as a sequence of quadruples {(dx_i, dy_i, eos_i, eod_i)}_{i=1}^{T}, where T is the number of pen strokes defining the digit, (dx_i, dy_i) denotes the pen offset from the previous to the current stroke (each component can be 1, −1 or 0), eos_i is a binary feature denoting the end of a stroke, and eod_i is another binary feature denoting the end of the digit. In the original dataset, the first quadruple contains the absolute position (x, y) instead of the offsets (dx, dy); without loss of generality, we set the starting position (x, y) to (0, 0) in our experiments. Each digit is represented by about 40 strokes on average, and the task is to predict the digit at the end of the stroke sequence.
While this dataset was proposed for incremental sequence learning in de Jong (2016), we consider a multi-digit version of the dataset to benchmark models that can handle long-term dependencies. Specifically, given a sequence of pen-stroke sequences, the task is to predict the sequence of digits corresponding to each pen-stroke sequence, in the given order. This is a challenging task since it requires the model to learn to predict each digit from its pen-stroke sequence, count the number of digits, remember them, and generate them in the same order after seeing all the strokes. In our experiments we consider three versions of this task, with 5, 10, and 15 digit sequences respectively. We generated 200,000 training data
points by randomly sampling digits from the training set of the MNIST dataset. Similarly, we generated 20,000 validation and test data points by randomly sampling digits from the validation and test sets of the MNIST dataset respectively. The average lengths of the stroke sequences in these tasks are 199, 399, and 599 respectively.
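The following sketch illustrates this data layout; the stroke table is filled with random placeholder quadruples purely for shape-checking, whereas the real task would draw actual MNIST stroke sequences:

```python
import numpy as np

def make_multidigit_sequence(digit_strokes, rng, n_digits=5):
    """Concatenate per-digit stroke quadruples into one input sequence.

    digit_strokes maps a digit label to a list of (dx, dy, eos, eod)
    quadruples; labels form the target sequence to emit at the end.
    """
    labels = rng.integers(0, 10, size=n_digits)
    seq = []
    for label in labels:
        seq.extend(digit_strokes[label])            # ~40 quadruples per digit
    return np.array(seq, dtype=np.float32), labels

# hypothetical stroke table: 40 random quadruples per digit (placeholders only)
rng = np.random.default_rng(0)
table = {d: [(rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1]), 0, 0)
             for _ in range(40)] for d in range(10)}
x, y = make_multidigit_sequence(table, rng)
print(x.shape, y)   # (200, 4) and the 5 target digits
```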
Figure 6: An illustration of the sequential-strokes MNIST task with multiple digits. The network is first given the sequence of stroke (location) information for each MNIST digit as input; it then has to predict the MNIST digits it has just seen. During prediction, the predictions from the previous time steps are fed back into the network; at the first prediction step, the model instead receives a special <bos> token.
# 8.2.2 Results
We benchmark the performance of LSTM and TARDIS on this new task. Both models receive the sequence of pen strokes and, at the end of the sequence, are expected to generate the sequence of digits after a special <bos> token. The task is illustrated in Figure 6. We evaluate the models based on the per-digit error rate, and we compare the performance of TARDIS with REINFORCE against TARDIS with gumbel softmax. All models were trained for the same number of updates, with early stopping based on the per-digit error rate on the validation set. Results for all three versions of the task are reported in Table 2. From the table, we can see that TARDIS performs better than LSTM in all three versions of the task. Also, TARDIS with gumbel softmax performs slightly better than TARDIS with REINFORCE, which is consistent with our other experiments.
Model                        5-digits  10-digits  15-digits
LSTM                         3.54%     3.00%      8.81%
TARDIS with REINFORCE        2.56%     2.23%      3.67%
TARDIS with gumbel softmax   1.89%     2.09%      3.09%
Table 2: Per-digit test error in the sequential stroke multi-digit MNIST task with 5, 10, and 15 digits.
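The per-digit error rate used above can be sketched as follows: it is simply the fraction of positions where the predicted digit differs from the target digit, averaged over the test set (a minimal illustration, not the exact evaluation code).

```python
import numpy as np

def per_digit_error_rate(predictions, targets):
    """predictions, targets: integer arrays of shape (num_examples, num_digits)."""
    predictions = np.asarray(predictions)
    targets = np.asarray(targets)
    return float(np.mean(predictions != targets))

# e.g. per_digit_error_rate([[3, 1, 4]], [[3, 7, 4]]) == 1/3
```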
We also compare the learning curves of all three models in Figure 7. From the figure we can see that TARDIS learns to solve the task faster than LSTM by effectively utilizing
the given memory slots. Also, TARDIS with gumbel softmax converges faster than TARDIS with REINFORCE.
Figure 7: Learning curves for LSTM and TARDIS for sequential stroke multi-digit MNIST task with 5, 10, and 15 digits respectively.
# 8.3 NTM Tasks
Graves et al. (2014) proposed the associative recall and copy tasks to evaluate a model's ability to learn simple algorithms and to generalize to sequences longer than those seen during training. We trained a TARDIS model with 4 features for the address and 32 features for the memory-content part of the model. We used a model with a hidden state of size 120 and a memory of size 16. We trained our model with Adam, using a learning rate of 3e-3. We show the results of our model in Table 3. The TARDIS model was able to solve both tasks, both with Gumbel softmax and with REINFORCE.
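For reference, a minimal sketch of copy-task data in the style of Graves et al. (2014) is given below: the model reads a random binary sequence followed by a delimiter channel and must reproduce the sequence. The dimensions are illustrative and not the exact settings used in our experiments.

```python
import numpy as np

def copy_task_batch(batch_size, seq_len, num_bits, rng):
    seq = rng.integers(0, 2, size=(batch_size, seq_len, num_bits)).astype(np.float32)
    # Append an extra channel that is 1 only on the delimiter step.
    padded = np.concatenate([seq, np.zeros((batch_size, seq_len, 1), np.float32)], axis=2)
    delim = np.zeros((batch_size, 1, num_bits + 1), dtype=np.float32)
    delim[:, 0, -1] = 1.0
    inputs = np.concatenate([padded, delim], axis=1)
    return inputs, seq  # target: reproduce `seq` after the delimiter

rng = np.random.default_rng(0)
x, y = copy_task_batch(32, seq_len=10, num_bits=8, rng=rng)
```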
# 8.4 Stanford Natural Language Inference
Bowman et al. (2015) proposed a task to test a machine learning algorithm's ability to infer whether two given sentences entail each other, contradict each other, or are neutral (semantically independent). This task can also be considered a long-term dependency task if the premise and the hypothesis are presented to the model in sequential order, as explored by Rocktäschel et al. (2015), because the model must learn the dependency relationship between the hypothesis and the premise. Our model first reads the premise, then the hypothesis, and at the end of the hypothesis it predicts whether the premise and the hypothesis contradict or entail each other.
Model                                    Copy Task  Associative Recall
D-NTM cont. (Gulcehre et al., 2016)      Success    Success
D-NTM discrete (Gulcehre et al., 2016)   Success    Failure
NTM (Graves et al., 2014)                Success    Success
TARDIS + Gumbel Softmax + ST             Success    Success
TARDIS REINFORCE + Auxiliary Cost        Success    Success
Table 3: In this table, we consider a model to be successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 over sequences of the maximum length seen during training. We set the threshold to 0.02 as in (Gulcehre et al., 2016).
The model proposed by Rocktäschel et al. (2015) applies attention over its previous hidden states over the premise while it reads the hypothesis; in that sense, their model can still be considered to include some task-specific architectural design choices. TARDIS and our baseline LSTM models do not include any task-specific architectural design choices. In Table 4, we compare the results of different models. Our model performs significantly better than the other models. However, it has recently been shown that, with architectural tweaks, it is possible to design a model specifically to solve this task and achieve 88.2% test accuracy (Chen et al., 2016).
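A schematic sketch of this sequential reading setup is shown below: the premise and hypothesis are concatenated into a single token stream, separated by a hypothetical delimiter id, and a recurrent model predicts one of the three classes from its state after the final token. The identifiers here are illustrative, not part of our implementation.

```python
DELIM_ID = 1  # assumed special token id separating premise and hypothesis

def make_snli_input(premise_ids, hypothesis_ids):
    """Token ids for the whole premise-then-hypothesis stream."""
    return premise_ids + [DELIM_ID] + hypothesis_ids

# e.g. a 3-way classifier (entail / contradict / neutral) applied to the
# recurrent state after the last token:
# logits = classifier(rnn(make_snli_input(p, h))[-1])
```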
Model                                                      Test Accuracy
Word by Word Attention (Rocktäschel et al., 2015)          83.5
Word by Word Attention two-way (Rocktäschel et al., 2015)  83.2
LSTM + LayerNorm + Dropout                                 81.7
TARDIS + REINFORCE + Auxiliary Cost                        82.4
TARDIS + Gumbel Softmax + ST                               84.3
Table 4: Comparison of different baselines on the SNLI task.
# 9. Conclusion
In this paper, we propose a simple and efficient memory-augmented neural network model which performs well both on algorithmic tasks and on more realistic tasks. Unlike previous approaches, we show better performance on real-world NLP tasks, such as language modelling and SNLI. We have also proposed a new task to measure the performance of models dealing with long-term dependencies.
We provide a detailed analysis of the effects of using external memory on the gradients and justify why MANNs generalize better on sequences longer than those seen in the training set. We have also shown that the gradients vanish at a much slower rate (if they vanish at all) when an external memory is used. Our theoretical results should encourage further studies in the direction of developing better attention mechanisms that can create wormhole connections efficiently.
# Acknowledgments
We thank Chinnadhurai Sankar for suggesting the phrase "wormhole connections" and for proof-reading the paper. We would like to thank Dzmitry Bahdanau for comments and feedback on an earlier version of this paper. We would also like to thank the developers of Theano4 for developing such a powerful tool for scientific computing (Theano Development Team, 2016). We acknowledge the support of the following organizations for research funding and computing support: NSERC, Samsung, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. SC is supported by a FQRNT-PBEEE scholarship.
4. http://deeplearning.net/software/theano/
# References
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings Of The International Conference on Representation Learning (ICLR 2015), 2015.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. Enhancing and combining sequential and tree lstm for natural language inference. arXiv preprint arXiv:1609.06038, 2016.
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
Edwin D. de Jong. Incremental sequence learning. arXiv preprint arXiv:1611.03068, 2016.
Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio G. Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià P. Badia, Karl M. Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, advance online publication, October 2016. ISSN 0028-0836. doi: 10.1038/nature20101. URL http://dx.doi.org/10.1038/nature20101.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819–1827, 2015.
Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic neural turing machine with soft and hard addressing schemes. arXiv preprint arXiv:1607.00036, 2016.
David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, page 91, 1991.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198, 2015.
Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014.
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Roland Kuhn and Renato De Mori. A cache-based natural language model for speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(6):570–583, 1990.
Sergey Loyka. On singular value inequalities for the sum of two matrices. arXiv preprint arXiv:1507.06630, 2015.
Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J. Cernocky. Subword language modeling with neural networks. Preprint, http://www.fit.vutbr.cz/imikolov/rnnlm/char.pdf, 2012.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013a.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310–1318, 2013b.
Jack W. Rae, Jonathan J. Hunt, Tim Harley, Ivo Danihelka, Andrew W. Senior, Greg Wayne, Alex Graves, and Timothy P. Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. CoRR, abs/1610.09027, 2016.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.
Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024, 2011.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. arXiv preprint arXiv:1606.02270, 2016.
Endel Tulving. Chronesthesia: Conscious awareness of subjective time. 2002.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings Of The International Conference on Representation Learning (ICLR 2015), 2015. In Press.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings Of The International Conference on Representation Learning (ICLR 2015), 2015.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, abs/1505.00521, 2015.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
| {
"id": "1609.01704"
} |
1701.06538 | Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | The capacity of a neural network to absorb information is limited by its
number of parameters. Conditional computation, where parts of the network are
active on a per-example basis, has been proposed in theory as a way of
dramatically increasing model capacity without a proportional increase in
computation. In practice, however, there are significant algorithmic and
performance challenges. In this work, we address these challenges and finally
realize the promise of conditional computation, achieving greater than 1000x
improvements in model capacity with only minor losses in computational
efficiency on modern GPU clusters. We introduce a Sparsely-Gated
Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward
sub-networks. A trainable gating network determines a sparse combination of
these experts to use for each example. We apply the MoE to the tasks of
language modeling and machine translation, where model capacity is critical for
absorbing the vast quantities of knowledge available in the training corpora.
We present model architectures in which a MoE with up to 137 billion parameters
is applied convolutionally between stacked LSTM layers. On large language
modeling and machine translation benchmarks, these models achieve significantly
better results than state-of-the-art at lower computational cost. | http://arxiv.org/pdf/1701.06538 | Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean | cs.LG, cs.CL, cs.NE, stat.ML | null | null | cs.LG | 20170123 | 20170123 | 7 1 0 2
n a J 3 2 ] G L . s c [
1 v 8 3 5 6 0 . 1 0 7 1 : v i X r a
Under review as a conference paper at ICLR 2017
# OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER
Noam Shazeer1, Azalia Mirhoseini*†1, Krzysztof Maziarz*2, Andy Davis1, Quoc Le1, Geoffrey Hinton1 and Jeff Dean1
1Google Brain, {noam,azalia,andydavis,qvl,geoffhinton,jeff}@google.com 2Jagiellonian University, Cracow, krzysztof.maziarz@student.uj.edu.pl
# ABSTRACT
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
# 1 INTRODUCTION AND RELATED WORK
1.1 CONDITIONAL COMPUTATION
Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text (Sutskever et al., 2014; Bahdanau et al., 2014; Jozefowicz et al., 2016; Wu et al., 2016), images (Krizhevsky et al., 2012; Le et al., 2012), and audio (Hinton et al., 2012; Amodei et al., 2015). For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.
Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs (Davis & Arel, 2013; Bengio et al., 2013; Eigen et al., 2013; Ludovic Denoyer, 2014; Cho & Bengio, 2014; Bengio et al., 2015; Almahairi et al., 2015). In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.
*Equally major contributors. †Work done as a member of the Google Brain Residency program (g.co/brainresidency).
1
# Under review as a conference paper at ICLR 2017
Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:
⢠Modern computing devices, especially GPUs, are much faster at arithmetic than at branch- ing. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.
⢠Large batch sizes are critical for performance, as they amortize the costs of parameter trans- fers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.
⢠Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be com- putationally efï¬cient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional com- putation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.
⢠Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. Bengio et al. (2015) use three such terms. These issues can affect both model quality and load-balancing.
⢠Model capacity is most critical for very large data sets. The existing literature on condi- tional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufï¬cient signal to adequately train a model with millions, let alone billions of parameters.
In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.
1.2 OUR APPROACH: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER
Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure 1). All parts of the network are trained jointly by back-propagation.
While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers (Hochreiter & Schmidhuber, 1997), as in Figure 1. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix E Table 9). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
1.3 RELATED WORK ON MIXTURES OF EXPERTS
Since its introduction more than two decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994), the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs (Collobert et al., 2002), Gaussian Processes (Tresp, 2001; Theis & Bethge, 2015; Deisenroth & Ng, 2015), Dirichlet Processes (Shahbaba & Neal, 2009), and deep networks. Other work has focused on different expert configurations, such as a hierarchical structure (Yao et al., 2009), infinite numbers of experts (Rasmussen & Ghahramani, 2002), and adding experts sequentially (Aljundi et al., 2016). Garmash & Monz (2016) suggest an ensemble model in the format of a mixture of experts for machine translation; the gating network is trained on a pre-trained ensemble NMT model.
The works above concern top-level mixtures of experts, where the mixture of experts is the whole model. Eigen et al. (2013) introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.
Our work builds on this use of MoEs as a general purpose neural network component. While Eigen et al. (2013) uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.
# 2 THE STRUCTURE OF THE MIXTURE-OF-EXPERTS LAYER
The Mixture-of-Experts (MoE) layer consists of a set of n "expert networks" E1, · · · , En, and a "gating network" G whose output is a sparse n-dimensional vector. Figure 1 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.
Let us denote by G(x) and E_i(x) the output of the gating network and the output of the i-th expert network for a given input x. The output y of the MoE module can be written as follows:
y = Σ_{i=1}^{n} G(x)_i E_i(x)    (1)
We save computation based on the sparsity of the output of G(x). Wherever G(x)_i = 0, we need not compute E_i(x). In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix B.
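A minimal dense-algebra sketch of Equation (1) with sparse gating is given below; experts whose gate value is zero are never evaluated. The experts here are bare weight matrices purely for illustration, not the feed-forward networks used in our experiments.

```python
import numpy as np

def moe_forward(x, gates, expert_weights):
    """x: (d_in,); gates: (n,) mostly-zero vector G(x); expert_weights: list of (d_in, d_out)."""
    y = np.zeros(expert_weights[0].shape[1])
    for i in np.nonzero(gates)[0]:      # skip every expert with G(x)_i == 0
        y += gates[i] * (x @ expert_weights[i])
    return y
```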
Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in (Cho & Bengio, 2014). A MoE whose experts have one hidden layer is similar to the block-wise dropout described in (Bengio et al., 2015), where the dropped-out layer is sandwiched between fully-activated layers.
2.1 GATING NETWORK
Softmax Gating: A simple choice of non-sparse gating function (Jordan & Jacobs, 1994) is to multiply the input by a trainable weight matrix W_g and then apply the Softmax function.
G_σ(x) = Softmax(x · W_g)    (2)
Noisy Top-K Gating: We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to −∞ (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of the gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix A. The amount of noise per component is controlled by a second trainable weight matrix W_noise.
G(x) = Softmax(KeepTopK(H(x), k))    (3)
H(x)_i = (x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i)    (4)
KeepTopK(v, k)_i = v_i if v_i is in the top k elements of v, and −∞ otherwise.    (5)
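A sketch of noisy top-k gating, Equations (3)-(5), for a single input x follows; the weight shapes and the use of NumPy are illustrative only.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def softplus(v):
    return np.log1p(np.exp(v))

def noisy_top_k_gating(x, W_g, W_noise, k, rng):
    clean = x @ W_g                                           # (n,)
    noisy = clean + rng.standard_normal(clean.shape) * softplus(x @ W_noise)
    top_k = np.argsort(noisy)[-k:]                            # k largest logits
    masked = np.full_like(noisy, -np.inf)                     # KeepTopK: rest -> -inf
    masked[top_k] = noisy[top_k]
    return softmax(masked)                                    # exact zeros outside top k

rng = np.random.default_rng(0)
gates = noisy_top_k_gating(np.ones(8), np.zeros((8, 4)), np.zeros((8, 4)), k=2, rng=rng)
```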
Training the Gating Network: We train the gating network by simple back-propagation, along with the rest of the model. If we choose k > 1, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in (Bengio et al., 2013) with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from (Bengio et al., 2015), who use boolean gates and a REINFORCE-style approach to train the gating network.
# 3 ADDRESSING PERFORMANCE CHALLENGES
3.1 THE SHRINKING BATCH PROBLEM
On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses k out of n experts for each example, then for a batch of b examples, each expert receives a much smaller batch of approximately kb/n ≪ b examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:
Mixing Data Parallelism and Model Parallelism: In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over d devices, and each device processes a batch of size b, each expert receives a batch of approximately kbd/n examples. Thus, we achieve a factor of d improvement in expert batch size. In the case of a hierarchical MoE (Section B), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.
This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.
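A worked example of the expert batch-size arithmetic above, with illustrative numbers rather than our actual cluster configuration:

```python
# d devices, batch of b examples per device, k active experts out of n.
d, b, k, n = 16, 1024, 4, 256
naive_per_expert = k * b // n         # 16 examples per expert without combining
combined_per_expert = k * b * d // n  # 256 examples per expert with the technique
```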
Taking Advantage of Convolutionality: In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.
Increasing Batch Size for a Recurrent MoE: We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. Gruslys et al. (2016) describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.
3.2 NETWORK BANDWIDTH
Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.
# 4 BALANCING EXPERT UTILIZATION
We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. Eigen et al. (2013) describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. Bengio et al. (2015) include a soft constraint on the batch-wise average of each gate.1
We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss L_importance, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor w_importance. This additional loss encourages all experts to have equal importance.
Importance(X) = Σ_{x∈X} G(x)    (6)
L_importance(X) = w_importance · CV(Importance(X))^2    (7)
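A sketch of Equations (6)-(7) over a batch of gate vectors is shown below; the small epsilon guarding against division by zero is our own addition for numerical safety.

```python
import numpy as np

def importance_loss(gate_matrix, w_importance):
    """gate_matrix: (batch, n_experts) array of gate values G(x) per example."""
    importance = gate_matrix.sum(axis=0)                   # Importance(X), Eq. (6)
    cv_squared = importance.var() / (importance.mean() ** 2 + 1e-10)
    return w_importance * cv_squared                       # Eq. (7)
```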
1Bengio et al. (2015) also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of k. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.
While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, L_load, which ensures balanced loads. Appendix A contains the definition of this function, along with experimental results.
# 5 EXPERIMENTS
5.1 1 BILLION WORD LANGUAGE MODELING BENCHMARK
Dataset: This dataset, introduced by (Chelba et al., 2013), consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.
Previous State-of-the-Art: The best previously published results (Jozefowicz et al., 2016) use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers (Hochreiter & Schmidhuber, 1997; Gers et al., 2000). The number of parameters in the LSTM layers of these models varies from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure 2-right.
MoE Models: Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure 1). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix C.
Low Computation, Varied Capacity: To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.
The results of these models are shown in Figure 2-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.
Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.
Varied Computation, High Capacity: In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix C.2. Results of these three models form the bottom line of Figure 2-right.
Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.
Model                     Test Perplexity  Test Perplexity  #Parameters excluding         ops/timestep   Training Time       TFLOPS/GPU
                          10 epochs        100 epochs       embedding and softmax layers                 10 epochs
Best Published Results    34.7             30.6             151 million                   151 million    59 hours, 32 k40s   1.09
Low-Budget MoE Model      34.1                              4303 million                  8.9 million    15 hours, 16 k40s   0.74
Medium-Budget MoE Model   31.3                              4313 million                  33.8 million   17 hours, 32 k40s   1.22
High-Budget MoE Model     28.0                              4371 million                  142.7 million  47 hours, 32 k40s   1.56
Table 1 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
Computational Efficiency: We trained our models using TensorFlow (Abadi et al., 2016) on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix C, Table 7.
5.2 100 BILLION WORD GOOGLE NEWS CORPUS
Figure 3: Language modeling on a 100 billion word corpus. Models have similar computational budgets (8 million ops/timestep).
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure 2-left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.
We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024,
4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix D.
Results: Figure 3 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.
Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.
5.3 MACHINE TRANSLATION (SINGLE LANGUAGE PAIR)
Model Architecture: Our model was a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2, respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts, each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix E.
Datasets: We benchmarked our method on the WMT'14 En→Fr and En→De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in (Wu et al., 2016): newstest2014 was used as the test set to compare against previous work (Luong et al., 2015a; Zhou et al., 2016; Wu et al., 2016), while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's Production English to French data.
Table 2: Results on WMT'14 En→Fr newstest2014 (bold values represent best results).

Model                                        Test Perplexity  Test BLEU  ops/timestep  Total #Parameters  Training Time
MoE with 2048 Experts                        2.69             40.35      85M           8.7B               3 days/64 k40s
MoE with 2048 Experts (longer training)      2.63             40.56      85M           8.7B               6 days/64 k40s
GNMT (Wu et al., 2016)                       2.79             39.22      214M          278M               6 days/96 k80s
GNMT+RL (Wu et al., 2016)                    2.96             39.92      214M          278M               6 days/96 k80s
PBMT (Durrani et al., 2014)                                   37.0
LSTM (6-layer) (Luong et al., 2015b)                          31.5
LSTM (6-layer+PosUnk) (Luong et al., 2015b)                   33.1
DeepAtt (Zhou et al., 2016)                                   37.7
DeepAtt+PosUnk (Zhou et al., 2016)                            39.2
Table 3: Results on WMT'14 En→De newstest2014 (bold values represent best results).

Model                          Test BLEU
MoE with 2048 Experts          26.03
GNMT (Wu et al., 2016)         24.91
GNMT+RL (Wu et al., 2016)      24.66
PBMT (Durrani et al., 2014)    20.7
DeepAtt (Zhou et al., 2016)    20.6
Table 4: Results on the Google Production En→Fr dataset (bold values represent best results).
Model                    Eval Perplexity  Eval BLEU  Test Perplexity  Test BLEU  ops/timestep  Total #Parameters  Training Time
MoE with 2048 Experts    2.60             37.27      2.69             36.57      85M           8.7B               1 day/64 k40s
GNMT (Wu et al., 2016)   2.78             35.80      2.87             35.56      214M          278M               6 days/96 k80s
Results: Tables 2, 3, and 4 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En→Fr and En→De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in (Wu et al., 2016). The perplexity scores are also better.2 On the Google Production dataset, our model achieved a 1.01 higher test BLEU score even after training for only one sixth of the time.
5.4 MULTILINGUAL MACHINE TRANSLATION
Dataset: (Johnson et al., 2016) train a single GNMT (Wu et al., 2016) model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix E for details on model architecture. We train our model on the same dataset as (Johnson et al., 2016) and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.
Results: Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table 5. The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English → Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.
Table 5: Multilingual Machine Translation (bold values represent best results).

                                 GNMT-Mono      GNMT-Multi         MoE-Multi         MoE-Multi vs. GNMT-Multi
Parameters                       278M / model   278M               8.7B
ops/timestep                     212M           212M               102M
training time, hardware          various        21 days, 96 k20s   12 days, 64 k40s
Perplexity (dev)                                4.14               3.35              -19%
French → English Test BLEU       36.47          34.40              37.46             +3.06
German → English Test BLEU       31.77          31.17              34.80             +3.63
Japanese → English Test BLEU     23.41          21.62              25.91             +4.29
Korean → English Test BLEU       25.42          22.87              28.71             +5.84
Portuguese → English Test BLEU   44.40          42.53              46.13             +3.60
Spanish → English Test BLEU      38.00          36.04              39.39             +3.35
English → French Test BLEU       35.37          34.00              36.59             +2.59
English → German Test BLEU       26.43          23.15              24.53             +1.38
English → Japanese Test BLEU     23.66          21.10              22.78             +1.68
English → Korean Test BLEU       19.75          18.41              16.62             -1.79
English → Portuguese Test BLEU   38.40          37.35              37.90             +0.55
English → Spanish Test BLEU      34.50          34.25              36.21             +1.96
# 6 CONCLUSION
This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.
ACKNOWLEDGMENTS
We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better.
2Reported perplexities relative to the tokenization used by both our models and GNMT.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.
Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. CoRR, abs/1611.06194, 2016. URL http://arxiv.org/abs/1611.06194.
A. Almahairi, N. Ballas, T. Cooijmans, Y. Zheng, H. Larochelle, and A. Courville. Dynamic Capacity Networks. ArXiv e-prints, November 2015.
Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Y. Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, and Zhenyao Zhu. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595, 2015.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
K. Cho and Y. Bengio. Exponentially Increasing the Capacity-to-Computation Ratio for Conditional Computation in Deep Learning. ArXiv e-prints, June 2014.
Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. Neural Computing, 2002.
Andrew Davis and Itamar Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.
Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In ICML, 2015.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization, 2010.
Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation, 2014.
David Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013.
Ekaterina Garmash and Christof Monz. Ensemble learning for multi-source neural machine translation. In staff.science.uva.nl/c.monz, 2016.
Felix A. Gers, Jürgen A. Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 2000.
Audrunas Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. CoRR, abs/1606.03401, 2016. URL http://arxiv.org/abs/1606.03401.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2015.
Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive mixtures of local experts. Neural Computing, 1991.
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558, 2016. URL http://arxiv.org/abs/1611.04558.
Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computing, 1994.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling, 1995.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.
Ludovic Denoyer and Patrick Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. EMNLP, 2015a.
Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. ACL, 2015b.
Carl Edward Rasmussen and Zoubin Ghahramani. Infinite mixtures of Gaussian process experts. NIPS, 2002.
Hasim Sak, Andrew W Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338–342, 2014.
Mike Schuster and Kaisuke Nakajima. Japanese and Korean voice search. ICASSP, 2012.
Babak Shahbaba and Radford Neal. Nonlinear models using dirichlet process mixtures. JMLR, 2009.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In NIPS, 2015.
Volker Tresp. Mixtures of Gaussian Processes. In NIPS, 2001.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In NIPS. 2009.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199, 2016.
# APPENDICES
A LOAD-BALANCING LOSS
As discussed in section 4, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it cannot be used in back-propagation. Instead, we define a smooth estimator Load(X) of the number of examples assigned to each expert for a batch X of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define P(x, i) as the probability that G(x)_i is nonzero, given a new random choice of noise on element i, but keeping the already-sampled choices of noise on the other elements. To compute P(x, i), we note that G(x)_i is nonzero if and only if H(x)_i is greater than the kth-greatest element of H(x) excluding itself. The probability works out to be:
P(x, i) = Pr( (x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i) > kth_excluding(H(x), k, i) )    (8)
where kth_excluding(v, k, i) denotes the kth highest component of v, excluding component i. Simplifying, we get:
P(x, i) = Φ( ((x · W_g)_i − kth_excluding(H(x), k, i)) / Softplus((x · W_noise)_i) )    (9)
where Φ is the CDF of the standard normal distribution.
Load(X)_i = Σ_{x∈X} P(x, i)    (10)
We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor w_load.
L_load(X) = w_load · CV(Load(X))²    (11)
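The estimator and loss above are straightforward to implement. The following is a minimal NumPy sketch of Eqs. (8)-(11); the batch size, model dimension, number of experts, and the explicit loop over experts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def softplus(z):
    return np.log1p(np.exp(z))

def load_loss(x, w_g, w_noise, k, w_load=0.1):
    """x: (batch, d) inputs; w_g, w_noise: (d, n) gating weights for n experts."""
    clean = x @ w_g                                    # (batch, n)
    noise_scale = softplus(x @ w_noise)                # (batch, n)
    h = clean + np.random.randn(*clean.shape) * noise_scale   # H(x)
    batch, n = h.shape
    thresh = np.empty_like(h)
    for i in range(n):
        others = np.delete(h, i, axis=1)
        thresh[:, i] = np.sort(others, axis=1)[:, -k]  # kth_excluding(H(x), k, i)
    p = norm.cdf((clean - thresh) / noise_scale)       # P(x, i), Eq. (9)
    load = p.sum(axis=0)                               # Load(X)_i, Eq. (10)
    cv2 = load.var() / load.mean() ** 2                # squared coefficient of variation
    return w_load * cv2                                # Eq. (11)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 16))
print(load_loss(x, rng.standard_normal((16, 8)), np.zeros((16, 8)), k=4))
```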
Initial Load Imbalance: To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices Wg and Wnoise to all zeros, which yields no signal and some noise.
Experiments: We trained a set of models with identical architecture (the MoE-256 model described in Appendix C), using different values of w_importance and w_load. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in Importance and Load, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.
Table 6: Experiments with different combinations of losses.
w_importance | w_load | Test Perplexity | CV(Importance(X)) | CV(Load(X)) | max(Load(X))/mean(Load(X))
[Per-row values for the first five columns were lost in extraction; the recoverable max(Load)/mean(Load) ratios for the six loss combinations tested are 17.80, 1.47, 1.15, 1.14, 1.37 and 1.07.]
Results: Results are reported in Table 6. All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of w_load had lower loads on the most overloaded expert.
B HIERARCHICAL MIXTURE OF EXPERTS
If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network.3 If the hierarchical MoE consists of a groups of b experts each, we denote the primary gating network by G_primary, the secondary gating networks by (G_1, G_2, ..., G_a), and the expert networks by (E_{0,0}, E_{0,1}, ..., E_{a,b}). The output of the MoE is given by:
y_H = Σ_{i=1}^{a} Σ_{j=1}^{b} G_primary(x)_i · G_i(x)_j · E_{i,j}(x)    (12)
Our metrics of expert utilization change to the following:
Importance_H(X)_{i,j} = Σ_{x∈X} G_primary(x)_i · G_i(x)_j    (13)
Load_H(X)_{i,j} = ( Load_primary(X)_i · Load_i(X^(i))_j ) / |X^(i)|    (14)
Load_primary and Load_i denote the Load functions for the primary gating network and the ith secondary gating network respectively. X^(i) denotes the subset of X for which G_primary(x)_i > 0.
It would seem simpler to let Load_H(X)_{i,j} = Load_i(X^(i))_j, but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
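A small sketch of the two-level output in Eq. (12) may help. Gate and expert networks are passed in as callables; the dense softmax gates in the usage example stand in for the sparse noisy-top-k gates of the real model (an assumption made for brevity).

```python
import numpy as np

def hierarchical_moe(x, primary_gate, secondary_gates, experts):
    """primary_gate(x) -> (a,); secondary_gates[i](x) -> (b,); experts[i][j](x) -> output."""
    y = 0.0
    for i, g_i in enumerate(primary_gate(x)):
        if g_i == 0.0:                      # sparse gating would skip whole groups here
            continue
        for j, g_ij in enumerate(secondary_gates[i](x)):
            if g_ij != 0.0:
                y = y + g_i * g_ij * experts[i][j](x)   # Eq. (12)
    return y

rng = np.random.default_rng(0)
a, b, d = 2, 3, 4                           # illustrative sizes
W = rng.standard_normal((a, b, d, d))
experts = [[(lambda x, w=W[i, j]: x @ w) for j in range(b)] for i in range(a)]
softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()
primary_gate = lambda x: softmax(x[:a])
secondary_gates = [(lambda x, s=i: softmax(x[s:s + b])) for i in range(a)]
print(hierarchical_moe(rng.standard_normal(d), primary_gate, secondary_gates, experts))
```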
C 1 BILLION WORD LANGUAGE MODELING BENCHMARK - EXPERIMENTAL DETAILS
8-MILLION-OPERATIONS-PER-TIMESTEP MODELS
Model Architecture: Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout (Zaremba et al., 2014) to the layer output, dropping each activation with probability DropProb, otherwise dividing by (1 − DropProb). After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow (He et al., 2015).
MoE Layer Architecture: Each expert in the MoE layer is a feed-forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains [512 × 1024] + [1024 × 512] = 1M parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section 2.1) with k = 4 for the ordinary MoE layers and k = 2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.
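For concreteness, here is a hedged PyTorch sketch of such an MoE layer with top-k routing. The class and variable names are ours, the gating noise is omitted, and the per-expert loop is written for clarity rather than the batched dispatch a real system would use.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=1024, n_experts=4, k=4):
        super().__init__()
        self.k = k
        self.w_g = nn.Linear(d_model, n_experts, bias=False)   # gating weights
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts))

    def forward(self, x):                        # x: (tokens, d_model)
        logits = self.w_g(x)                     # noise term omitted for brevity
        top_v, top_i = logits.topk(self.k, dim=-1)
        gates = F.softmax(top_v, dim=-1)         # softmax restricted to the top k
        y = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                sel = top_i[:, slot] == e        # tokens routed to expert e in this slot
                if sel.any():
                    y[sel] += gates[sel, slot].unsqueeze(1) * expert(x[sel])
        return torch.sigmoid(y)                  # sigmoid before dropout, as in the text

print(MoELayer()(torch.randn(6, 512)).shape)     # torch.Size([6, 512])
```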
3 We have not found the need for deeper hierarchies.
Computationally-Matched Baselines: The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:
• MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.
• MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.
• 4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.
• LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions (Sak et al., 2014). The next timestep of the LSTM receives the projected output. This is identical to one of the models published in (Jozefowicz et al., 2016). We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.
Training: The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section 3. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in (Jozefowicz et al., 2016). For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.
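The schedule can be written compactly as below; the base learning rate value is an assumption, since only the shape of the schedule (linear warm-up, then inverse-square-root decay) is specified above.

```python
def learning_rate(step, base_lr=1e-3, warmup=1000):
    """Linear warm-up for `warmup` steps, then decay proportional to 1/sqrt(step)."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * (warmup / step) ** 0.5   # continuous at step == warmup
```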
To ensure balanced expert utilization we set w_importance = 0.1 and w_load = 0.1, as described in Section 4 and Appendix A.
Results: We evaluate our model using perplexity on the holdout dataset used by (Chelba et al., 2013; Jozefowicz et al., 2016). We follow the standard procedure and sum over all the words, including the end-of-sentence symbol. Results are reported in Table 7. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of DropProb, and the computational efficiency.
Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).
Model | Test Perplexity (10 epochs) | Test Perplexity (final) | ops/timestep (millions) | #Params excluding embed. & softmax (millions) | Total #Params (billions) | Drop-Prob
Kneser-Ney 5-gram* | 67.6 | – | 0.00001 | – | 1.8 | –
LSTM-512-512* | 54.1 | – | 2.4 | 2.4 | 0.8 | 0.1
LSTM-1024-512* | 48.2 | – | 4.7 | 4.7 | 0.8 | 0.1
LSTM-2048-512* | 45.0 | 43.7 | 9.4 | 9.4 | 0.8 | 0.1
LSTM-2048-512 | 44.7 | – | 9.4 | 9.4 | 0.8 | 0.1
4xLSTM-512 | 46.0 | – | 8.4 | 8.4 | 0.8 | 0.1
MoE-1-Wide | 46.1 | – | 8.4 | 8.4 | 0.8 | 0.1
MoE-1-Deep | 45.7 | – | 8.4 | 8.4 | 0.8 | 0.1
MoE-4 | 45.0 | – | 8.4 | 8.4 | 0.8 | 0.1
MoE-32 | 39.7 | – | 8.4 | 37.8 | 0.9 | 0.1
MoE-256 | 35.7 | – | 8.6 | 272.9 | 1.1 | 0.1
MoE-256-h | 36.0 | – | 8.4 | 272.9 | 1.1 | 0.1
MoE-1024-h | 34.6 | – | 8.5 | 1079.0 | 1.9 | 0.2
MoE-4096-h | 34.1 | – | 8.9 | 4303.4 | 5.1 | 0.2
2xLSTM-8192-1024* | 34.7 | 30.6 | 151.0 | 151.0 | 1.8 | 0.25
MoE-34M | 31.3 | – | 33.8 | 4313.9 | 6.0 | 0.3
MoE-143M | 28.0 | – | 142.7 | 4371.1 | 6.0 | 0.4
[Observed per-GPU TFLOPS values, whose row assignment was lost in extraction: 0.61, 1.21, 1.07, 1.29, 1.29, 0.52, 0.87, 0.81, 0.89, 0.90, 0.74, 1.09, 1.22, 1.56.]
C.2 MORE EXPENSIVE MODELS
We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer, are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 (Sak et al., 2014). MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best DropProb for each model, and trained each model for 10 epochs.
The two models achieved test perplexity of 31.3 and 28.0 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table 7. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by 18%.
D 100 BILLION WORD GOOGLE NEWS CORPUS - EXPERIMENTAL DETAILS
Model Architecture: The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.
Training: Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.
We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:
The Adam optimizer (Kingma & Ba, 2015) keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set β1 = 0. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad (Duchi et al., 2010).
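A minimal NumPy sketch of this factored estimator follows, assuming an exponential moving average with decay beta2 (the decay value and epsilon are assumptions; the text does not state them here).

```python
import numpy as np

class FactoredSecondMoment:
    """Keep only row/column averages of squared gradients for a weight matrix."""
    def __init__(self, shape, beta2=0.999, eps=1e-30):
        self.r = np.zeros(shape[0])   # row-wise averages
        self.c = np.zeros(shape[1])   # column-wise averages
        self.beta2, self.eps = beta2, eps

    def update(self, grad):
        g2 = grad ** 2
        self.r = self.beta2 * self.r + (1 - self.beta2) * g2.mean(axis=1)
        self.c = self.beta2 * self.c + (1 - self.beta2) * g2.mean(axis=0)
        # Outer product of the two vectors, divided by the mean of either one
        # (the two means coincide, since both average the same squared gradients).
        return np.outer(self.r, self.c) / (self.r.mean() + self.eps)

v = FactoredSecondMoment((3, 4))
est = v.update(np.random.randn(3, 4))
print(est.shape)   # (3, 4); storage cost is O(rows + cols), not O(rows * cols)
```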
Table 8: Model comparison on 100 Billion Word Google News Dataset
Model | Test Perplexity (.1 epochs) | Test Perplexity (1 epoch) | ops/timestep (millions) | #Params excluding embed. & softmax (millions) | Total #Params (billions) | TFLOPS per GPU (observed)
Kneser-Ney 5-gram | 67.1 | 45.3 | 0.00001 | – | 76.0 | –
4xLSTM-512 | 54.5 | 47.0 | 8.4 | 8.4 | 0.1 | 1.23
MoE-32 | 48.5 | 40.4 | 8.4 | 37.8 | 0.1 | 0.83
MoE-256-h | 42.8 | 35.3 | 8.4 | 272.9 | 0.4 | 1.11
MoE-1024-h | 40.3 | 32.7 | 8.5 | 1079.0 | 1.2 | 1.14
MoE-4096-h | 38.9 | 30.9 | 8.6 | 4303.4 | 4.4 | 1.07
MoE-16384-h | 38.2 | 29.7 | 8.8 | 17201.0 | 17.3 | 0.96
MoE-65536-h | 38.2 | 28.9 | 9.2 | 68791.0 | 68.9 | 0.72
MoE-131072-h | 39.8 | 29.2 | 9.7 | 137577.6 | 137.7 | 0.30
Results: We evaluate our model using perplexity on a holdout dataset. Results are reported in Table 8. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE
model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing (Kneser & Ney, 1995).4
E MACHINE TRANSLATION - EXPERIMENTAL DETAILS
Model Architecture for Single Language Pair MoE Models: Our model is a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention5. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow (He et al., 2015). Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as "wordpieces") (Schuster & Nakajima, 2012) for inputs and outputs in our system.
We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in (Wu et al., 2016).
We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use k = 4 and the hierarchical MoE models use k = 2 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed-forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains [512 × 2048] + [2048 × 512] = 2M parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix F.
Model Architecture for Multilingual MoE Model: We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section 2.1, not the scheme from Appendix F. The MoE layers in the encoder and decoder are non-hierarchical MoEs with n = 512 experts, and k = 2. Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.
Training: We trained our networks using the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to (Wu et al., 2016), we applied dropout (Zaremba et al., 2014) to the output of all embedding, LSTM and MoE layers, using DropProb = 0.4. Training was done synchronously on a cluster of up to 64 GPUs as described in section 3. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.
To ensure balanced expert utilization we set w_importance = 0.01 and w_load = 0.01, as described in Section 4 and Appendix A.
Metrics: We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in (Luong et al., 2015a).
4 While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.
5 For performance reasons, we use a slightly different attention function from the one described in (Wu et al., 2016); see Appendix G.
Results: Tables 2, 3 and 4 in Section 5.3 show comparisons of our results to other published methods. Figure 4 shows test perplexity as a function of the number of words in the (training data's) source sentences processed, for models with different numbers of experts. As can be seen from the figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.
Figure 4: Perplexity on WMT'14 En→Fr (left) and Google Production En→Fr (right) datasets as a function of number of words processed. The large differences between models at the beginning of training are due to different batch sizes. All models incur the same computational budget (85M ops/timestep) except the one with no experts.
We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table 9. For example, one expert is used when the indefinite article "a" introduces the direct object in a verb phrase indicating importance or leadership.
Table 9: Contexts corresponding to a few of the 2048 experts in the MoE layer in the encoder portion of the WMT'14 En→Fr translation model. For each expert i, we sort the inputs in a training batch in decreasing order of G(x)_i, and show the words surrounding the corresponding positions in the input sentences.
Expert 381:
... with researchers , ... | ... to innovation . ... | ... tics researchers . ... | ... the generation of ... | ... technology innovations is ... | ... technological innovations , ... | ... support innovation throughout ... | ... role innovation will ... | ... research scienti st ... | ... promoting innovation where ...
Expert 752:
... plays a core ... | ... plays a critical ... | ... provides a legislative ... | ... play a leading ... | ... assume a leadership ... | ... plays a central ... | ... taken a leading ... | ... established a reconciliation ... | ... played a vital ... | ... have a central ...
Expert 2004:
... with rapidly growing ... | ... under static conditions ... | ... to swift ly ... | ... to dras tically ... | ... the rapid and ... | ... the fast est ... | ... the Quick Method ... | ... rec urrent ) ... | ... provides quick access ... | ... of volatile organic ...
F STRICTLY BALANCED GATING
Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function, which we describe below.
Recall that we define the softmax gating function to be:
G_σ(x) = Softmax(x · W_g)    (15)
Sparse Gating (alternate formulation): To obtain a sparse gating vector, we multiply G_σ(x) component-wise with a sparse mask M(G_σ(x)) and normalize the output. The mask itself is a function of G_σ(x) and specifies which experts are assigned to each input example:
G(x)_i = ( G_σ(x)_i · M(G_σ(x))_i ) / ( Σ_{j=1}^{n} G_σ(x)_j · M(G_σ(x))_j )    (16)
Top-K Mask: To implement top-k gating in this formulation, we would let M(v) = TopK(v, k), where:
TopK(v, k)_i = { 1 if v_i is in the top k elements of v; 0 otherwise }    (17)
Batchwise Mask: To force each expert to receive the exact same number of examples, we introduce an alternative mask function, M_batchwise(X, m), which operates over batches of input vectors. Instead of keeping the top k values per example, we keep the top m values per expert across the training batch, where m = k|X|/n:
M_batchwise(X, m)_{j,i} = { 1 if X_{j,i} is in the top m values for expert i; 0 otherwise }    (18)
As our experiments suggest, and as also observed in (Ioffe & Szegedy, 2015), using a batchwise function during training (such as M_batchwise) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector T of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time:
M_threshold(x, T)_i = { 1 if x_i > T_i; 0 otherwise }    (19)
To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical.
L_batchwise(X, T, m) = Σ_{j=1}^{|X|} Σ_{i=1}^{n} ( M_threshold(X_j, T)_i − M_batchwise(X, m)_{j,i} ) · (X_{j,i} − T_i)    (20)
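A NumPy sketch of Eqs. (18)-(20) follows; the batch size, expert count, and value of k are arbitrary assumptions for illustration.

```python
import numpy as np

def batchwise_mask(X, m):
    """Eq. (18): keep, for each expert (column), its top-m scores across the batch."""
    mask = np.zeros_like(X)
    top_rows = np.argsort(-X, axis=0)[:m]          # m best examples per expert
    mask[top_rows, np.arange(X.shape[1])] = 1.0
    return mask

def threshold_mask(X, T):
    """Eq. (19): per-expert thresholds used at inference time."""
    return (X > T).astype(X.dtype)

def threshold_loss(X, T, m):
    """Eq. (20): non-negative, and zero exactly when the two masks agree."""
    return np.sum((threshold_mask(X, T) - batchwise_mask(X, m)) * (X - T))

X = np.random.rand(8, 4)                           # batch of 8 gating vectors, n = 4 experts
m = 2 * 8 // 4                                     # m = k|X|/n with k = 2
print(batchwise_mask(X, m).sum(axis=0))            # every expert gets exactly m examples
print(threshold_loss(X, np.full(4, 0.5), m))
```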
G ATTENTION FUNCTION
The attention mechanism described in GNMT (Wu et al., 2016) involves a learned "Attention Function" A(x_i, y_j) which takes a "source vector" x_i and a "target vector" y_j, and must be computed for every source time step i and target time step j. In GNMT, the attention function is implemented as a feed-forward neural network with a hidden layer of size n. It can be expressed as:
A_GNMT(x_i, y_j) = Σ_{d=1}^{n} V_d · tanh((x_i U)_d + (y_j W)_d)    (21)
where U and W are trainable weight matrices and V is a trainable weight vector.
For performance reasons, in our models, we used a slightly different attention function:
A(x_i, y_j) = Σ_{d=1}^{n} V_d · tanh((x_i U)_d) · tanh((y_j W)_d)    (22)
With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions.
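The factorization is easy to see in code. A NumPy sketch contrasting Eq. (21) with Eq. (22), with illustrative shapes:

```python
import numpy as np

def attention_gnmt(Xs, Yt, U, W, V):
    # Eq. (21): tanh of a sum does not factorize, so all (i, j) pairs are materialized.
    return np.einsum('d,ijd->ij', V, np.tanh((Xs @ U)[:, None, :] + (Yt @ W)[None, :, :]))

def attention_factored(Xs, Yt, U, W, V):
    # Eq. (22): tanh(x_i U) and tanh(y_j W) factorize; one matrix product covers all pairs.
    return (np.tanh(Xs @ U) * V) @ np.tanh(Yt @ W).T

I, J, d, n = 5, 7, 8, 16                          # assumed sizes
rng = np.random.default_rng(0)
Xs, Yt = rng.standard_normal((I, d)), rng.standard_normal((J, d))
U, W, V = rng.standard_normal((d, n)), rng.standard_normal((d, n)), rng.standard_normal(n)
print(attention_gnmt(Xs, Yt, U, W, V).shape, attention_factored(Xs, Yt, U, W, V).shape)
```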
| {
"id": "1502.03167"
} |
1701.01036 | Demystifying Neural Style Transfer | Neural Style Transfer has recently demonstrated very exciting results which
catches eyes in both academia and industry. Despite the amazing results, the
principle of neural style transfer, especially why the Gram matrices could
represent style remains unclear. In this paper, we propose a novel
interpretation of neural style transfer by treating it as a domain adaptation
problem. Specifically, we theoretically show that matching the Gram matrices of
feature maps is equivalent to minimize the Maximum Mean Discrepancy (MMD) with
the second order polynomial kernel. Thus, we argue that the essence of neural
style transfer is to match the feature distributions between the style images
and the generated images. To further support our standpoint, we experiment with
several other distribution alignment methods, and achieve appealing results. We
believe this novel interpretation connects these two important research fields,
and could enlighten future research. | http://arxiv.org/pdf/1701.01036 | Yanghao Li, Naiyan Wang, Jiaying Liu, Xiaodi Hou | cs.CV, cs.LG, cs.NE | Accepted by IJCAI 2017 | null | cs.CV | 20170104 | 20170701 |
# Demystifying Neural Style Transfer
Yanghao Li† Naiyan Wang‡ Jiaying Liu† Xiaodi Hou‡
†Institute of Computer Science and Technology, Peking University ‡TuSimple
lyttonhao@pku.edu.cn winsty@gmail.com liujiaying@pku.edu.cn xiaodi.hou@gmail.com
# Abstract
Neural Style Transfer [Gatys et al., 2016] has recently demonstrated very exciting results which catch eyes in both academia and industry. Despite the amazing results, the principle of neural style transfer, especially why the Gram matrices could represent style, remains unclear. In this paper, we propose a novel interpretation of neural style transfer by treating it as a domain adaptation problem. Specifically, we theoretically show that matching the Gram matrices of feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with the second order polynomial kernel. Thus, we argue that the essence of neural style transfer is to match the feature distributions between the style images and the generated images. To further support our standpoint, we experiment with several other distribution alignment methods, and achieve appealing results. We believe this novel interpretation connects these two important research fields, and could enlighten future research.
why the Gram matrix can represent artistic style still remains a mystery.
In this paper, we propose a novel interpretation of neural style transfer by casting it as a special domain adaptation [Beijbom, 2012; Patel et al., 2015] problem. We theoretically prove that matching the Gram matrices of the neural activations can be seen as minimizing a specific Maximum Mean Discrepancy (MMD) [Gretton et al., 2012a]. This reveals that neural style transfer is intrinsically a process of distribution alignment of the neural activations between images. Based on this illuminating analysis, we also experiment with other distribution alignment methods, including MMD with different kernels and a simplified moment matching method. These methods achieve diverse but all reasonable style transfer results. Specifically, a transfer method using MMD with a linear kernel achieves comparable visual results yet with lower complexity. Thus, the second order interaction in the Gram matrix is not a must for style transfer. Our interpretation provides a promising direction for designing style transfer methods with different visual results. To summarize, our contributions are as follows:
1 Introduction Transferring the style from one image to another image is an interesting yet difficult problem. There have been many efforts to develop efficient methods for automatic style transfer [Hertzmann et al., 2001; Efros and Freeman, 2001; Efros and Leung, 1999; Shih et al., 2014; Kwatra et al., 2005]. Recently, Gatys et al. proposed a seminal work [Gatys et al., 2016]: it captures the style of artistic images and transfers it to other images using Convolutional Neural Networks (CNN). This work formulated the problem as finding an image that matches both the content and style statistics based on the neural activations of each layer in a CNN. It achieved impressive results, and several follow-up works improved upon this innovative approach [Johnson et al., 2016; Ulyanov et al., 2016; Ruder et al., 2016; Ledig et al., 2016]. Despite the fact that this work has drawn much attention, the fundamental element of the style representation, the Gram matrix in [Gatys et al., 2016], is not fully explained. The reason
1. First, we demonstrate that matching Gram matrices in neural style transfer [Gatys et al., 2016] can be reformulated as minimizing MMD with the second order polynomial kernel.
2. Second, we extend the original neural style transfer with different distribution alignment methods based on our novel interpretation.
2 Related Work In this section, we briefly review some closely related works and the key concept, MMD, used in our interpretation.
Style Transfer Style transfer is an active topic in both academia and industry. Traditional methods mainly focus on non-parametric patch-based texture synthesis and transfer, which resamples pixels or patches from the original source texture images [Hertzmann et al., 2001; Efros and Freeman, 2001; Efros and Leung, 1999; Liang et al., 2001]. Different methods were proposed to improve the quality of the patch-based synthesis and constrain the structure of the target image. For example, the image quilting algorithm based on dynamic programming was proposed to find optimal texture
*Corresponding author
boundaries in [Efros and Freeman, 2001]. A Markov Random Field (MRF) was exploited to preserve global texture structures in [Frigo et al., 2016]. However, these non-parametric methods suffer from a fundamental limitation: they only use the low-level features of the images for transfer.
Recently, neural style transfer [Gatys et al., 2016] has demonstrated remarkable results for image stylization. It fully takes advantage of the powerful representation of deep Convolutional Neural Networks (CNN). This method used Gram matrices of the neural activations from different layers of a CNN to represent the artistic style of an image. It then used an iterative optimization method to generate a new image from white noise by matching the neural activations with the content image and the Gram matrices with the style image. This novel technique has attracted many follow-up works on different aspects of improvements and applications. To speed up the iterative optimization process in [Gatys et al., 2016], Johnson et al. [Johnson et al., 2016] and Ulyanov et al. [Ulyanov et al., 2016] trained a feed-forward generative network for fast neural style transfer. To improve the transfer results of [Gatys et al., 2016], different complementary schemes were proposed, including spatial constraints [Selim et al., 2016], semantic guidance [Champandard, 2016] and Markov Random Field (MRF) priors [Li and Wand, 2016]. There are also extension works applying neural style transfer to other applications. Ruder et al. [Ruder et al., 2016] incorporated temporal consistency terms by penalizing deviations between frames for video style transfer. Selim et al. [Selim et al., 2016] proposed novel spatial constraints through a gain map for portrait painting transfer. Although these methods further improve over the original neural style transfer, they all ignore the fundamental question in neural style transfer: why can the Gram matrices represent the artistic style? This vagueness of understanding limits further research on neural style transfer.
Domain Adaptation Domain adaptation belongs to the area of transfer learning [Pan and Yang, 2010]. It aims to transfer a model that is learned on the source domain to the unlabeled target domain. The key component of domain adaptation is to measure and minimize the difference between the source and target distributions. The most common discrepancy metric is Maximum Mean Discrepancy (MMD) [Gretton et al., 2012a], which measures the difference of sample means in a Reproducing Kernel Hilbert Space. It is a popular choice in domain adaptation works [Tzeng et al., 2014; Long et al., 2015; Long et al., 2016]. Besides MMD, Sun et al. [Sun et al., 2016] aligned the second order statistics by whitening the data in the source domain and then re-correlating to the target domain. In [Li et al., 2017], Li et al. proposed a parameter-free deep adaptation method by simply modulating the statistics in all Batch Normalization (BN) layers.
Maximum Mean Discrepancy Suppose there are two sets of samples X = {x_i}_{i=1}^{n} and Y = {y_j}_{j=1}^{m}, where x_i and y_j are generated from distributions p and q, respectively. Maximum Mean Discrepancy (MMD) is a popular test statistic for the two-sample testing problem, where acceptance or rejection decisions are made for the null hypothesis p = q [Gretton
et al., 2012a]. Since the population MMD vanishes if and only if p = q, the MMD statistic can be used to measure the difference between two distributions. Specifically, we calculate MMD as the difference between the mean embeddings of the two sets of samples. Formally, the squared MMD is defined as:
MMD²[X, Y] = ‖ E_x[φ(x)] − E_y[φ(y)] ‖²
= ‖ (1/n) Σ_{i=1}^{n} φ(x_i) − (1/m) Σ_{j=1}^{m} φ(y_j) ‖²
= (1/n²) Σ_{i=1}^{n} Σ_{i′=1}^{n} φ(x_i)ᵀφ(x_{i′}) + (1/m²) Σ_{j=1}^{m} Σ_{j′=1}^{m} φ(y_j)ᵀφ(y_{j′}) − (2/nm) Σ_{i=1}^{n} Σ_{j=1}^{m} φ(x_i)ᵀφ(y_j),    (1)
where φ(·) is the explicit feature mapping function of MMD. Applying the associated kernel function k(x, y) = ⟨φ(x), φ(y)⟩, Eq. 1 can be expressed in kernel form:
MMD²[X, Y] = (1/n²) Σ_{i=1}^{n} Σ_{i′=1}^{n} k(x_i, x_{i′}) + (1/m²) Σ_{j=1}^{m} Σ_{j′=1}^{m} k(y_j, y_{j′}) − (2/nm) Σ_{i=1}^{n} Σ_{j=1}^{m} k(x_i, y_j).    (2)
The kernel function k(·, ·) implicitly defines a mapping to a higher dimensional feature space.
3 Understanding Neural Style Transfer In this section, we first theoretically demonstrate that matching Gram matrices is equivalent to minimizing a specific form of MMD. Then, based on this interpretation, we extend the original neural style transfer with different distribution alignment methods.
Before explaining our observation, we first briefly review the original neural style transfer approach [Gatys et al., 2016]. The goal of style transfer is to generate a stylized image x* given a content image x_c and a reference style image x_s. The feature maps of x*, x_c and x_s in layer l of a CNN are denoted by F^l ∈ R^{N_l×M_l}, P^l ∈ R^{N_l×M_l} and S^l ∈ R^{N_l×M_l} respectively, where N_l is the number of feature maps in layer l and M_l is the height times the width of the feature map.
In [Gatys et al., 2016], neural style transfer iteratively generates x* by optimizing a content loss and a style loss:
L = α L_content + β L_style,    (3)
where α and β are the weights for the content and style losses, and L_content is defined by the squared error between the feature maps of a specific layer l for x* and x_c:
L_content = (1/2) Σ_{i=1}^{N_l} Σ_{j=1}^{M_l} (F^l_ij − P^l_ij)²,    (4)
and L_style is the sum of the style losses L^l_style in different layers:
L_style = Σ_l w_l L^l_style,    (5)
where w_l is the weight of the loss in layer l, and L^l_style is defined by the squared error between the feature correlations expressed by the Gram matrices of x* and x_s:
L^l_style = 1/(4 N_l² M_l²) Σ_{i=1}^{N_l} Σ_{j=1}^{N_l} (G^l_ij − A^l_ij)²,    (6)
where the Gram matrix G^l ∈ R^{N_l×N_l} is the inner product between the vectorized feature maps of x* in layer l:
G^l_ij = Σ_{k=1}^{M_l} F^l_ik F^l_jk,    (7)
and similarly A^l is the Gram matrix corresponding to S^l.
3.1 Reformulation of the Style Loss In this section, we reformulate the style loss L_style in Eq. 6. By expanding the Gram matrices in Eq. 6, we can get the formulation of Eq. 8, where f^l_{·k} and s^l_{·k} denote the k-th columns of F^l and S^l:

L^l_style = 1/(4 N_l² M_l²) Σ_{i=1}^{N_l} Σ_{j=1}^{N_l} (G^l_ij − A^l_ij)²
= 1/(4 N_l² M_l²) Σ_i Σ_j ( (Σ_k F^l_ik F^l_jk)² + (Σ_k S^l_ik S^l_jk)² − 2 (Σ_k F^l_ik F^l_jk)(Σ_k S^l_ik S^l_jk) )
= 1/(4 N_l² M_l²) Σ_{k1=1}^{M_l} Σ_{k2=1}^{M_l} ( ((f^l_{·k1})ᵀ f^l_{·k2})² + ((s^l_{·k1})ᵀ s^l_{·k2})² − 2 ((f^l_{·k1})ᵀ s^l_{·k2})² )    (8)

By using the second order degree polynomial kernel k(x, y) = (xᵀy)², Eq. 8 can be represented as:

L^l_style = 1/(4 N_l² M_l²) Σ_{k1=1}^{M_l} Σ_{k2=1}^{M_l} ( k(f^l_{·k1}, f^l_{·k2}) + k(s^l_{·k1}, s^l_{·k2}) − 2 k(f^l_{·k1}, s^l_{·k2}) ) = 1/(4 N_l²) MMD²[F^l, S^l],    (9)
where F^l is the feature set of x* in which each sample is a column of F^l, and S^l corresponds to the style image x_s. In this way, the activations at each position of a feature map are considered as individual samples. Consequently, the style loss ignores the positions of the features, which is desired for style transfer. In conclusion, the above reformulations suggest two important findings:
1. The style of an image can be intrinsically represented by feature distributions in different layers of a CNN.
2. The style transfer can be seen as a distribution alignment process from the content image to the style image.
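The equivalence in Eq. 9 is also easy to verify numerically. A minimal NumPy sketch, treating each spatial position of a feature map as one sample (channel and position counts are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 25                       # N_l channels, M_l spatial positions
Fl, Sl = rng.standard_normal((N, M)), rng.standard_normal((N, M))

gram = lambda F: F @ F.T           # Eq. (7)
loss_gram = ((gram(Fl) - gram(Sl)) ** 2).sum() / (4 * N**2 * M**2)   # Eq. (6)

k = lambda A, B: (A.T @ B) ** 2    # second order polynomial kernel on columns
mmd2 = (k(Fl, Fl).sum() + k(Sl, Sl).sum() - 2 * k(Fl, Sl).sum()) / M**2
loss_mmd = mmd2 / (4 * N**2)       # Eq. (9)

print(np.isclose(loss_gram, loss_mmd))   # True
```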
# 3.2 Different Adaptation Methods for Neural Style Transfer
Our interpretation reveals that neural style transfer can be seen as a problem of distribution alignment, which is also at the core of domain adaptation. If we consider the style of one image in a certain layer of a CNN as a "domain", style transfer can also be seen as a special domain adaptation problem. The specialty of this problem lies in that we treat the feature at each position of a feature map as one individual data sample, instead of that in the traditional domain adaptation problem
in which we treat each image as one data sample. (E.g., the feature map of the last convolutional layer in the VGG-19 model is of size 14 × 14, so we have 196 samples in total in this "domain".)
Inspired by the studies of domain adaptation, we extend neural style transfer with different adaptation methods in this subsection.
MMD with Different Kernel Functions As shown in Eq. 9, matching Gram matrices in neural style transfer can be seen as an MMD process with a second order polynomial kernel. It is very natural to apply other kernel functions for MMD in style transfer. First, if using the MMD statistic to measure the style discrepancy, the style loss can be defined as:
L^l_style = (1/Z^l_k) MMD²[F^l, S^l]
= (1/(Z^l_k M_l²)) Σ_{k1=1}^{M_l} Σ_{k2=1}^{M_l} ( k(f^l_{·k1}, f^l_{·k2}) + k(s^l_{·k1}, s^l_{·k2}) − 2 k(f^l_{·k1}, s^l_{·k2}) ),    (10)
where Z^l_k is the normalization term corresponding to the scale of the feature map in layer l and the choice of kernel function. Theoretically, different kernel functions implicitly map features to different higher dimensional spaces. Thus, we believe that different kernel functions should capture different aspects of a style. We adopt the following three popular kernel functions in our experiments: (1) Linear kernel: k(x, y) = xᵀy; (2) Polynomial kernel: k(x, y) = (xᵀy + c)^d; (3) Gaussian kernel: k(x, y) = exp(−‖x − y‖² / (2σ²)). For the polynomial kernel, we only use the version with d = 2. Note that matching Gram matrices is equivalent to the polynomial kernel with c = 0 and d = 2. For the Gaussian kernel, we adopt the unbiased estimate of MMD [Gretton et al., 2012b], which samples M_l pairs in Eq. 10 and thus can be computed with linear complexity.
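A sketch of Eq. 10 with swappable kernels is given below. For brevity the Gaussian kernel is evaluated over all pairs rather than with the linear-time sampled estimator described above, its bandwidth is fixed to the mean squared distance, and the normalization Z^l_k is left to the caller.

```python
import numpy as np

def mmd2(F, S, kernel):
    """Squared MMD between the columns of F and S, each (channels, positions)."""
    M = F.shape[1]
    return (kernel(F, F).sum() + kernel(S, S).sum() - 2 * kernel(F, S).sum()) / M**2

linear = lambda A, B: A.T @ B                      # k(x, y) = x^T y
poly = lambda A, B, c=0.0: (A.T @ B + c) ** 2      # k(x, y) = (x^T y + c)^2

def gaussian(A, B):
    d2 = ((A.T[:, None, :] - B.T[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * d2.mean()))           # sigma^2 set to the mean squared distance

rng = np.random.default_rng(0)
F, S = rng.standard_normal((8, 25)), rng.standard_normal((8, 25))
print(mmd2(F, S, linear), mmd2(F, S, poly), mmd2(F, S, gaussian))
```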
BN Statistics Matching In [Li et al., 2017], the authors found that the statistics (i.e., mean and variance) of Batch Normalization (BN) layers contain the traits of different domains. Inspired by this observation, they utilized separate BN statistics for different domains. This simple operation aligns the different domain distributions effectively. As a special domain adaptation problem, we believe that the BN statistics of a certain layer can also represent the style. Thus, we construct another style loss by aligning the BN statistics (mean and standard deviation) of the feature maps between the two images:
L^l_style = (1/N_l) Σ_{i=1}^{N_l} ( (µ_i^{F^l} − µ_i^{S^l})² + (σ_i^{F^l} − σ_i^{S^l})² ),    (11)
where µ_i^{F^l} and σ_i^{F^l} are the mean and standard deviation of the i-th feature channel among all the positions of the feature map in layer l for image x*:

µ_i^{F^l} = (1/M_l) Σ_{j=1}^{M_l} F^l_ij,   (σ_i^{F^l})² = (1/M_l) Σ_{j=1}^{M_l} (F^l_ij − µ_i^{F^l})²,    (12)

and µ_i^{S^l}, σ_i^{S^l} correspond to the style image x_s. The aforementioned style loss functions are all differentiable, so the style matching problem can be solved by back-propagation iteratively.
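A minimal NumPy sketch of Eqs. (11)-(12), with feature maps stored as (channels, positions):

```python
import numpy as np

def bn_style_loss(F, S):
    """BN-statistics style loss for one layer; F, S: (channels, positions)."""
    mu_f, mu_s = F.mean(axis=1), S.mean(axis=1)   # per-channel means, Eq. (12)
    sd_f, sd_s = F.std(axis=1), S.std(axis=1)     # per-channel standard deviations
    return ((mu_f - mu_s) ** 2 + (sd_f - sd_s) ** 2).mean()   # Eq. (11)

rng = np.random.default_rng(0)
print(bn_style_loss(rng.standard_normal((8, 25)), rng.standard_normal((8, 25))))
```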
# 4 Results
In this section, we briefly introduce some implementation details and present results of our extended neural style transfer methods. Furthermore, we also show the results of fusing different neural style transfer methods, which combine different style losses. In the following, we refer to the four extended style transfer methods introduced in Sec. 3.2 as linear, poly, Gaussian and BN, respectively. The images in the experiments are collected from the public implementations of neural style transfer123.
Implementation Details In the implementation, we use the VGG-19 network [Simonyan and Zisserman, 2015] following the choice in [Gatys et al., 2016]. We also adopt the relu4_2 layer for the content loss, and relu1_1, relu2_1, relu3_1, relu4_1, relu5_1 for the style loss. The default weight factor w_l is set to 1.0 if not specified otherwise. The target image x* is initialized randomly and optimized iteratively until the relative change between successive iterations is under 0.5%. The maximum number of iterations is set to 1000. For the method with Gaussian kernel MMD, the kernel bandwidth σ² is fixed as the mean of the squared l2 distances of the sampled pairs, since
1 https://github.com/dmlc/mxnet/tree/master/example/neural-style
it does not noticeably affect the visual results. Our implementation is based on the MXNet [Chen et al., 2016] implementation1, which reproduces the results of the original neural style transfer [Gatys et al., 2016].
Since the scales of the gradients of the style loss differ across methods, and the weights α and β in Eq. 3 affect the results of style transfer, we fix some factors to make a fair comparison. Specifically, we set α = 1 because the content losses are the same among different methods. Then, for each method, we first manually select a proper β′ such that the gradients on x* from the style loss are of the same order of magnitude as those from the content loss. Thus, we can manipulate a balance factor γ (β = γβ′) to trade off between content and style matching.
# 4.1 Different Style Representations
Figure 1: Style reconstructions of different methods in five layers, respectively. Each row corresponds to one method, and the reconstruction results are obtained by using only the style loss L_style with α = 0. We also reconstruct different style representations from different subsets of layers of the VGG network. For example, layer 3 contains the style loss of the first 3 layers (w1 = w2 = w3 = 1.0 and w4 = w5 = 0.0).
2 https://github.com/jcjohnson/neural-style
3 https://github.com/jcjohnson/fast-neural-style
To validate that the extended neural style transfer methods can capture the style representation of an artistic image,
(8)
(a) Content / Style (b) γ = 0.1 (c) γ = 0.2 (d) γ = 1.0 (e) γ = 5.0 (f) γ = 10.0
Figure 2: Results of the four methods (linear, poly, Gaussian and BN) with different balance factor γ. Larger γ means more emphasis on the style loss.
we first visualize the style reconstruction results of different methods using only the style loss in Fig. 1. Moreover, Fig. 1 also compares the style representations of different layers. On one hand, for a specific method (one row), the results show that different layers capture different levels of style: the textures in the top layers usually have larger granularity than those in the bottom layers. This is reasonable because each neuron in the top layers has a larger receptive field and thus the ability to capture more global textures. On the other hand, for a specific layer, Fig. 1 also demonstrates that the style captured by different methods differs. For example, in top layers, the textures captured by MMD with a linear kernel are composed of thick strokes, while the textures captured by MMD with a polynomial kernel are more fine-grained.
# 4.2 Result Comparisons
Effect of the Balance Factor We first explore the effect of the balance factor between the content loss and the style loss by varying the weight γ. Fig. 2 shows the results of the four transfer methods with γ varying from 0.1 to 10.0. As intended, the global color information in the style image is successfully transferred to the content image, and the results with smaller γ preserve more content details, as shown in Fig. 2(b) and Fig. 2(c). When γ becomes larger, more stylized textures are incorporated into the results. For example, Fig. 2(e) and Fig. 2(f) have illumination and textures much more similar to the style image, while Fig. 2(d) shows a balanced result between content and style. Thus, users can trade off between content and style by varying γ.
(a) Content / Style (b) linear (c) poly (d) Gaussian (e) BN
Figure 3: Visual results of several style transfer methods, including linear, poly, Gaussian and BN. The balance factors γ in the six examples are 2.0, 2.0, 2.0, 5.0, 5.0 and 5.0, respectively.
(a) Content / Style (b) (0.9, 0.1) (c) (0.7, 0.3) (d) (0.5, 0.5) (e) (0.3, 0.7) (f) (0.1, 0.9)
Figure 4: Results of two fusion methods: BN + poly and linear + Gaussian. The top two rows are the results of the first fusion method and the bottom two rows correspond to the second one. Each column shows the results for one balance weight between the two methods. γ is set to 5.0.
Comparisons of Different Transfer Methods Fig. 3 presents the results of various pairs of content and style images with different transfer methods4. Similar to matching Gram matrices, which is equivalent to the poly method, the other three methods can also transfer satisfying styles from the specified style images. This empirically supports the correctness of our interpretation of neural style transfer: style transfer is essentially a domain adaptation problem, which aligns feature distributions. In particular, when the weight on the style loss becomes higher (namely, larger γ), the differences among the four methods grow larger. This indicates that these methods implicitly capture different aspects of style, as has also been shown in Fig. 1. Since these methods have their own unique properties, they provide more choices for users when stylizing a content image. For example, linear achieves results comparable to the other methods, yet requires lower computational complexity.
Fusion of Different Neural Style Transfer Methods Since we have several different neural style transfer methods, we propose to combine them to produce new transfer results. Fig. 4 demonstrates the fusion results of two combinations (linear + Gaussian and poly + BN). Each row presents the results with a different balance between the two methods. For example, Fig. 4(b) in the first two rows emphasizes BN more, and Fig. 4(f) emphasizes poly more. The results in
the middle columns show the interpolation between these two methods. We can see that the styles of different methods are blended well using our method.
5 Conclusion Despite the great success of neural style transfer, the rationale behind it was far from clear. The vital "trick" for style transfer is to match the Gram matrices of the features in a layer of a CNN. Nevertheless, subsequent literature on neural style transfer directly improved upon it without investigating it in depth. In this paper, we present a timely explanation and interpretation of it. First, we theoretically prove that matching the Gram matrices is equivalent to a specific Maximum Mean Discrepancy (MMD) process. Thus, the style information in neural style transfer is intrinsically represented by the distributions of activations in a CNN, and style transfer can be achieved by distribution alignment. Moreover, we exploit several other distribution alignment methods, and find that these methods all yield promising transfer results. Thus, we justify the claim that neural style transfer is essentially a special domain adaptation problem, both theoretically and empirically. We believe this interpretation provides a new lens through which to re-examine the style transfer problem, and will inspire more exciting works in this research area.
4 More results can be found at http://www.icst.pku.edu.cn/struct/Projects/mmdstyle/result-1000/show-full.html
Acknowledgement This work was supported by the National Natural Science Foundation of China under Contract 61472011.
# References [Beijbom, 2012] Oscar Beijbom.
for computer vision applications. arXiv:1211.4860, 2012. Domain adaptations arXiv preprint
[Champandard, 2016] Alex J Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016.
[Chen et al., 2016] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. NIPS Workshop on Machine Learning Systems, 2016.
[Efros and Freeman, 2001] Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In SIGGRAPH, 2001.
[Efros and Leung, 1999] Alexei A Efros and Thomas K Leung. Texture synthesis by non-parametric sampling. In ICCV, 1999.
[Frigo et al., 2016] Oriel Frigo, Neus Sabater, Julie Delon, and Pierre Hellier. Split and match: Example-based adaptive patch sampling for unsupervised style transfer. In CVPR, 2016.
[Gatys et al., 2016] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
[Gretton et al., 2012a] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773, 2012.
[Gretton et al., 2012b] Arthur Gretton, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano Pontil, Kenji Fukumizu, and Bharath K Sriperumbudur. Optimal kernel choice for large-scale two-sample tests. In NIPS, 2012.
[Hertzmann et al., 2001] Aaron Hertzmann, Charles E Jacobs, Nuria Oliver, Brian Curless, and David H Salesin. Image analogies. In SIGGRAPH, 2001.
[Johnson et al., 2016] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[Kwatra et al., 2005] Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra. Texture optimization for example-based synthesis. ACM Transactions on Graphics, 24(3):795-802, 2005.
[Ledig et al., 2016] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
[Li and Wand, 2016] Chuan Li and Michael Wand. Combining Markov random fields and convolutional neural networks for image synthesis. In CVPR, 2016.
[Li et al., 2017] Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. ICLRW, 2017.
[Liang et al., 2001] Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Real-time texture synthesis by patch-based sampling. ACM Transactions on Graphics, 20(3):127-150, 2001.
[Long et al., 2015] Mingsheng Long, Jianmin Wang, and Michael I Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015.
[Long et al., 2016] Mingsheng Long, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. In NIPS, 2016.
[Pan and Yang, 2010] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2010.
[Patel et al., 2015] Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53-69, 2015.
[Ruder et al., 2016] Manuel Ruder, Alexey Dosovitskiy, and Thomas Brox. Artistic style transfer for videos. In GCPR, 2016.
[Selim et al., 2016] Ahmed Selim, Mohamed Elgharib, and Linda Doyle. Painting style transfer for head portraits using convolutional neural networks. ACM Transactions on Graphics, 35(4):129, 2016.
[Shih et al., 2014] YiChang Shih, Sylvain Paris, Connelly Barnes, William T Freeman, and Frédo Durand. Style transfer for headshot portraits. ACM Transactions on Graphics, 33(4):148, 2014.
[Simonyan and Zisserman, 2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[Sun et al., 2016] Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. In AAAI, 2016.
[Tzeng et al., 2014] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
[Ulyanov et al., 2016] Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016. | {
"id": "1603.01768"
} |
1701.00299 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
deep neural network that allows selective execution. Given an input, only a
subset of D2NN neurons are executed, and the particular subset is determined by
the D2NN itself. By pruning unnecessary computation depending on input, D2NNs
provide a way to improve computational efficiency. To achieve dynamic selective
execution, a D2NN augments a feed-forward deep neural network (directed acyclic
graph of differentiable modules) with controller modules. Each controller
module is a sub-network whose output is a decision that controls whether other
modules can execute. A D2NN is trained end to end. Both regular and controller
modules in a D2NN are learnable and are jointly trained to optimize both
accuracy and efficiency. Such training is achieved by integrating
backpropagation with reinforcement learning. With extensive experiments of
various D2NN architectures on image classification tasks, we demonstrate that
D2NNs are general and flexible, and can effectively optimize
accuracy-efficiency trade-offs. | http://arxiv.org/pdf/1701.00299 | Lanlan Liu, Jia Deng | cs.LG, stat.ML | fixed typos; updated CIFAR-10 results and added more details;
corrected the cascade D2NN configuration details | null | cs.LG | 20170102 | 20180305 |
# Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
# Lanlan Liu llanlan@umich.edu
Jia Deng
jiadeng@umich.edu
# University of Michigan 2260 Hayward St, Ann Arbor, MI, 48105, USA
# Abstract
We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network that allows selective execution. Given an input, only a subset of D2NN neurons are executed, and the particular subset is determined by the D2NN itself. By pruning unnecessary computation depending on input, D2NNs provide a way to improve computational efficiency. To achieve dynamic selective execution, a D2NN augments a feed-forward deep neural network (directed acyclic graph of differentiable modules) with controller modules. Each controller module is a sub-network whose output is a decision that controls whether other modules can execute. A D2NN is trained end to end. Both regular and controller modules in a D2NN are learnable and are jointly trained to optimize both accuracy and efficiency. Such training is achieved by integrating backpropagation with reinforcement learning. With extensive experiments of various D2NN architectures on image classification tasks, we demonstrate that D2NNs are general and flexible, and can effectively optimize accuracy-efficiency trade-offs.
network whose output is a decision that controls whether other modules can execute. Fig. 1 (left) illustrates a simple D2NN with one control module (Q) and two regular mod- ules (N1, N2), where the controller Q outputs a binary de- cision on whether module N2 executes. For certain inputs, the controller may decide that N2 is unnecessary and in- stead execute a dummy node D to save on computation. As an example application, this D2NN can be used for binary classiï¬cation of images, where some images can be rapidly classiï¬ed as negative after only a small amount of compu- tation.
D2NNs are motivated by the need for computational ef- ï¬ciency, in particular, by the need to deploy deep networks on mobile devices and data centers. Mobile devices are con- strained by energy and power, limiting the amount of com- putation that can be executed. Data centers need energy efï¬ciency to scale to higher throughput and to save operat- ing cost. D2NNs provide a way to improve computational efï¬ciency by selective execution, pruning unnecessary com- putation depending on input. D2NNs also make it possible to use a bigger network under a computation budget by ex- ecuting only a subset of the neurons each time.
# 1. Introduction
This paper introduces Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network (DNN) that allows selective execution. That is, given an input, only a subset of neurons are executed, and the partic- ular subset is determined by the network itself based on the particular input. In other words, the amount of computa- tion and computation sequence are dynamic based on input. This is different from standard feed-forward networks that always execute the same computation sequence regardless of input.
A D2NN is a feed-forward deep neural network (directed acyclic graph of differentiable modules) augmented with one or more control modules. A control module is a sub-
A D2NN is trained end to end. That is, regular modules and control modules are jointly trained to optimize both ac- curacy and efï¬ciency. We achieve such training by integrat- ing backpropagation with reinforcement learning, necessi- tated by the non-differentiability of control modules.
Compared to prior work that optimizes computational ef- ï¬ciency in computer vision and machine learning, our work is distinctive in four aspects: (1) the decisions on selective execution are part of the network inference and are learned end to end together with the rest of the network, as op- posed to hand-designed or separately learned [23, 29, 2]; (2) D2NNs allow more ï¬exible network architectures and execution sequences including parallel paths, as opposed to architectures with less variance [12, 27]; (3) our D2NNs di- rectly optimize arbitrary efï¬ciency metric that is deï¬ned by the user, while previous work has no such ï¬exibility be- cause they improve efï¬ciency indirectly through sparsity
Figure 1. Two D2NN examples. Input and output nodes are drawn as circles with the output nodes shaded. Function nodes are drawn as rectangles (regular nodes) or diamonds (control nodes). Dummy nodes are shaded. Data edges are drawn as solid arrows and control edges as dashed arrows. A data edge with a user-defined default value is decorated with a circle.

We perform extensive experiments to validate our D2NN algorithms. We evaluate various D2NN architectures on several tasks. They demonstrate that D2NNs are general, flexible, and can effectively improve computational efficiency.

Our main contribution is the D2NN framework, which allows a user to augment a static feed-forward network with control modules to achieve dynamic selective execution. We show that D2NNs allow a wide variety of topologies while sharing a unified training algorithm. To our knowledge, D2NN is the first single framework that can support various qualitatively different efficient network designs, including cascade designs and coarse-to-fine designs. Our D2NN framework thus provides a new tool for designing and training computationally efficient neural network models.
# 2. Related Work
Input-dependent execution has been widely used in computer vision, from cascaded detectors [31, 15] to hierarchical classification [10, 6]. The key difference of our work from prior work is that we jointly learn both visual features and control decisions end to end, whereas prior work either hand-designs features and control decisions (e.g. thresholding), or learns them separately.

In the context of deep networks, two lines of prior work have attempted to improve computational efficiency. One line of work tries to eliminate redundancy in data or computation in a way that is input-independent. The methods include pruning networks [18, 32, 3], approximating layers with simpler functions [13, 33], and using number representations of limited precision [8, 17]. The other line of work exploits the fact that not all inputs require the same amount of computation, and explores input-dependent execution of DNNs. Our work belongs to the second line, and we will contrast our work mainly with these methods. In fact, our input-dependent D2NN can be combined with input-independent methods to achieve even better efficiency.

Among methods leveraging input-dependent execution, some use pre-defined execution-control policies. For example, cascade methods [23, 29] rely on manually-selected thresholds to control execution; the Dynamic Capacity Network [2] designs a way to directly calculate a saliency map for execution control. Our D2NNs, instead, are fully learnable; the execution-control policies of D2NNs do not require manual design and are learned together with the rest of the network.

Our work is closely related to conditional computation methods [5, 7, 27], which activate part of a network depending on input. They learn policies to encourage sparse neural activations [5] or sparse expert networks [27]. Our work differs from these methods in several ways. First, our control policies are learned to directly optimize arbitrary user-defined global performance metrics, whereas conditional computation methods have only learned policies that encourage sparsity. In addition, D2NNs allow more flexible control topologies. For example, in [5], a neuron (or block of neurons) is the unit controllee of their control policies; in [27], an expert is the unit controllee. Compared to their fixed types of controllees, our control modules can be added at any point of the network and control arbitrary sub-networks. Also, various policy parametrizations can be used in the same D2NN framework. We show a variety of parameterizations (as different controller networks) in our D2NN examples, whereas previous conditional computation works have used some fixed formats: for example, control policies parametrized as the sigmoid or softmax of an affine transformation of neurons or inputs [5, 27].

Our work is also related to attention models [11, 25, 16]. Note that attention models can be categorized as hard attention [25, 4, 2] versus soft [16, 28]. Hard attention models only process the salient parts and discard others (e.g. processing only a subset of image subwindows); in contrast, soft attention models process all parts but up-weight the salient parts. Thus only hard attention models perform input-dependent execution as D2NNs do. However, hard attention models differ from D2NNs because hard attention models have typically involved only one attention module, whereas D2NNs can have multiple attention (controller) modules: conventional hard attention models are "single-threaded" whereas a D2NN can be "multi-threaded". In addition, prior work on hard attention models has not directly optimized for accuracy-efficiency trade-offs. It is also worth noting that many mixture-of-experts methods [20, 21, 14] also involve soft attention by softly gating experts: they process all experts but only up-weight useful experts, thus saving no computation.

D2NNs also bear some similarity to Deep Sequential Neural Networks (DSNN) [12] in terms of input-dependent execution. However, it is important to note that although DSNNs' structures can in principle be used to optimize accuracy-efficiency trade-offs, DSNNs are not designed for the task of improving efficiency and have no learning method proposed to optimize efficiency; and, as shown in the following sections, a method that effectively optimizes the efficiency-accuracy trade-off is non-trivial. Also, DSNNs are single-threaded: a DSNN always activates exactly one path in the computation graph, whereas for D2NNs it is possible to have multiple paths or even the entire graph activated.
# 3. Definition and Semantics of D2NNs
Here we precisely define a D2NN and describe its semantics, i.e. how a D2NN performs inference.

D2NN definition A D2NN is defined as a directed acyclic graph (DAG) without duplicated edges. Each node can be one of three types: input nodes, output nodes, and function nodes. An input or output node represents an input or output of the network (e.g. a vector). A function node represents a (differentiable) function that maps a vector to another vector. Each edge can be one of two types: data edges and control edges. A data edge represents a vector sent from one node to another, the same as in a conventional DNN. A control edge represents a control signal, a scalar, sent from one node to another. A data edge can optionally have a user-defined "default value", representing the output that will still be sent even if the function node does not execute.

For simplicity, we have a few restrictions on valid D2NNs: (1) the outgoing edges from a node are either all data edges or all control edges (i.e. cannot be a mix of data edges and control edges); (2) if a node has an incoming control edge, it cannot have an outgoing control edge. Note that these two simplicity constraints do not in any way restrict the expressiveness of a D2NN. For example, to achieve the effect of a node with a mix of outgoing data edges and control edges, we can just feed its data output to a new node with outgoing control edges and let the new node be an identity function.

We call a function node a control node if its outgoing edges are control edges. We call a function node a regular node if its outgoing edges are data edges. Note that it is possible for a function node to take no data input and output a constant value. We call such nodes "dummy" nodes. We will see that the "default values" and "dummy" nodes can significantly extend the flexibility of D2NNs. Hereafter we may also call function nodes "subnetworks" or "modules" and will use these terms interchangeably. Fig. 1 illustrates simple D2NNs with all kinds of nodes and edges.

D2NN Semantics Given a D2NN, we perform inference by traversing the graph starting from the input nodes. Because a D2NN is a DAG, we can execute each node in a topological order (the parents of a node are ordered before it; we take both data edges and control edges into consideration), the same as conventional DNNs except that the control nodes can cause the computation of some nodes to be skipped.

After we execute a control node, it outputs a set of control scores, one for each of its outgoing control edges. The control edge with the highest score is "activated", meaning that the node being controlled is allowed to execute. The rest of the control edges are not activated, and their controllees are not allowed to execute. For example, in Fig. 1 (right), the node Q controls N2 and N3. Either N2 or N3 will execute depending on which has the higher control score.

Although the main idea of the inference (skipping nodes) seems simple, due to D2NNs' flexibility the inference topology can be far more complicated. For example, in the case of a node with multiple incoming control edges (i.e. controlled by multiple controllers), it should execute if any of the control edges are activated. Also, when the execution of a node is skipped, its output will be either the default value or null. If the output is the default value, subsequent execution will continue as usual. If the output is null, any downstream nodes that depend on this output will in turn skip execution and have a null output unless a default value has been set. This "null" effect will propagate to the rest of the graph. Fig. 1 (right) shows a slightly more complicated example with default values: if N2 skips execution and outputs null, so will N4 and N6. But N8 will execute regardless because its input data edge has a default value. In the Experiments section, we will demonstrate more sophisticated D2NNs.

We can summarize the semantics of D2NNs as follows: a D2NN executes the same way as a conventional DNN except that there are control edges that can cause some nodes to be skipped. A control edge is active if and only if it has the highest score among all outgoing control edges from a node. A node is skipped if it has incoming control edges and none of them is active, or if one of its inputs is null. If a node is skipped, its output will be either null or a user-defined default value. A null will cause downstream nodes to be skipped whereas a default value will not.
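These semantics are compact enough to state directly in code. Below is a minimal pure-Python sketch of the traversal (our illustration, not the authors' Torch implementation; the node encoding, names, and toy functions are ours):

```python
# Sketch of D2NN inference semantics. Nodes are given in topological order;
# a control node activates exactly one outgoing control edge (argmax of its
# scores); a skipped node outputs its default value if one is defined,
# otherwise null (None), which propagates to downstream nodes.

def run_d2nn(nodes, x):
    outputs = {'input': x}
    active = set()  # control edges activated so far, as (controller, controllee)
    for node in nodes:
        controllers = node.get('controlled_by', [])
        allowed = not controllers or any(
            (c, node['name']) in active for c in controllers)
        ins = [outputs[p] for p in node['inputs']]
        if not allowed or any(v is None for v in ins):
            outputs[node['name']] = node.get('default')  # skip execution
            continue
        if node.get('is_control'):
            scores = node['fn'](*ins)           # one score per controllee
            winner = max(scores, key=scores.get)
            active.add((node['name'], winner))  # highest score is activated
            outputs[node['name']] = None        # control nodes emit no data
        else:
            outputs[node['name']] = node['fn'](*ins)
    return outputs

# Fig. 1 (left) as data: Q gates N2; the dummy branch is modeled here simply
# by N2's default value of 0.0.
nodes = [
    {'name': 'N1', 'fn': lambda v: v + 1.0, 'inputs': ['input']},
    {'name': 'Q', 'fn': lambda h: {'N2': h, 'D': -h},
     'inputs': ['N1'], 'is_control': True},
    {'name': 'N2', 'fn': lambda h: 2.0 * h, 'inputs': ['N1'],
     'controlled_by': ['Q'], 'default': 0.0},
]
print(run_d2nn(nodes, 3.0)['N2'])  # 8.0: Q scores N2 higher, so N2 executes
```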
A D2NN can also be thought of as a program with conditional statements. Each data edge is equivalent to a variable that is initialized to either a default value or null. Executing a function node is equivalent to executing a command assigning the output of the function to the variable. A control edge is equivalent to a boolean variable initialized to False. A control node is equivalent to a "switch-case" statement that computes a score for each of the boolean variables and sets the one with the largest score to True. Checking the conditions to determine whether to execute a function is equivalent to enclosing the function with an "if-then" statement. A conventional DNN is a program with only function calls and variable assignments without any conditional statements, whereas a D2NN introduces conditional statements with the conditions themselves generated by learnable functions.
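Rendered concretely (a schematic of our own, with f1, q, f2, f3 standing in for the N and Q modules of Fig. 1 (right)), the analogy reads:

```python
# A D2NN read as a program: a control node is a switch on learned scores,
# and each guarded function node executes only inside its "if" branch.
def tiny_d2nn_program(x, f1, q, f2, f3):
    h = f1(x)          # regular node: function call + variable assignment
    s2, s3 = q(h)      # control node: computes a score per boolean variable
    if s2 >= s3:       # "switch-case": the largest score is set to True
        return f2(h)   # only the chosen branch executes
    else:
        return f3(h)
```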
# 4. D2NN Learning
Due to the control nodes, a D2NN cannot be trained the same way as a conventional DNN. The output of the network cannot be expressed as a differentiable function of all trainable parameters, especially those in the control nodes. As a result, backpropagation cannot be directly applied. The main difficulty lies in the control nodes, whose outputs are discretized into control decisions. This is similar to the situation with hard attention models [25, 4], which use reinforcement learning. Here we adopt the same general strategy.

Learning a Single Control Node For simplicity of exposition we start with a special case where there is only one control node. We further assume that all parameters except those of this control node have been learned and fixed. That is, the goal is to learn the parameters of the control node to maximize a user-defined reward, which in our case is a combination of accuracy and efficiency. This results in a classical reinforcement learning setting: learning a control policy to take actions so as to maximize reward. We base our learning method on Q-learning [26, 30]. We let each outgoing control edge represent an action, and let the control node approximate the action-value (Q) function, which is the expected return of an action given the current state (the input to the control node).

It is worth noting that unlike many prior works that use deep reinforcement learning, a D2NN is not recurrent. For each input to the network (e.g. an image), each control node executes only once. And the decisions of a control node depend completely on the current input. As a result, an action taken on one input has no effect on another input. That is, our reinforcement learning task consists of only one time step. Our one-time-step reinforcement learning task can also be seen as a contextual bandit problem, where the context vector is the input to the control module, and the arms are the possible action outputs of the module. The one-time-step setting simplifies our Q-learning objective to that of the following regression task:

L = (Q(s, a) - r)^2,   (1)

where r is a user-defined reward, a is an action, s is the input to the control node, and Q is computed by the control node. As we can see, training a control node here is the same as training a network to predict the reward for each action under an L2 loss. We use mini-batch gradient descent; for each training example in a mini-batch, we pick the action with the largest Q, execute the rest of the network, observe a reward, and perform backpropagation using the L2 loss in Eqn. 1.

During training we also perform ε-greedy exploration: instead of always choosing the action with the best Q value, we choose a random action with probability ε. The hyperparameter ε is initialized to 1 and decreases over time. The reward r is user-defined. Since our goal is to optimize the trade-off between accuracy and efficiency, in our experiments we define the reward as a combination of an accuracy metric A (for example, the F-score) and an efficiency metric E (for example, the inverse of the number of multiplications), that is, r = λA + (1 - λ)E, where λ balances the trade-off.
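A single update then looks as follows (a PyTorch sketch under our own assumptions: `controller` maps the state to a vector of action-values, and the hypothetical `run_branch` executes the chosen branch and returns the observed accuracy and efficiency metrics):

```python
import random
import torch

# One Q-learning update for a single control node (Eqn. 1), with
# epsilon-greedy exploration and the reward r = lam * A + (1 - lam) * E.
def train_step(controller, run_branch, x, optimizer, lam=0.5, eps=0.1):
    q = controller(x)                      # action-values, shape (num_actions,)
    if random.random() < eps:              # explore with probability eps
        a = random.randrange(q.shape[0])
    else:
        a = int(q.argmax())                # otherwise pick the best action
    accuracy, efficiency = run_branch(a)   # execute the branch, observe metrics
    r = lam * accuracy + (1.0 - lam) * efficiency
    loss = (q[a] - r) ** 2                 # L2 regression onto the reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```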
Mini-Bags for Set-Based Metrics Our training algorithm so far has defined the state as a single training example, i.e., the control node takes actions and observes rewards on each training example independently of the others. This setup, however, introduces a difficulty when optimizing for accuracy metrics that cannot be decomposed over individual examples.

Consider precision in the context of binary classification. Given predictions on a set of examples and the ground truth, precision is defined as the proportion of true positives among the predicted positives. Although precision can be defined on a single example, precision on a set of examples does not generally equal the average of the precisions of individual examples. In other words, precision as a metric does not decompose over individual examples and can only be computed using a set of examples jointly. This is different from decomposable metrics such as error rate, which can be computed as the average of the error rates of individual examples. If we use precision as our accuracy metric, it is not clear how to define a reward independently for each example such that maximizing this reward independently for each example would optimize the overall precision. In general, for many metrics, including precision and F-score, we cannot compute them on individual examples and average the results. Instead, we must compute them using a set of examples as a whole. We call such metrics "set-based metrics". Our learning setup so far is ill-equipped for such metrics because a reward is defined on each example independently.
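A small numeric example makes the point (the two prediction subsets below are hypothetical):

```python
# Precision is set-based: the precision of pooled predictions differs from
# the average of per-subset precisions, so a per-example reward is ill-defined.
def precision(tp, fp):
    return tp / (tp + fp)

p1 = precision(tp=1, fp=1)               # subset 1: 0.5
p2 = precision(tp=1, fp=0)               # subset 2: 1.0
pooled = precision(tp=1 + 1, fp=1 + 0)   # both subsets jointly: 2/3
print((p1 + p2) / 2, pooled)             # 0.75 vs 0.666...: they disagree
```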
To address this issue we generalize the definition of a state from a single input to a set of inputs. We define such a set of inputs as a mini-bag. With a mini-bag of images, any set-based metric can be computed and used to directly define a reward. Note that a mini-bag is different from a mini-batch, which is commonly used for batch updates in gradient descent methods; in our training, we calculate gradients using a mini-batch of mini-bags. Now, an action on a mini-bag s = (s_1, \ldots, s_m) is a joint action a = (a_1, \ldots, a_m) consisting of individual actions a_i on examples s_i. Let Q(s, a) be the joint action-value function on the mini-bag s and the joint action a. We constrain the parametric form of Q to decompose over individual examples:

Q(s, a) = \sum_{i=1}^m Q(s_i, a_i),   (2)

where Q(s_i, a_i) is a score given by the control node when choosing the action a_i for example s_i. We then define our new learning objective on a mini-bag of size m as

L = \left( r - \sum_{i=1}^m Q(s_i, a_i) \right)^2,   (3)

where r is the reward observed by choosing the joint action a on the mini-bag s. That is, the control node predicts an action-value for each example such that their sum approximates the reward defined on the whole mini-bag.

It is worth noting that the decomposition of Q into sums (Eqn. 2) enjoys a nice property: the best joint action a^* under the joint action-value Q(s, a) is simply the concatenation of the best actions for individual examples, because maximizing the joint value,

a^* = \arg\max_a Q(s, a) = \arg\max_a \sum_{i=1}^m Q(s_i, a_i),   (4)

is equivalent to maximizing the individual summands:

a_i^* = \arg\max_{a_i} Q(s_i, a_i), \quad i = 1, 2, \ldots, m.   (5)
That is, during test time we still perform inference on each example independently.
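In code, the mini-bag objective of Eqn. 3 reduces to the following PyTorch sketch (our illustration; the hypothetical `reward_fn`, which computes a set-based metric such as the F-score over the whole mini-bag, is assumed to close over the mini-bag's labels):

```python
import torch

# Mini-bag Q-learning loss (Eqn. 3): per-example action-values of the chosen
# actions are summed, and the sum regresses onto one set-based reward.
def minibag_loss(controller, bag_inputs, reward_fn):
    q = controller(bag_inputs)                 # (m, num_actions) action-values
    actions = q.argmax(dim=1)                  # Eqn. 5: argmax per example
    q_chosen = q.gather(1, actions.unsqueeze(1)).squeeze(1)
    r = reward_fn(actions)                     # one reward for the whole bag
    return (r - q_chosen.sum()) ** 2           # (r - sum_i Q(s_i, a_i))^2
```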
Another implication of the mini-bag formulation is:

\partial L / \partial x_i = -2 \left( r - \sum_{j=1}^m Q(s_j, a_j) \right) \partial Q(s_i, a_i) / \partial x_i,   (6)

where x_i is the output of any internal neuron for example i in the mini-bag. This shows that there is no change to the implementation of backpropagation except that we scale the gradient using the difference between the mini-bag Q-value and the reward r.

Joint Training of All Nodes We have described how to train a single control node. We now describe how to extend this strategy to all nodes, including additional control nodes as well as regular nodes. If a D2NN has multiple control nodes, we simply train them together. For each mini-bag, we perform backpropagation for multiple losses together. Specifically, we perform inference using the current parameters, observe a reward for the whole network, and then use the same reward (which is a result of the actions of all control nodes) to backpropagate for each control node.

For regular nodes, we can place losses on them the same as in conventional DNNs, and we perform backpropagation on these losses together with the control nodes. The implementation of backpropagation is the same as in conventional DNNs except that each training example has a different network topology (execution sequence). And if a node is skipped for a particular training example, then the node does not receive a gradient from that example.

It is worth noting that our D2NN framework allows arbitrary losses to be used for regular nodes. For example, for classification we can use the cross-entropy loss on a regular node. One important detail is that the losses on regular nodes need to be properly weighted against the losses on the control nodes; otherwise the regular losses may dominate, rendering the control nodes ineffective. One way to eliminate this issue is to use Q-learning losses on the regular nodes as well, i.e. treating the outputs of a regular node as action-values. For example, instead of using the cross-entropy loss on the classification scores, we treat the classification scores as action-values: an estimated reward for each classification decision. This way Q-learning is applied to all nodes in a unified way and no additional hyperparameters are needed to balance different kinds of losses. In our experiments, unless otherwise noted, we adopt this unified approach.
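Under this unified treatment, a classification node's loss takes the same regression form as Eqn. 1; the sketch below uses a 0/1 correctness reward, which is our illustrative choice rather than a detail stated in the paper:

```python
import torch

# Classification scores read as action-values: regress the score of the
# predicted class onto a reward of 1 (correct) or 0 (incorrect).
def classification_as_q_loss(scores, labels):
    pred = scores.argmax(dim=1)                          # the chosen "action"
    q_pred = scores.gather(1, pred.unsqueeze(1)).squeeze(1)
    r = (pred == labels).float()                         # per-example reward
    return ((q_pred - r) ** 2).mean()
```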
# 5. Experiments
We here demonstrate four D2NN structures motivated by different demands of efficient network design, showing the framework's flexibility and effectiveness, and compare the D2NNs' ability to optimize efficiency-accuracy trade-offs with prior work. We implement the D2NN framework in Torch. Torch provides functions to specify the subnetwork architecture inside a function node; our framework handles the high-level communication and loss propagation.

High-Low Capacity D2NN Our first experiment is with a simple D2NN architecture that we call the "high-low capacity D2NN". It is motivated by the idea that we can save computation by choosing a low-capacity subnetwork for easy examples. It consists of a single control node (Q) and three regular nodes (N1-N3) as in Fig. 3a). The control node Q chooses between a high-capacity N2 and a low-capacity N3; N3 has fewer neurons and uses less computation. The control node itself requires orders of magnitude less computation than the regular nodes (this is true for all D2NNs demonstrated).

We test this hypothesis using a binary classification task in which the network classifies an input image as face or non-face. We use the Labeled Faces in the Wild [19, 22] dataset. Specifically, we use the 13k ground truth face crops (112×112 pixels) as positive examples and randomly sampled 130k background crops (with an intersection over union less than 0.3) as negative examples. We hold out 11k images for validation and 22k for testing. We refer to this dataset as LFW-B and use it as a testbed to validate the effectiveness of our new D2NN framework.
Figure 2. The accuracy-cost or F-score-cost curves of various D2NN architectures, as well as conventional DNN baselines consisting of only regular nodes. Panels: a) High-Low (LFW-B), b) Cascade (LFW-B), c) Chain (LFW-B), d) Hierarchy (ILSVRC-10).

Figure 3. Four different D2NN architectures: a) High-Low, b) Cascade, c) Chain, d) Hierarchy.
To evaluate performance we measure accuracy using the F1 score, a better metric than the percentage of correct predictions for an unbalanced dataset. We measure computational cost using the number of multiplications, following prior work [2, 27] and for reproducibility. Specifically, we use the number of multiplications (control nodes included), normalized by that of a conventional DNN consisting of N1 and N2, that is, the high-capacity execution path. Note that our D2NNs also allow other efficiency measurements such as run-time or latency.

During training we define the Q-learning reward as a linear combination of accuracy A and efficiency E (negative cost): r = λA + (1 - λ)E, where λ ∈ [0, 1]. We train instances of high-low capacity D2NNs using different λ's. As λ increases, the learned D2NN trades off efficiency for accuracy. Fig. 2a) plots the accuracy-cost curve on the test set; it also plots the accuracy and efficiency achieved by a conventional DNN with only the high-capacity path N1+N2 (High NN) and a conventional DNN with only the low-capacity path N1+N3 (Low NN).
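Concretely, with the normalized multiplication count as the cost, the reward reads as follows (a sketch; the sign convention for E, i.e. the negated normalized cost, is our reading of "negative cost"):

```python
# Reward r = lam * A + (1 - lam) * E with A the F1 score and E the negated
# multiplication count, normalized by the high-capacity path's cost.
def reward(f1_score, num_mults, high_capacity_mults, lam):
    efficiency = -num_mults / high_capacity_mults   # E lies in [-1, 0)
    return lam * f1_score + (1.0 - lam) * efficiency
```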
As we can see, the D2NN achieves a trade-off curve close to the upper bound: there are points on the curve that are as fast as the low-capacity node and as accurate as the high-capacity node. Fig. 4 (left) plots the distribution of examples going through different execution paths. It shows that as λ increases, accuracy becomes more important and more examples go through the high-capacity node. These results suggest that our learning algorithm is effective for networks with a single control node.

With inference efficiency improved, we also observe that for training, a D2NN typically takes 2-4 times more iterations to converge than a DNN, depending on the particular model capacities, configurations and trade-offs.

Cascade D2NN We next experiment with a more sophisticated design that we call the "cascade D2NN" (Fig. 3b). It is inspired by the standard cascade design commonly used in computer vision; the intuition is that many negative examples may be rejected early using simple features. The cascade D2NN consists of seven regular nodes (N1-N7) and three control nodes (Q1-Q3). N1-N7 form the 4 stages of the cascade (i.e. 4 conventional DNNs, from small to large): N1+N2, N3+N4, N5+N6, N7. Each control node decides whether to execute the next cascade stage or not.

We evaluate the network on the same LFW-B face classification task using the same evaluation protocol as for the high-low capacity D2NN. Fig. 2b) plots the accuracy-cost trade-off curve for the D2NN. Also included is the accuracy-cost curve ("static NNs") achieved by the four conventional DNNs as baselines, each trained with a cross-entropy loss. We can see that the cascade D2NN achieves a close-to-optimal trade-off, reducing computation significantly with negligible loss of accuracy. In addition, our D2NN curve outperforms the trade-off curve achieved by varying the design and capacity of static conventional networks. This result demonstrates that our algorithm is successful at jointly training multiple control nodes.

For a cascade, the wall time of inference is often an important consideration, so we also measure the inference wall time (excluding data loading, averaged over 5 runs) for this cascade D2NN. We find that an 82% wall-time cost corresponds to a 53% number-of-multiplications cost, and a 95% wall-time cost corresponds to a 70% one. Defining the reward directly using wall time can further reduce this gap.

Chain D2NN Our third design is a "chain D2NN" (Fig. 3c). The network is shaped as a chain, where each link consists of a control node selecting between two (or more) regular nodes. In other words, we perform a sequence of vector-to-vector transforms, and for each transform we choose between several subnetworks. One scenario in which this D2NN can be used is when the configuration of a conventional DNN (e.g. number of layers, filter sizes) cannot be fully decided. Also, it can simulate shortcuts between any two layers by using an identity function as one of the transforms. This chain D2NN is qualitatively different from D2NNs with a tree-shaped data graph because it allows two divergent data paths to merge again. That is, the number of possible execution paths can be exponential in the number of nodes.

In Fig. 3c), the first link is that Q1 chooses between a low-capacity N2 and a high-capacity N3. If one of them is chosen, the other will output a default value of zero. The node N4 adds the outputs of N2 and N3 together. Fig. 2c) plots the accuracy-cost curve on the LFW-B task. The two baselines are: a conventional DNN with the lowest-capacity path (N1-N2-N5-N8-N10), and a conventional DNN with the highest-capacity path (N1-N3-N6-N9-N10). The cost is measured as the number of multiplications, normalized by the cost of the high-capacity baseline.

Fig. 2c) shows that the chain D2NN achieves a trade-off curve close to optimal and can speed up computation significantly with little accuracy loss. This shows that our learning algorithm is effective for a D2NN whose data graph is a general DAG instead of a tree.

Hierarchical D2NN In this experiment we design a D2NN for hierarchical multiclass classification. The idea is to first classify images into coarse categories and then into fine categories. This idea has been explored by numerous prior works [24, 6, 10], but here we show that the same idea can be implemented via a D2NN trained end to end.
We use ILSVRC-10, a subset of ILSVRC-65 [9]. In ILSVRC-10, 10 classes are organized into a 3-layer hierarchy: 2 superclasses, 5 coarse classes and 10 leaf classes. Each class has 500 training images, 50 validation images, and 150 test images. As in Fig. 3d), the hierarchy in this D2NN mirrors the semantic hierarchy in ILSVRC-10. An image first goes through the root N1. Then Q1 decides whether to descend the left branch (N2 and its children), and Q2 decides whether to descend the right branch (N3 and its children). The leaf nodes N4-N8 are each responsible for classifying two fine-grained leaf classes. It is important to note that an input image can go down parallel paths in the hierarchy, e.g. descending both the left branch and the right branch, because Q1 and Q2 make separate decisions. This "multi-threading" allows the network to avoid committing to a single path prematurely if an input image is ambiguous.

Fig. 2d) plots the accuracy-cost curve of our hierarchical D2NN. The accuracy is measured as the proportion of correctly classified test examples. The cost is measured as the number of multiplications, normalized by the cost of a conventional DNN consisting only of the regular nodes (denoted as NN in the figure). We can see that the hierarchical D2NN can match the accuracy of the full network at about half of the computational cost.

Fig. 4 (right) plots, for the hierarchical D2NN, the distribution of examples going through execution sequences with different numbers of nodes activated. Due to the parallelism of the D2NN, there can be many different execution sequences. We again see that as λ increases, accuracy is given more weight and more nodes are activated.

Comparison with Dynamic Capacity Networks In this experiment we empirically compare our approach to closely related prior work, the Dynamic Capacity Network (DCN) [2], for which the efficiency measurement is the absolute number of multiplications. Given an image, a DCN applies an additional high-capacity subnetwork to a set of image patches, selected using a hand-designed saliency-based policy. The idea is that more intensive processing is only necessary for certain image regions. To compare, we evaluate on the same multiclass classification task on Cluttered MNIST [25], which consists of MNIST digits randomly placed on a background cluttered with fragments of other digits. We train a chain D2NN of length 4, which implements the same idea of choosing a high-capacity alternative subnetwork for certain inputs. Fig. 6 plots the accuracy-cost curve of our D2NN as well as the accuracy-cost point achieved by the DCN in [2]: an accuracy of 0.9861 at a cost of 2.77×10^7. The closest point on our curve has a slightly lower accuracy of 0.9698 but slightly better efficiency (a cost of 2.66×10^7). Note that although our accuracy of 0.9698 is lower, it compares favorably to those of other state-of-the-art methods such as DRAW [16] (0.9664) and RAM [25] (0.9189).

Visualization of Examples in Different Paths In Fig. 5 (left), we show face examples in the high-low D2NN for λ=0.4. Examples in the low-capacity path are generally easier (e.g. more frontal) than examples in the high-capacity path. In Fig. 5 (right), we show car examples in the hierarchical D2NN with 1) a single path executed and 2) the full graph executed (for λ=1). They match our intuition that examples with a single path executed should be easier to classify (e.g. less occlusion) than examples with the full graph executed.

CIFAR-10 Results We train a cascade D2NN on CIFAR-10, where the corresponding DNN baseline is ResNet-110. We initialize this D2NN with pre-trained ResNet-110 weights, apply cross-entropy losses on the regular nodes, and tune the mixed-loss weight as explained in Sec. 4. We see a 30% reduction of cost with a 2% relative loss in accuracy, and a 62% reduction of cost with a 7% relative loss in accuracy. The D2NN's ability to improve efficiency relies on the assumption that not all inputs require the same amount of computation. In CIFAR-10, all images are low resolution (32×32), and it is likely that few images are significantly easier to classify than others. As a result, the efficiency improvement is modest compared to other datasets.
Figure 4. Distribution of examples going through different execution paths. Skipped nodes are in grey. The hyperparameter λ controls the trade-off between accuracy and efficiency; a bigger λ values accuracy more. Left: for the high-low capacity D2NN. Right: for the hierarchical D2NN, where the X-axis is the number of nodes activated.
Figure 5. Examples with different paths in a high-low D2NN (left) and a hierarchical D2NN (right).
Figure 6. Accuracy-cost curve for a chain D2NN on the CMNIST task compared to DCN [2].
# 6. Conclusion

We have introduced Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network that allows selective execution. Extensive experiments have demonstrated that D2NNs are flexible and effective for optimizing accuracy-efficiency trade-offs.

# 7. Acknowledgments
This work is partially supported by the National Science Foundation under Grant No. 1539011 and gifts from Intel.
# Appendix
# A. Implementation Details
We implement the D2NN framework in Torch [1]. Torch already provides implementations of conventional neural network modules (nodes), so a user can specify the subnetwork architecture inside a control node or a regular node using existing Torch functionalities. Our framework then handles the communication between the user-defined nodes in the forward and backward passes.

To handle parallel paths, default-valued nodes and nodes with multiple data parents, we need to keep track of an example's execution status (which nodes are activated by this example) and output status (which nodes have output for this example). An example's output status is different from its execution status if some nodes are not activated but have default values. For runtime efficiency, we implement the tracking of examples at the mini-batch level. That is, we perform forward and backward passes for a mini-batch of examples as a regular DNN does. Each mini-batch consists of several mini-bags of images.
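One way to realize this bookkeeping is with boolean masks over the mini-batch, as in the PyTorch sketch below (our reconstruction of the described mechanism for a vector-output node, not the released Torch code):

```python
import torch

# Batch-level forward of one regular node: `allowed` marks the examples that
# activate this node; the "executed" and "has output" statuses differ exactly
# when a skipped example is covered by a default value.
def node_forward(fn, x, allowed, out_dim, default=None):
    out = x.new_zeros(x.shape[0], out_dim)
    if allowed.any():
        out[allowed] = fn(x[allowed])     # run only the activated examples
    if default is not None:
        out[~allowed] = default           # skipped rows get the default value
        has_output = torch.ones_like(allowed)  # every row has an output
    else:
        has_output = allowed.clone()      # null rows keep skipping downstream
    return out, has_output
```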
We describe the implementation of the D2NN learning procedure in two steps. First, the preprocessing step: when a user-defined D2NN model is fed into our framework, we first perform a breadth-first search to obtain the DAG ordering of nodes while performing structural error checks, constructing the data and control relationships between nodes and calculating the cost (number of multiplications) of each node.

After the preprocessing, the training step is similar to that of a regular DNN: a forward pass and a backward pass. All nodes are visited according to a topological ordering in the forward pass and the reverse ordering in the backward pass.

For each function node, the forward pass has three steps: fetch inputs, forward inside the node, and send data or control signals to children nodes. When dealing with multiple data inputs and multiple control signals, the D2NN filters out examples with more than one null input or all-negative control signals. When a default value has been set for a node, all examples have to send out data; if the node is not activated for a particular example, the output takes the default value. A backward pass has similar logic: fetch gradients from children, perform the backward pass inside, and send out gradients to parents. It is worth noting that when a default value is used in a node, the gradients can be blocked by this node because it is not actually executed.
# B. ILSVRC-10 Semantic Hierarchy
The ILSVRC-10 dataset is a subset of the ILSVRC-65 dataset [9]. In our ILSVRC-10, there are 10 classes organized into a 3-layer hierarchy: 2 superclasses, 5 coarse classes and 10 leaf classes, as in Fig. 7. Each class has 500 training images, 50 validation images, and 150 test images.

# C. Configurations
High-Low Capacity D2NN The high-low capacity D2NN consists of a single control node (Q) and three regular nodes (N1, N2, N3), as illustrated in Fig. 3a); a code sketch follows the list below.

• Node N1: a convolutional layer with a 3×3 filter size, 8 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2.

• Node N2: a convolutional layer with a 3×3 filter size and 16 filters, followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 512 neurons followed by another fully connected layer with the 2-class output.

• Node N3: three 3×3 max-pooling layers, each with a stride of 2, followed by two fully connected layers with 32 neurons and the 2-class output.

• Node Q1: a convolutional layer with a 3×3 filter size and 2 filters, followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 128 neurons followed by another fully connected layer with the 2-action output.
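For illustration, the configuration above translates roughly to the following PyTorch modules (a sketch: the original implementation is in Lua Torch, the 3-channel 112×112 input is our assumption, and `LazyLinear` is used to avoid spelling out the flattened sizes):

```python
import torch
import torch.nn as nn

class HighLowD2NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.n1 = nn.Sequential(                  # shared trunk (N1)
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2))
        self.n2 = nn.Sequential(                  # high-capacity branch (N2)
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2), nn.Flatten(),
            nn.LazyLinear(512), nn.ReLU(), nn.Linear(512, 2))
        self.n3 = nn.Sequential(                  # low-capacity branch (N3)
            nn.MaxPool2d(3, stride=2), nn.MaxPool2d(3, stride=2),
            nn.MaxPool2d(3, stride=2), nn.Flatten(),
            nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 2))
        self.q1 = nn.Sequential(                  # controller (Q1)
            nn.Conv2d(8, 2, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2), nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, x):
        h = self.n1(x)
        q = self.q1(h)                 # action-values per example
        high = q.argmax(dim=1) == 0    # action 0 = high capacity (our choice)
        out = h.new_empty(x.shape[0], 2)
        if high.any():
            out[high] = self.n2(h[high])
        if (~high).any():
            out[~high] = self.n3(h[~high])
        return out, q
```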
Cascade D2NN The cascade D2NN consists of a sequence of seven regular nodes (N1 to N7) and three control nodes (Q1-Q3), as in Fig. 3b).

• Node N1: a convolutional layer with a 3×3 filter size, 2 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2.

• Node N2: three 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output.

• Node N3: two convolutional layers, both with 3×3 filter sizes and with 2 and 8 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2.

• Node N4: two 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output.

• Node N5: two convolutional layers, both with 3×3 filter sizes and with 4 and 16 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2.

• Node N6: two 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output.

• Node N7: five convolutional layers, all with 3×3 filter sizes and with 2, 8, 32, 32, 64 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2 except for the third and fifth layers. The output is reshaped and fed into a fully connected layer with 512 neurons followed by another fully connected layer with the 2-class output.

• Node Q1, Q2, Q3: the input is reshaped and fed into a fully connected layer with the 2-action output.
Chain D2NN The chain D2NN is shaped as a chain, where each link consists of a control node selecting between two regular nodes. In the experiments on the LFW-B dataset, we use a 3-stage chain D2NN as in Fig. 3c).

• Node N1: a convolutional layer with a 3×3 filter size, 2 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2.

• Node N2: a convolutional layer with a 1×1 filter size and 16 filters.

• Node N3: a convolutional layer with a 3×3 filter size and 16 filters.

• Node N4: a 3×3 max-pooling layer with a stride of 2.

• Node N5: a convolutional layer with a 1×1 filter size and 32 filters.

• Node N6: two convolutional layers, both with 3×3 filter sizes and 32 filters each.

• Node N7: a 3×3 max-pooling layer with a stride of 2.

• Node N8: a convolutional layer with a 1×1 filter size and 32 filters followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 256 neurons.

• Node N9: a convolutional layer with a 3×3 filter size and 64 filters. The output is reshaped and fed into a fully connected layer with 256 neurons.

• Node N10: a fully connected layer with the 2-class output.

• Node Q1: a convolutional layer with a 3×3 filter size and 8 filters, with a 3×3 max-pooling layer with a stride of 2 before and a 3×3 max-pooling layer with a stride of 2 after. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively.

• Node Q2: a 3×3 max-pooling layer with a stride of 2 followed by a convolutional layer with a 3×3 filter size and 4 filters. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively.

• Node Q3: a convolutional layer with a 3×3 filter size and 2 filters. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively.

Hierarchical D2NN Fig. 3d) illustrates the design of our hierarchical D2NN.

• Node N1: a convolutional layer with an 11×11 filter size, 64 filters, a stride of 4 and a 2×2 padding, followed by a 3×3 max-pooling layer with a stride of 2.

• Node N2 and N3: a convolutional layer with a 5×5 filter size, 96 filters and a 2×2 padding.

• Node N4 to N8: a 3×3 max-pooling layer with a stride of 2 followed by three convolutional layers with 3×3 filter sizes and 160, 128, 128 filters respectively. The output is fed into a 3×3 max-pooling layer with a stride of 2 and three fully connected layers with 2048 neurons, 2048 neurons and the 2 fine-class output respectively.

• Node Q1 and Q2: two convolutional layers with 5×5 and 3×3 filter sizes and 16 and 32 filters respectively (the former has a 2×2 padding), each followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into three fully connected layers with 1024 neurons, 1024 neurons and the 2-action output respectively.

• Node Q3 to Q7: two convolutional layers with 5×5 and 3×3 filter sizes and 16 and 32 filters respectively (the former has a 2×2 padding), each followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into three fully connected layers with 1024 neurons, 1024 neurons and the 2-action output respectively.

[Figure 7 shows the three-layer ILSVRC-10 hierarchy: the superclasses Vehicle and Animal split into the coarse classes Boat, Car, Dog, Bird and Cat, which in turn split into the ten leaf classes.]

Figure 7. The semantic class hierarchy of the ILSVRC-10 dataset.

Comparison with Dynamic Capacity Networks We train a chain D2NN of length 4 similar to Fig. 3c).

• Node N1: a convolutional layer with a 3×3 filter size and 24 filters.

• Node N3: a convolutional layer with a 3×3 filter size and 24 filters.

• Node N4: a 2×2 max-pooling layer with a stride of 2.

• Node N6: a convolutional layer with a 3×3 filter size and 24 filters.

• Node N7: an identity layer which directly uses its inputs as outputs.

• Node N9: a convolutional layer with a 3×3 filter size and 24 filters.

• Node N10: a 2×2 max-pooling layer with a stride of 2.

• Node N12: a convolutional layer with a 3×3 filter size and 24 filters.

• Node N2, N5, N8, N11: an identity layer.

• Node N13: a convolutional layer with a 4×4 filter size, 96 filters, a stride of 2 and no padding, followed by an 11×11 max-pooling layer. The output is reshaped and fed into a fully connected layer with the 10-class output.

• Node Q1: a convolutional layer with a 3×3 filter size and 8 filters, with two 2×2 max-pooling layers with strides of 2 before and one 2×2 max-pooling layer with a stride of 2 after. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.

• Node Q2: a convolutional layer with a 3×3 filter size and 8 filters, with a 2×2 max-pooling layer with a stride of 2 before and a 2×2 max-pooling layer with a stride of 2 after. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.

• Node Q3: a convolutional layer with a 3×3 filter size and 8 filters, with a 2×2 max-pooling layer with a stride of 2 before and a 2×2 max-pooling layer with a stride of 2 after. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.

• Node Q4: a convolutional layer with a 3×3 filter size and 8 filters, followed by a 2×2 max-pooling layer with a stride of 2. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.
For all 5 D2NNs, all convolutional layers use 1×1 padding and each is followed by a ReLU layer unless specified individually. Each fully connected layer except the output layers is followed by a ReLU layer.
# References
[1] Torch. http://torch.ch/.

[2] A. Almahairi, N. Ballas, T. Cooijmans, Y. Zheng, H. Larochelle, and A. C. Courville. Dynamic capacity networks. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2549–2558, 2016.

[3] J. M. Alvarez and M. Salzmann. Learning the number of neurons in deep networks. In Advances in Neural Information Processing Systems, pages 2270–2278, 2016.

[4] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.

[5] E. Bengio, P.-L. Bacon, J. Pineau, and D. Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.

[6] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems, pages 163–171, 2010.

[7] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

[8] Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun, et al. DaDianNao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pages 609–622. IEEE, 2014.

[9] J. Deng, J. Krause, A. C. Berg, and L. Fei-Fei. Hedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3450–3457. IEEE, 2012.

[10] J. Deng, S. Satheesh, A. C. Berg, and F. Li. Fast and balanced: Efficient label tree learning for large scale object recognition. In Advances in Neural Information Processing Systems, pages 567–575, 2011.

[11] M. Denil, L. Bazzani, H. Larochelle, and N. de Freitas. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151–2184, 2012.

[12] L. Denoyer and P. Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.

[13] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pages 1269–1277, 2014.

[14] D. Eigen, M. Ranzato, and I. Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013.

[15] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Cascade object detection with deformable part models. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2241–2248. IEEE, 2010.

[16] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), JMLR Workshop and Conference Proceedings, 2015.

[17] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1737–1746, 2015.

[18] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015.

[19] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.

[20] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, 1991.

[21] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181–214, 1994.

[22] G. B. Huang and E. Learned-Miller. Labeled Faces in the Wild: Updates and new reporting procedures. Technical Report UM-CS-2014-003, University of Massachusetts, Amherst, May 2014.

[23] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua. A convolutional neural network cascade for face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5325–5334, 2015.

[24] B. Liu, F. Sadeghi, M. Tappen, O. Shamir, and C. Liu. Probabilistic label trees for efficient large scale image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 843–850, 2013.

[25] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pages 2204–2212, 2014.

[26] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

[27] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.

[28] M. F. Stollenga, J. Masci, F. Gomez, and J. Schmidhuber. Deep networks with internal selective attention through feedback connections. In Advances in Neural Information Processing Systems, pages 3545–3553, 2014.

[29] Y. Sun, X. Wang, and X. Tang. Deep convolutional network cascade for facial point detection. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 3476–3483. IEEE, 2013.

[30] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

[31] P. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004.

[32] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016.

[33] X. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10):1943–1955, 2016. | {
"id": "1511.06297"
} |
1612.08083 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al. (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 |
# Language Modeling with Gated Convolutional Networks
# Yann N. Dauphin 1 Angela Fan 1 Michael Auli 1 David Grangier 1
# Abstract
The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms Oord et al. (2016b) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.
# 1. Introduction

Statistical language models estimate the probability distribution of a sequence of words by modeling the probability of the next word given preceding words, i.e.

$$P(w_0, \ldots, w_N) = P(w_0) \prod_{i=1}^{N} P(w_i \mid w_0, \ldots, w_{i-1}),$$

where $w_i$ are discrete word indices in a vocabulary. Language models are a critical part of systems for speech recognition (Yu & Deng, 2014) and machine translation (Koehn, 2010).

Recently, neural networks (Bengio et al., 2003; Mikolov et al., 2010; Jozefowicz et al., 2016) have been shown to outperform classical n-gram language models (Kneser & Ney, 1995; Chen & Goodman, 1996). These classical models suffer from data sparsity, which makes it difficult to represent large contexts and thus, long-range dependencies. Neural language models tackle this issue by embedding words in continuous space over which a neural network is applied. The current state of the art for language modeling is based on long short term memory networks (LSTM; Hochreiter et al., 1997) which can theoretically model arbitrarily long dependencies.

In this paper, we introduce new gated convolutional networks and apply them to language modeling. Convolutional networks can be stacked to represent large context sizes and extract hierarchical features over larger and larger contexts with more abstractive features (LeCun & Bengio, 1995). This allows them to model long-term dependencies by applying $O(\frac{N}{k})$ operations over a context of size $N$ and kernel width $k$. In contrast, recurrent networks view the input as a chain structure and therefore require a linear number $O(N)$ of operations.

Analyzing the input hierarchically bears resemblance to classical grammar formalisms which build syntactic tree structures of increasing granularity, e.g., sentences consist of noun phrases and verb phrases each comprising further internal structure (Manning & Schütze, 1999; Steedman, 2002). Hierarchical structure also eases learning since the number of non-linearities for a given context size is reduced compared to a chain structure, thereby mitigating the vanishing gradient problem (Glorot & Bengio, 2010).

Modern hardware is well suited to models that are highly parallelizable. In recurrent networks, the next output depends on the previous hidden state which does not enable parallelization over the elements of a sequence. Convolutional networks, however, are very amenable to this computing paradigm since the computation of all input words can be performed simultaneously (§2).

Gating has been shown to be essential for recurrent neural networks to reach state-of-the-art performance (Jozefowicz et al., 2016). Our gated linear units reduce the vanishing gradient problem for deep architectures by providing a linear path for the gradients while retaining non-linear capabilities (§5.2).

1 Facebook AI Research. Correspondence to: Yann N. Dauphin <ynd@fb.com>.

Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
We show that gated convolutional networks outperform other recently published language models such as LSTMs trained in a similar setting on the Google Billion Word Benchmark (Chelba et al., 2013). We also evaluate the ability of our models to deal with long-range dependencies on the WikiText-103 benchmark for which the model is conditioned on an entire paragraph rather than a single sentence and we achieve a new state-of-the-art on this dataset (Merity et al., 2016). Finally, we show that gated linear units achieve higher accuracy and converge faster than the LSTM-style gating of Oord et al. (2016; §4, §5).
# 2. Approach
In this paper we introduce a new neural language model that replaces the recurrent connections typically used in recurrent networks with gated temporal convolutions. Neural language models (Bengio et al., 2003) produce a representation $H = [h_0, \ldots, h_N]$ of the context for each word $w_0, \ldots, w_N$ to predict the next word $P(w_i \mid h_i)$. Recurrent neural networks $f$ compute $H$ through a recurrent function $h_i = f(h_{i-1}, w_{i-1})$ which is an inherently sequential process that cannot be parallelized over $i$.1
Our proposed approach convolves the inputs with a function $f$ to obtain $H = f * w$ and therefore has no temporal dependencies, so it is easier to parallelize over the individual words of a sentence. This process will compute each context as a function of a number of preceding words. Compared to recurrent networks, the context size is finite, but we will demonstrate both that infinite contexts are not necessary and that our models can represent large enough contexts to perform well in practice (§5).
[Figure 1: an input sentence is mapped through a lookup table to embeddings $E = D_w$, two convolutions produce $A = E * W + b$ and $B = E * V + c$, the gating computes $H_0 = A \otimes \sigma(B)$, $L-1$ further convolution+gating blocks are stacked, and a softmax outputs $Y = \mathrm{softmax}(W H_L)$.]
Figure 1 illustrates the model architecture. Words are represented by a vector embedding stored in a lookup table $D \in \mathbb{R}^{|\mathcal{V}| \times e}$ where $|\mathcal{V}|$ is the number of words in the vocabulary and $e$ is the embedding size. The input to our model is a sequence of words $w_0, \ldots, w_N$ which are represented by word embeddings $E = [D_{w_0}, \ldots, D_{w_N}]$. We compute the hidden layers $h_0, \ldots, h_L$ as
Figure 1. Architecture of the gated convolutional network for lan- guage modeling.
$$h_l(X) = (X * W + b) \otimes \sigma(X * V + c) \quad (1)$$
where $m$, $n$ are respectively the number of input and output feature maps and $k$ is the patch size, $X \in \mathbb{R}^{N \times m}$ is the input of layer $h_l$ (either word embeddings or the outputs of previous layers), $W \in \mathbb{R}^{k \times m \times n}$, $b \in \mathbb{R}^{n}$, $V \in \mathbb{R}^{k \times m \times n}$, $c \in \mathbb{R}^{n}$ are learned parameters, $\sigma$ is the sigmoid function and $\otimes$ is the element-wise product between matrices.
When convolving inputs, we take care that $h_i$ does not contain information from future words. We address this by shifting the convolutional inputs to prevent the kernels from seeing future context (Oord et al., 2016a). Specifically, we zero-pad the beginning of the sequence with $k - 1$ elements, assuming the first input element is the beginning of sequence marker which we do not predict and $k$ is the width of the kernel.

1 Parallelization is usually done over multiple sequences instead.
The output of each layer is a linear projection $X * W + b$ modulated by the gates $\sigma(X * V + c)$. Similar to LSTMs, these gates multiply each element of the matrix $X * W + b$ and control the information passed on in the hierarchy. We dub this gating mechanism Gated Linear Units (GLU). Stacking multiple layers on top of the input $E$ gives a representation of the context for each word $H = h_L \circ \ldots \circ h_0(E)$. We wrap the convolution and the gated linear unit in a pre-activation residual block that adds the input of the block to
the output (He et al., 2015a). The blocks have a bottleneck structure for computational efï¬ciency and each block has up to 5 layers.
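To make this concrete, the following is a minimal PyTorch sketch of one causal gated convolutional layer implementing Eq. 1 together with the $k-1$ zero-padding described above. This is an illustration under our own naming, not the authors' Torch/Lua implementation; layer sizes are arbitrary.

```python
# A minimal sketch of one causal gated convolutional layer (Eq. 1).
# Illustrative only; not the authors' original code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalGLUConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.k = kernel_size
        # One convolution for the linear path (W, b), one for the gate (V, c).
        self.conv_lin = nn.Conv1d(in_channels, out_channels, kernel_size)
        self.conv_gate = nn.Conv1d(in_channels, out_channels, kernel_size)

    def forward(self, x):
        # x: (batch, channels, time). Left-pad k - 1 zeros so output
        # position i never sees inputs at positions greater than i.
        x = F.pad(x, (self.k - 1, 0))
        return self.conv_lin(x) * torch.sigmoid(self.conv_gate(x))
```

For example, `CausalGLUConv1d(128, 256, 4)` maps a batch of embeddings of shape (batch, 128, N) to a gated representation of shape (batch, 256, N).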
The simplest choice to obtain model predictions is to use a softmax layer, but this choice is often computationally inefficient for large vocabularies and approximations such as noise contrastive estimation (Gutmann & Hyvärinen) or hierarchical softmax (Morin & Bengio, 2005) are preferred. We choose an improvement of the latter known as adaptive softmax which assigns higher capacity to very frequent words and lower capacity to rare words (Grave et al., 2016a). This results in lower memory requirements as well as faster computation at both training and test time.

# 3. Gating Mechanisms

Gating mechanisms control the path through which information flows in the network and have proven to be useful for recurrent neural networks (Hochreiter & Schmidhuber, 1997). LSTMs enable long-term memory via a separate cell controlled by input and forget gates. This allows information to flow unimpeded through potentially many timesteps. Without these gates, information could easily vanish through the transformations of each timestep. In contrast, convolutional networks do not suffer from the same kind of vanishing gradient and we find experimentally that they do not require forget gates.

Therefore, we consider models possessing solely output gates, which allow the network to control what information should be propagated through the hierarchy of layers. We show this mechanism to be useful for language modeling as it allows the model to select which words or features are relevant for predicting the next word. Parallel to our work, Oord et al. (2016b) have shown the effectiveness of an LSTM-style mechanism of the form $\tanh(X * W + b) \otimes \sigma(X * V + c)$ for the convolutional modeling of images. Later, Kalchbrenner et al. (2016) extended this mechanism with additional gates for use in translation and character-level language modeling.

Gated linear units are a simplified gating mechanism based on the work of Dauphin & Grangier (2015) for non-deterministic gates that reduce the vanishing gradient problem by having linear units coupled to the gates. This retains the non-linear capabilities of the layer while allowing the gradient to propagate through the linear unit without scaling. The gradient of the LSTM-style gating, which we dub the gated tanh unit (GTU), is

$$\nabla[\tanh(X) \otimes \sigma(X)] = \tanh'(X)\nabla X \otimes \sigma(X) + \sigma'(X)\nabla X \otimes \tanh(X). \quad (2)$$

Notice that it gradually vanishes as we stack layers because of the downscaling factors $\tanh'(X)$ and $\sigma'(X)$. In contrast, the gradient of the gated linear unit

$$\nabla[X \otimes \sigma(X)] = \nabla X \otimes \sigma(X) + X \otimes \sigma'(X)\nabla X \quad (3)$$

has a path $\nabla X \otimes \sigma(X)$ without downscaling for the activated gating units in $\sigma(X)$. This can be thought of as a multiplicative skip connection which helps gradients flow through the layers. We compare the different gating schemes experimentally in Section §5.2 and we find gated linear units allow for faster convergence to better perplexities.
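A quick one-dimensional numeric check of Eqs. 2-3 (ours, not from the paper) makes the difference visible: the GTU derivative is damped by both $\tanh'(x)$ and $\sigma'(x)$, while the GLU derivative keeps an undamped $\sigma(x)$ term where the gate is open.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6.0, 6.0, 5)
# d/dx [tanh(x) * sigmoid(x)]  (GTU, Eq. 2)
gtu_grad = (1 - np.tanh(x) ** 2) * sigmoid(x) \
    + sigmoid(x) * (1 - sigmoid(x)) * np.tanh(x)
# d/dx [x * sigmoid(x)]        (GLU, Eq. 3)
glu_grad = sigmoid(x) + x * sigmoid(x) * (1 - sigmoid(x))
print(np.round(gtu_grad, 3))  # vanishes at both saturation ends
print(np.round(glu_grad, 3))  # stays near 1 where the gate is open
```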
# 4. Experimental Setup

# 4.1. Datasets

We report results on two public large-scale language modeling datasets. First, the Google Billion Word dataset (Chelba et al., 2013) is considered one of the largest language modeling datasets with almost one billion tokens and a vocabulary of over 800K words. In this dataset, words appearing less than 3 times are replaced with a special unknown symbol. The data is based on an English corpus of 30,301,028 sentences whose order has been shuffled. Second, WikiText-103 is a smaller dataset of over 100M tokens with a vocabulary of about 200K words (Merity et al., 2016). Different from GBW, the sentences are consecutive which allows models to condition on larger contexts rather than single sentences. For both datasets, we add a beginning of sequence marker <S> at the start of each line and an end of sequence marker </S> at the end of each line. On the Google Billion Word corpus each sequence is a single sentence, while on WikiText-103 a sequence is an entire paragraph. The model sees <S> and </S> as input but only predicts the end of sequence marker </S>. We evaluate models by computing the perplexity $e^{\frac{1}{N}\sum_{i} -\log p(w_i \mid \ldots, w_{i-1})}$ on the standard held-out test portion of each dataset.
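As a sketch of the evaluation just described (with made-up numbers), perplexity is the exponentiated average negative log-probability of the held-out tokens:

```python
import math

# log p(w_i | ..., w_{i-1}) for each held-out token (hypothetical values)
log_probs = [-3.2, -1.1, -4.7, -0.9]
perplexity = math.exp(-sum(log_probs) / len(log_probs))
```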
# 4.2. Training

We implement our models in Torch (Collobert et al., 2011) and train on Tesla M40 GPUs. The majority of our models are trained on a single GPU, as we focused on identifying compact architectures with good generalization and efficient computation at test time. We trained larger models with an 8-GPU setup by copying the model onto each GPU and dividing the batch such that each worker computes 1/8th of the gradients. The gradients are then summed using Nvidia NCCL. The multi-GPU setup allowed us to train models with larger hidden units.

We train using Nesterov's momentum (Sutskever et al., 2013). While the cost in terms of memory is storing another vector of the size of the parameters, it increases the speed of convergence significantly with minimal additional
[Table 1 body: per-model layer configurations (lookup table size, residual convolution blocks per stage in the format [k, n] x repeats, and adaptive softmax cutoffs) for GCNN-13, GCNN-14B, GCNN-9, GCNN-8B, GCNN-8 on Google Billion Word and GCNN-14 on WikiText-103.]
Table 1. Architectures for the models. The residual building blocks are shown in brackets with the format [k, n]. "B" denotes bottleneck architectures.
computation compared to standard stochastic gradient descent. The speed of convergence was further increased with gradient clipping (Pascanu et al., 2013) and weight normalization (Salimans & Kingma, 2016).
Pascanu et al. (2013) argue for gradient clipping because it prevents the gradient explosion problem that characterizes RNNs. However, gradient clipping is not tied to RNNs, as it can be derived from the general concept of trust region methods. Gradient clipping is found using a spherical trust region

$$\Delta\theta^* = \operatorname*{argmin}_{\Delta\theta} f(\theta) + \nabla f^\top \Delta\theta \quad \text{s.t.} \quad \|\Delta\theta\| \le \epsilon$$
$$= -\epsilon \, \frac{\nabla f}{\max(\|\nabla f\|, \epsilon)}. \quad (4)$$

Empirically, our experiments converge significantly faster with the use of gradient clipping even though we do not use a recurrent architecture.

In combination, these methods led to stable and fast convergence with comparatively large learning rates such as 1.
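A numpy sketch of the clipped step implied by Eq. 4 (our code, with our variable names): the update is the raw gradient when its norm is below epsilon and is rescaled to norm epsilon otherwise.

```python
import numpy as np

def clipped_step(grad, epsilon):
    # Equals -grad when ||grad|| <= epsilon; otherwise a step of norm epsilon.
    return -epsilon * grad / max(np.linalg.norm(grad), epsilon)

g = np.array([3.0, 4.0])             # ||g|| = 5
print(clipped_step(g, epsilon=0.1))  # step of norm 0.1 along -g
```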
# 4.3. Hyper-parameters

We found good hyper-parameter configurations by cross-validating with random search on a validation set. For the model architecture, we select the number of residual blocks between {1, . . . , 10}, the size of the embeddings with {128, . . . , 256}, the number of units between {128, . . . , 2048}, and the kernel width between {3, . . . , 5}.

In general, finding a good architecture was simple and the rule of thumb is that the larger the model, the better the performance. In terms of optimization, we initialize the layers of the model with the Kaiming initialization (He et al., 2015b), with the learning rate sampled uniformly in the interval [1., 2.], the momentum set to 0.99, and clipping set to 0.1. Good hyper-parameters for the optimizer are quite straightforward to find and the optimal values do not change much between datasets.

# 5. Results

LSTMs and recurrent networks are able to capture long term dependencies and are fast becoming cornerstones in natural language processing. In this section, we compare strong LSTM and RNN models from the literature to our gated convolutional approach on two datasets.

We find the GCNN outperforms the comparable LSTM results on Google Billion Word. To accurately compare these approaches, we control for the same number of GPUs and the adaptive softmax output model (Grave et al., 2016a), as these variables have a significant influence on performance. In this setting, the GCNN reaches 38.1 test perplexity while the comparable LSTM has 39.8 perplexity (Table 2).

Further, the GCNN obtains strong performance with much greater computational efficiency. Figure 2 shows that our approach closes the previously significant gap between models that use the full softmax and models with the usually less accurate hierarchical softmax.
Model | Test PPL | Hardware
Sigmoid-RNN-2048 (Ji et al., 2015) | 68.3 | 1 CPU
Interpolated KN 5-Gram (Chelba et al., 2013) | 67.6 | 100 CPUs
Sparse Non-Negative Matrix LM (Shazeer et al., 2014) | 52.9 | -
RNN-1024 + MaxEnt 9 Gram Features (Chelba et al., 2013) | 51.3 | 24 GPUs
LSTM-2048-512 (Jozefowicz et al., 2016) | 43.7 | 32 GPUs
2-layer LSTM-8192-1024 (Jozefowicz et al., 2016) | 30.6 | 32 GPUs
BIG GLSTM-G4 (Kuchaiev & Ginsburg, 2017) | 23.3† | 8 GPUs
LSTM-2048 (Grave et al., 2016a) | 43.9 | 1 GPU
2-layer LSTM-2048 (Grave et al., 2016a) | 39.8 | 1 GPU
GCNN-13 | 38.1 | 1 GPU
GCNN-14 Bottleneck | 31.9 | 8 GPUs
Table 2. Results on the Google Billion Word test set. The GCNN outperforms the LSTMs with the same output approximation. († appeared after submission.)
[Figure 2: test perplexity as a function of MFlops for LSTM+Softmax and GCNN+AdaSoftmax.]
Figure 2. In comparison to the state-of-the-art (Jozefowicz et al., 2016) which uses the full softmax, the adaptive softmax approximation greatly reduces the number of operations required to reach a given perplexity.
Model | Test PPL | Hardware
LSTM-1024 (Grave et al., 2016b) | 48.7 | 1 GPU
GCNN-8 | 44.9 | 1 GPU
GCNN-14 | 37.2 | 4 GPUs
Table 3. Results for single models on the WikiText-103 dataset.
Thanks to the adaptive softmax, the GCNN only requires a fraction of the operations to reach the same perplexity values. The GCNN outperforms other single-model state-of-the-art approaches except the much larger LSTM of Jozefowicz et al. (2016), a model which requires more GPUs and the much more computationally expensive full softmax. In comparison, the largest model we have trained reaches 31.9 test perplexity compared to the 30.6 of that approach, but only requires training for 2 weeks on 8 GPUs compared to 3 weeks of training on 32 GPUs for the LSTM. Note that these results can be improved by either using mixtures of experts (Shazeer et al., 2017) or ensembles of these models.
Another relevant concern is whether the GCNN's fixed context size can thoroughly model long sequences. On Google Billion Word, the average sentence length is quite short, only 20 words. We evaluate on WikiText-103 to determine if the model can perform well on a dataset where much larger contexts are available. On WikiText-103, an input sequence is an entire Wikipedia article instead of an individual sentence, increasing the average length to 4000 words. However, the GCNN outperforms LSTMs on this problem as well (Table 3). The GCNN-8 model has 8 layers with 800 units each and the LSTM has 1024 units. These results show that GCNNs can model enough context to achieve strong results.
We evaluated on the Gigaword dataset following Chen et al. (2016) to compare with fully connected models. We found that the fully connected and convolutional network reach 55.6 and 29.4 perplexity, respectively. We also ran preliminary experiments on the much smaller Penn Treebank dataset. When we score the sentences independently, the GCNN and LSTM have comparable test perplexity with 108.7 and 109.3 respectively. However, it is possible to achieve better results by conditioning on previous sentences. Unlike the LSTM, we found that the GCNN overfits on this quite small dataset and so we note the model is better suited to larger scale problems.
# 5.1. Computational Efï¬ciency
Computational cost is an important consideration for language models. Depending on the application, there are a number of metrics to consider. We measure the throughput
[Figure 3: test perplexity vs. epochs on WikiText-103 (left) and vs. hours on Google Billion Word (right) for ReLU, GTU, and GLU activations.]
Figure 3. Learning curves on WikiText-103 (left) and Google Billion Word (right) for models with different activation mechanisms. Models with gated linear units (GLU) converge faster and to a lower perplexity.
Model | Throughput (CPU) | Throughput (GPU) | Responsiveness (GPU)
LSTM-2048 | 169 | 45,622 | 2,282
GCNN-9 | 121 | 29,116 | 29,116
GCNN-8 Bottleneck | 179 | 45,878 | 45,878
Table 4. Processing speed in tokens/s at test time for an LSTM with 2048 units and GCNNs achieving 43.9 perplexity on Google Billion Word. The GCNN with bottlenecks improves the responsiveness by 20 times while maintaining high throughput.
of a model as the number of tokens that can be processed per second. Throughput can be maximized by processing many sentences in parallel to amortize sequential operations. In contrast, responsiveness is the speed of processing the input sequentially, one token at a time. Throughput is important because it indicates the time required to process a corpus of text and responsiveness is an indicator of the time to finish processing a sentence. A model can have low responsiveness but high throughput by evaluating many sentences simultaneously through batching. In this case, such a model is slow in finishing processing individual sentences, but can process many sentences at a good rate.
We evaluate the throughput and responsiveness for models that reach approximately 43.9 perplexity on the Google Billion Word benchmark. We consider the LSTM with 2048 units in Table 2, a GCNN-8Bottleneck with 7 Resnet blocks that have a bottleneck structure as described by He et al. (2015a) and a GCNN-8 without bottlenecks. A bottleneck block wedges a k > 1 convolution between two k = 1 layers. This design reduces computational cost by reducing and increasing dimensionality with the k = 1 layers so that the convolution operates in a lower-dimensional space. Our results show that the use of bottleneck blocks is important to maintaining computational efficiency.
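A sketch of such a bottleneck block, reusing the CausalGLUConv1d sketch from Section 2; channel sizes and names are ours, not the paper's.

```python
import torch.nn as nn

class BottleneckGLUBlock(nn.Module):
    def __init__(self, channels, bottleneck, kernel_size):
        super().__init__()
        self.reduce = CausalGLUConv1d(channels, bottleneck, 1)            # k = 1: shrink
        self.conv = CausalGLUConv1d(bottleneck, bottleneck, kernel_size)  # k > 1
        self.expand = CausalGLUConv1d(bottleneck, channels, 1)            # k = 1: restore

    def forward(self, x):
        # Pre-activation residual: add the block input to its output.
        return x + self.expand(self.conv(self.reduce(x)))
```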
The throughput of the LSTM is measured by using a large batch of 750 sequences of length 20, resulting in 15,000 tokens per batch. The responsiveness is the average speed to process a sequence of 15,000 contiguous tokens. Table 4 shows that the throughput for the LSTM and the GCNN are similar. The LSTM performs very well on GPU because the large batch size of 750 enables high parallelization over different sentences. This is because the LSTM implementation has been thoroughly optimized and uses cuDNN, whereas the cuDNN implementation of convolutions has not been optimized for the 1-D convolutions we use in our model. We believe much better performance can be achieved by a more efficient 1-D cuDNN convolution. Unlike the LSTM, the GCNN can be parallelized both over sequences as well as across the tokens of each sequence, allowing the GCNN to have 20x higher responsiveness.
# 5.2. Gating Mechanisms
In this section, we compare the gated linear unit with other mechanisms as well as to models without gating. We consider the LSTM-style gating mechanism (GTU) $\tanh(X * W + b) \otimes \sigma(X * V + c)$ of Oord et al. (2016b) and networks that use regular ReLU or Tanh activations. Gating units add parameters, so for fair comparison, we carefully cross-validate models with a comparable number of parameters. Figure 3 (left) shows that GLU networks converge to a lower perplexity than the other approaches on WikiText-103. Similar to gated linear units, the ReLU has a linear path that lets the gradients easily pass through the active units. This translates to much faster convergence for both the ReLU and the GLU. On the other hand, neither Tanh nor GTU have this linear path, and thus suffer from the vanishing gradient problem. In the GTU, both the inputs as well as the gating units can cut the gradient when the units saturate.
Comparing the GTU and Tanh models allows us to measure
[Figure 4: test perplexity vs. context size for Google Billion Word (left) and Wiki-103 (right).]
Figure 4. Test perplexity as a function of context for Google Billion Word (left) and Wiki-103 (right). We observe that models with bigger context achieve better results but the results start diminishing quickly after a context of 20.
the effect of gating since the Tanh model can be thought of as a GTU network with the sigmoid gating units removed. The results (Figure 3, left) show that the gating units make a vast difference and provide useful modeling capabilities, as there is a large difference in the performance between GTU and Tanh units. Similarly, while the ReLU unit is not an exact ablation of the gating units in the GLU, it can be seen as a simplification $\mathrm{ReLU}(X) = X \otimes (X > 0)$ where the gates become active depending on the sign of the input. Also in this case, GLU units lead to lower perplexity.
In Figure 3 (right) we repeat the same experiment on the larger Google Billion Word dataset. We consider a fixed time budget of 100 hours because of the considerable training time required for this task. Similar to WikiText-103, the gated linear units achieve the best results on this problem. There is a gap of about 5 perplexity points between the GLU and ReLU which is similar to the difference between the LSTM and RNN models measured by Jozefowicz et al. (2016) on the same dataset.
[Figure 5: test perplexity vs. training hours on Google Billion Word for linear, bilinear, and GLU models.]
Figure 5. Learning curves on Google Billion Word for models with varying degrees of non-linearity.
# 5.3. Non-linear Modeling
The experiments so far have shown that the gated linear unit benefits from the linear path the unit provides compared to other non-linearities. Next, we compare networks with GLUs to purely linear networks and networks with bilinear layers in order to measure the impact of the non-linear path provided by the gates of the GLU. One motivation for this experiment is the success of linear models on many natural language processing tasks (Manning & Schütze, 1999). We consider deep linear convolutional networks where the layers lack the gating units of the GLU and take the form $h_l(X) = X * W + b$. Stacking several layers on top of each other is simply a factorization of the model which remains linear up to the softmax, at which point it becomes log-linear. Another variation of GLUs are bilinear layers (Mnih & Hinton, 2007) which take the form

$$h_l(X) = (X * W + b) \otimes (X * V + c).$$
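A tiny numpy check (ours) of the factorization remark: without gates, stacked linear layers collapse into a single linear map.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
x = rng.standard_normal(4)

stacked = W2 @ (W1 @ x)                  # two linear layers ...
collapsed = (W2 @ W1) @ x                # ... are one linear layer
print(np.allclose(stacked, collapsed))   # True
```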
Figure 5 shows that GLUs perform best, followed by bilinear layers and then linear layers. Bilinear layers improve over linear ones by more than 40 perplexity points, and the GLU improves another 20 perplexity points over the bilinear model. The linear model performs very poorly at perplexity 115 even compared to 67.6 of a Kneser-Ney 5-gram model, even though the former has access to more context. Surprisingly, the introduction of the bilinear units is enough to reach 61 perplexity on Google Billion Word, which surpasses both Kneser-Ney 5-gram models and the non-linear neural model of Ji et al. (2015).
# 5.4. Context Size
Figure 4 shows the impact of context size for the gated CNN. We tried different combinations of network depth and kernel widths for each context size and chose the best performing one for each size. Generally, larger contexts
improve accuracy but returns drastically diminish with windows larger than 40 words, even for WikiText-103 where we may condition on an entire Wikipedia article. This means that the unlimited context offered by recurrent models is not strictly necessary for language modeling. Furthermore, this finding is also congruent with the fact that good performance with recurrent networks can be obtained by truncating gradients after only 40 timesteps using truncated back propagation through time. Figure 4 also shows that WikiText-103 benefits much more from larger context size than Google Billion Word as the performance degrades more sharply with smaller contexts. WikiText-103 provides much more context than Google Billion Word where the average sentence size is 20. However, while the average size of the documents is close to 4000 tokens, we find that strong performance can be achieved with a context size as low as 30 tokens.
# 5.5. Training

In this section, we perform an ablation study of the impact of weight normalization and gradient clipping. We separately cross-validate the hyper-parameters of each configuration to make the comparison fair. Due to the high cost of each of these experiments, we only consider a single iteration over the training data. Figure 6 shows that both methods significantly speed up convergence. Weight normalization in particular improves the speed by over two times. This speedup is partly due to the ability to use much larger learning rates (1 instead of 0.01) than would otherwise be possible. Both clipping and weight normalization add computational overhead, but it is minor compared to the large gains in convergence speed.

[Figure 6: test perplexity vs. training updates on Google Billion Word for models trained without clipping, without weight normalization, and with both.]

Figure 6. Effect of weight normalization and gradient clipping on Google Billion Word.

# 6. Conclusion

We introduce a convolutional neural network for language modeling with a novel gating mechanism. Compared to recurrent neural networks, our approach builds a hierarchical representation of the input words that makes it easier to capture long-range dependencies, similar in spirit to the tree-structured analysis of linguistic grammar formalisms. The same property eases learning since features are passed through a fixed number of layers and non-linearities, unlike for recurrent networks where the number of processing steps differs depending on the position of the word in the input. The results show that our gated convolutional network achieves a new state of the art on WikiText-103. On the Google Billion Word benchmark, we show competitive results can be achieved with significantly fewer resources.

# Acknowledgments

We would like to thank Ben Graham, Jonas Gehring, Edouard Grave, Armand Joulin and Ronan Collobert for helpful discussions.
# References
Bengio, Yoshua, Ducharme, Réjean, Vincent, Pascal, and Jauvin, Christian. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003.
Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, and Robinson, Tony. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
Chen, Stanley F and Goodman, Joshua. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, pp. 310â318. Association for Computational Lin- guistics, 1996.
Chen, Wenlin, Grangier, David, and Auli, Michael. Strategies for training large vocabulary neural language models. CoRR, abs/1512.04906, 2016.
Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clement. Torch7: A Matlab-like Environment for Machine Learning. In BigLearn, NIPS Workshop, 2011. URL http://torch.ch.
Dauphin, Yann N and Grangier, David. Predicting distributions with linearizing belief networks. arXiv preprint arXiv:1511.05622, 2015.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS, 2010.
Grave, E., Joulin, A., Cissé, M., Grangier, D., and Jégou, H. Efficient softmax approximation for GPUs. ArXiv e-prints, September 2016a.
Grave, E., Joulin, A., and Usunier, N. Improving Neural Language Models with a Continuous Cache. ArXiv e-prints, December 2016b.
Gutmann, Michael and Hyvärinen, Aapo. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 2010.
Oord, Aaron van den, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015a.
Oord, Aaron van den, Kalchbrenner, Nal, Vinyals, Oriol, Espe- holt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Condi- tional image generation with pixelcnn decoders. arXiv preprint arXiv:1606.05328, 2016b.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectiï¬ers: Surpassing human-level perfor- mance on imagenet classiï¬cation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026â1034, 2015b.
Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difï¬culty of training recurrent neural networks. In Proceedings of The 30th International Conference on Machine Learning, pp. 1310â1318, 2013.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Ji, Shihao, Vishwanathan, SVN, Satish, Nadathur, Anderson, Michael J, and Dubey, Pradeep. Blackout: Speeding up recur- rent neural network language models with very large vocabu- laries. arXiv preprint arXiv:1511.06909, 2015.
Shazeer, Noam, Pelemans, Joris, and Chelba, Ciprian. Skip-gram language modeling using sparse non-negative matrix probabil- ity estimation. arXiv preprint arXiv:1412.1454, 2014.
Jozefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Shazeer, Noam, Mirhoseini, Azalia, Maziarz, Krzysztof, Davis, Andy, Le, Quoc V., Hinton, Geoffrey E., and Dean, Jeff. Out- rageously large neural networks: The sparsely-gated mixture- of-experts layer. CoRR, abs/1701.06538, 2017. URL http: //arxiv.org/abs/1701.06538.
Kalchbrenner, Nal, Espeholt, Lasse, Simonyan, Karen, van den Oord, Aaron, Graves, Alex, and Kavukcuoglu, Koray. Neural Machine Translation in Linear Time. arXiv, 2016.
Steedman, Mark. The syntactic process. 2002.
Kneser, Reinhard and Ney, Hermann. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pp. 181–184. IEEE, 1995.
Koehn, Philipp. Statistical Machine Translation. Cambridge Uni- versity Press, New York, NY, USA, 1st edition, 2010. ISBN 0521874157, 9780521874151.
Sutskever, Ilya, Martens, James, Dahl, George E, and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Wang, Mingxuan, Lu, Zhengdong, Li, Hang, Jiang, Wenbin, and Liu, Qun. genCNN: A convolutional architecture for word sequence prediction. CoRR, abs/1503.05034, 2015. URL http://arxiv.org/abs/1503.05034.
Kuchaiev, Oleksii and Ginsburg, Boris. Factorization tricks for LSTM networks. CoRR, abs/1703.10722, 2017. URL http: //arxiv.org/abs/1703.10722.
Yu, Dong and Deng, Li. Automatic Speech Recognition: A Deep Learning Approach. Springer Publishing Company, Incorpo- rated, 2014. ISBN 1447157788, 9781447157786.
LeCun, Yann and Bengio, Yoshua. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.
Manning, Christopher D and Schütze, Hinrich. Foundations of statistical natural language processing, 1999.
Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer Sen- tinel Mixture Models. ArXiv e-prints, September 2016.
Mikolov, Tomáš, Karafiát, Martin, Burget, Lukáš, Černocký, Jan, and Khudanpur, Sanjeev. Recurrent Neural Network based Language Model. In Proc. of INTERSPEECH, pp. 1045–1048, 2010.
Mnih, Andriy and Hinton, Geoffrey. Three new graphical models for statistical language modelling. In Proceedings of the 24th international conference on Machine learning, pp. 641â648. ACM, 2007.
Morin, Frederic and Bengio, Yoshua. Hierarchical probabilistic neural network language model. In Aistats, volume 5, pp. 246–252. Citeseer, 2005. | {
"id": "1511.06909"
} |
1612.07837 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 |
Published as a conference paper at ICLR 2017
SAMPLERNN: AN UNCONDITIONAL END-TO-END NEURAL AUDIO GENERATION MODEL
Soroush Mehri (University of Montreal), Kundan Kumar (IIT Kanpur), Ishaan Gulrajani (University of Montreal), Shubham Jain (IIT Kanpur), Jose Sotelo (University of Montreal), Aaron Courville (University of Montreal, CIFAR Fellow), Yoshua Bengio (University of Montreal, CIFAR Senior Fellow)
# ABSTRACT
In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure is able to capture underlying sources of variations in the temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
# 1 INTRODUCTION
Audio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generation is that there is often a very large discrepancy between the dimensionality of the raw audio signal and that of the effective semantic-level signal. Consider the task of speech synthesis, where we are typically interested in generating utterances corresponding to full sentences. Even at a relatively low sample rate of 16kHz, on average we will have 6,000 samples per word generated. 1
Traditionally, the high dimensionality of the raw audio signal is dealt with by first compressing it into spectral or hand-engineered features and defining the generative model over these features. However, when the generated signal is eventually decompressed into audio waveforms, the sample quality is often degraded and requires extensive domain-expert corrective measures. This results in complicated signal processing pipelines that are difficult to adapt to new tasks or domains. Here we propose a step in the direction of replacing these handcrafted systems.
In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependencies in audio data. We believe RNNs are well suited, as they have been designed for such sequential tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practice these models are known not to scale well at such a high temporal resolution as is found when generating acoustic signals one sample at a time, e.g., 16,000 times per second. This is one of the reasons that Oord et al. (2016) profits from other neural modules, such as the one presented by Yu & Koltun (2015), to show extremely good performance.
In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presented while keeping all the computations tractable.2 Since our model has different modules operating at different clock-rates (which is in contrast to WaveNet), we have the ï¬exibility in allocating the amount of computational resources in modeling different levels of abstraction. In particular, we can potentially allocate very limited resource to the module responsible for sample level alignments
1 Statistics based on the average speaking rate of a set of TED talk speakers: http://sixminutes.dlugan.com/speaking-rate/

2 Code: https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples: https://soundcloud.com/samplernn/sets
operating at the clock-rate equivalent to sample-rate of the audio, while allocating more resources in modeling dependencies which vary very slowly in audio, for example identity of phoneme being spoken. This advantage makes our model arbitrarily ï¬exible in handling sequential dependencies at multiple levels of abstraction.
Hence, our contribution is threefold:
1. We present a novel method that utilizes RNNs at different scales to model longer term dependencies in audio waveforms while training on short sequences, which results in memory efficiency during training.

2. We extensively explore and compare variants of models achieving the above effect.

3. We study and empirically evaluate the impact of different components of our model on three audio datasets. Human evaluation also has been conducted to test these generative models.
# 2 SAMPLERNN MODEL
In this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms. SampleRNN models the probability of a sequence of waveform samples X = {x1, x2, . . . , xT } (a random variable over input data sequences) as the product of the probabilities of each sample conditioned on all previous samples:
$$p(X) = \prod_{i=0}^{T-1} p(x_{i+1} \mid x_1, \ldots, x_i) \quad (1)$$
RNNs are commonly used to model sequential data which can be formulated as:
$$h_t = \mathcal{H}(h_{t-1}, x_{i=t}) \quad (2)$$
$$p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(h_t)) \quad (3)$$
with H being one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014), Long Short Term Memory Units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep varia- tions (Section 3). However, raw audio signals are challenging to model because they contain struc- ture at very different scales: correlations exist between neighboring samples as well as between ones thousands of samples apart.
SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at a different temporal resolution. The lowest module processes individual samples, and each higher module operates on an increasingly longer timescale and a lower temporal resolution. Each module conditions the module below it, with the lowest module outputting sample-level predictions. The entire hierarchy is trained jointly end-to-end by backpropagation.
2.1 FRAME-LEVEL MODULES
Rather than operating on individual samples, the higher-level modules in SampleRNN operate on non-overlapping frames of $FS^{(k)}$ ("Frame Size") samples at the k-th level up in the hierarchy at a time (frames denoted by $f^{(k)}$). Each frame-level module is a deep RNN which summarizes the history of its inputs into a conditioning vector for the next module downward.
The variable number of frames we condition upon up to timestep $t-1$ is expressed by a fixed-length hidden state or memory $h_t^{(k)}$, where $t$ is related to the clock rate at that tier. The RNN makes a memory update at timestep $t$ as a function of the previous memory $h_{t-1}^{(k)}$ and an input $inp_t^{(k)}$. This input for the top tier $k = K$ is simply the input frame. For intermediate tiers ($1 < k < K$) this input is a linear combination of the conditioning vector from the higher tier and the current input frame. See Eqs. 4–5.
Because different modules operate at different temporal resolutions, we need to upsample each vector $c$ at the output of a module into a series of $r^{(k)}$ vectors (where $r^{(k)}$ is the ratio between the temporal resolutions of the modules) before feeding it into the input of the next module downward (Eq. 6). We do this with a set of $r^{(k)}$ separate linear projections.
[Figure 1: snapshot of the unrolled three-tier model; tier 3 consumes input frames and conditions tier 2, whose outputs condition the sample-level tier 1, which emits $p(x_{i+32} \mid x_{<i+32})$ and subsequent sample distributions.]
Figure 1: Snapshot of the unrolled model at timestep i with K = 3 tiers. As a simpliï¬cation only one RNN and up-sampling ratio r = 4 is used for all tiers.
Here we are formalizing the frame-level module in tier $k$. Note that the following equations are exclusive to tier $k$ and timestep $t$ for that specific tier. To increase readability, unless necessary the superscript $(k)$ is not shown for $t$, $inp^{(k)}$, $W_x^{(k)}$, $h^{(k)}$, and related symbols.
$$inp_t^{(k)} = \begin{cases} W_x f_t^{(k)} + c_t^{(k+1)}; & 1 < k < K \\ f_t^{(k=K)}; & k = K \end{cases} \quad (4)$$

$$h_t = \mathcal{H}(h_{t-1}, inp_t) \quad (5)$$

$$c_{(t-1)\cdot r + j}^{(k)} = W_j h_t; \quad 1 \le j \le r \quad (6)$$
Our approach of upsampling with $r^{(k)}$ linear projections is exactly equivalent to upsampling by adding zeros and then applying a linear convolution. This is sometimes called "perforated" upsampling in the context of convolutional neural networks (CNNs). It was first demonstrated to work well in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.
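A numpy sketch (ours) of the upsampling in Eq. 6: one frame-level state $h$ is expanded into $r$ conditioning vectors with $r$ separate linear maps, which is the same as zero-stuffing followed by a linear convolution.

```python
import numpy as np

r, dim = 4, 8
W = np.random.randn(r, dim, dim)       # r separate projections W_j

def upsample(h):
    # h: (dim,) hidden state of one frame -> r conditioning vectors c
    return [W[j] @ h for j in range(r)]

conditioning = upsample(np.random.randn(dim))
```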
2.2 SAMPLE-LEVEL MODULE
The lowest module (tier $k = 1$; Eqs. 7–9) in the SampleRNN hierarchy outputs a distribution over a sample $x_{i+1}$, conditioned on the $FS^{(1)}$ preceding samples as well as a vector $c_i^{(k=2)}$ from the next higher module which encodes information about the sequence prior to that frame. As $FS^{(1)}$ is usually a small value and correlations in nearby samples are easy to model by a simple memoryless module, we implement it with a multilayer perceptron (MLP) rather than an RNN, which slightly speeds up the training. Assuming $e_i$ represents $x_i$ after passing through the embedding layer (Section 2.2.1), the conditional distribution in Eq. 1 can be achieved by the following equations; for further clarity, two consecutive sample-level frames are shown. In addition, $W_x$ in Eq. 8 is simply used to linearly combine a frame and the conditioning vector from above.
$$f_{i-1}^{(1)} = flatten([e_{i-FS^{(1)}}, \ldots, e_{i-1}])$$
$$f_i^{(1)} = flatten([e_{i-FS^{(1)}+1}, \ldots, e_i]) \quad (7)$$

$$inp_i^{(1)} = W_x^{(1)} f_i^{(1)} + c_i^{(2)} \quad (8)$$

$$p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(inp_i^{(1)})) \quad (9)$$
We use a Softmax because we found that better results were obtained by discretizing the audio signals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution rather than using a Gaussian or Gaussian mixture to represent the conditional density of the original real-valued signal. When processing an audio sequence, the MLP is convolved over the sequence, processing
each window of $FS^{(1)}$ samples and predicting the next sample. At generation time, the MLP is run repeatedly to generate one sample at a time. Table 1 shows a considerable gap between the baseline model RNN and this model, suggesting that the proposed hierarchically structured architecture of SampleRNN makes a big difference.
2.2.1 OUTPUT QUANTIZATION
The sample-level module models its output as a q-way discrete distribution over possible quantized values of $x_i$ (that is, the output layer of the MLP is a q-way Softmax).
To demonstrate the importance of a discrete output distribution, we apply the same architecture on real-valued data by replacing the q-way Softmax with a Gaussian mixture model (GMM) output distribution. Table 2 shows that our model outperforms an RNN baseline even when both models use real-valued outputs. However, samples from the real-valued model are almost indistinguishable from random noise.
In this work we use linear quantization with q = 256, corresponding to a per-sample bit depth of 8. Unintuitively, we realized that even linearly decreasing the bit depth (resolution of each audio sample) from 16 to 8 can ease the optimization procedure while generated samples still have reasonable quality and are artifact-free.
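A numpy sketch of linear 8-bit quantization (q = 256) of samples in [-1, 1]; the exact binning used in the paper may differ slightly.

```python
import numpy as np

def quantize(x, q=256):
    x = np.clip(x, -1.0, 1.0)
    return ((x + 1.0) / 2.0 * (q - 1)).astype(np.int64)   # integers in [0, q-1]

def dequantize(idx, q=256):
    return idx.astype(np.float64) / (q - 1) * 2.0 - 1.0

print(quantize(np.array([-1.0, -0.5, 0.0, 0.999])))       # [  0  63 127 254]
```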
In addition, early on we noticed that the model can achieve better performance and generation quality when we embed the quantized input values before passing them through the sample-level MLP (see Table 4). The embedding step maps each of the q discrete values to a real-valued vector embedding. However, real-valued raw samples are still used as input to the higher modules.
2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS
To demonstrate the importance of a sample-level autoregressive module, we try replacing it with "Multi-Softmax" (see Table 4), where the prediction of each sample $x_i$ depends only on the conditioning vector $c$ from Eq. 9. In this configuration, the model outputs an entire frame of $FS^{(1)}$ samples at a time, modeling all samples in a frame as conditionally independent of each other. We find that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores significantly worse in terms of log-likelihood and fails to generate convincing samples. This suggests that modeling the joint distribution of the acoustic samples inside each frame is very important in order to obtain good acoustic generation. We found this to be true even when the frame size is reduced, with best results always with a frame size of 1, i.e., generating only one acoustic sample at a time.
# 2.3 TRUNCATED BPTT
Training recurrent neural networks on long sequences can be very computationally expensive. Oord et al. (2016) avoid this problem by using a stack of dilated convolutions instead of any recurrent connections. However, when they can be trained efficiently, recurrent networks have been shown to be very powerful and expressive sequence models. We enable efficient training of our recurrent model using truncated backpropagation through time, splitting each sequence into short subsequences and propagating gradients only to the beginning of each subsequence. We experiment with different subsequence lengths and demonstrate that we are able to train our networks, which model very long-term dependencies, despite backpropagating through relatively short subsequences.
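A PyTorch-style sketch (ours) of this training scheme; `model`, `loss_fn`, and `optimizer` are placeholders. The hidden state is carried across subsequences but detached so gradients stop at subsequence boundaries.

```python
def tbptt_updates(model, loss_fn, optimizer, sequence, subseq_len=512):
    hidden = None            # the model is assumed to accept None as initial state
    for start in range(0, len(sequence) - 1, subseq_len):
        chunk = sequence[start:start + subseq_len + 1]
        inputs, targets = chunk[:-1], chunk[1:]
        output, hidden = model(inputs, hidden)
        loss = loss_fn(output, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        hidden = hidden.detach()   # keep the state, cut the gradient
```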
Table 3 shows that by increasing the subsequence length, performance substantially increases, alongside train-time memory usage and convergence time. Yet it is noteworthy that our best models have been trained on subsequences of length 512, which corresponds to 32 milliseconds, a small fraction of the length of a single phoneme of human speech, while generated samples exhibit longer word-like structures.
Despite the aforementioned fact, this generative model can mimic the existing long-term structure of the data, which results in more natural and coherent samples that are preferred by human listeners. (More on this in Sections 3.2–3.3.) This is due to the fast updates from TBPTT and specialized frame-level modules (Section 2.1), with top tiers designed to model a lower resolution of the signal while leaving the process of filling in the details to lower tiers.
# 3 EXPERIMENTS AND RESULTS
In this section we introduce three datasets which have been chosen to evaluate the proposed architecture for modeling raw acoustic sequences. The description of each dataset and its preprocessing is as follows:
- Blizzard, a dataset presented by Prahallad et al. (2013) for the speech synthesis task, contains 315 hours of a single female voice actor in English; however, for our experiments we are using only 20.5 hours. The training/validation/test split is 86%-7%-7%.

- Onomatopoeia3, a relatively small dataset with 6,738 sequences adding up to 3.5 hours, is human vocal sounds like grunting, screaming, panting, heavy breathing, and coughing. Diversity of sound type and the fact that these sounds were recorded from 51 actors and many categories makes it a challenging task. To add to that, this data is extremely unbalanced. The training/validation/test split is 92%-4%-4%.

- Music dataset is the collection of all 32 Beethoven's piano sonatas publicly available on https://archive.org/ amounting to 10 hours of non-vocal audio. The training/validation/test split is 88%-6%-6%.
See Fig. 2 for a visual demonstration of examples from datasets and generated samples. For all the datasets we are using a 16 kHz sample rate and 16 bit depth. For the Blizzard and Music datasets, preprocessing simply amounts to chunking the long audio files into 8-second-long sequences on which we will perform truncated backpropagation through time. Each sequence in the Onomatopoeia dataset is a few seconds long, ranging from 1 to 11 seconds. To train the models on this dataset, zero-padding has been applied to make all the sequences in a mini-batch have the same length, and corresponding cost values (for the predictions over the added 0s) are ignored when computing the gradients.
We particularly explored two gated variants of RNNsâGRUs and LSTMs. For the case of LSTMs, the forget gate bias is initialized with a large positive value of 3, as recommended by Zaremba (2015) and Gers (2001), which has been shown to be beneï¬cial for learning long-term dependencies.
As for models that take real-valued input, e.g. the RNN-GMM and SampleRNN-GMM (with 4 components), normalization is applied per audio sample with the global mean and standard deviation obtained from the train split. For most of our experiments where the model demands discrete input, binning was applied per audio sample.
All the models have been trained with teacher forcing and stochastic gradient descent (mini-batch size 128) to minimize the Negative Log-Likelihood (NLL) in bits per dimension (per audio sample). Gradients were hard-clipped to remain in the [-1, 1] range. Update rules from the Adam optimizer (Kingma & Ba, 2014) ($\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 1e{-8}$) with an initial learning rate of 0.001 were used to adjust the parameters. For training each model, random search over hyper-parameter values (Bergstra & Bengio, 2012) was conducted. The initial RNN state of all the RNN-based models was always learnable. Weight Normalization (Salimans & Kingma, 2016) has been used for all the linear layers in the model (except for the embedding layer) to accelerate the training procedure. The size of the embedding layer was 256, initialized from a standard normal distribution. Orthogonal weight matrices were used for hidden-to-hidden connections and other weight matrices were initialized similar to He et al. (2015). In the final model, we found GRUs to work best (slightly better than LSTMs). 1024 was the number of hidden units for all GRUs (1 layer per tier for the 3-tier and 3 layers for the 2-tier model) and MLPs (3 fully connected layers with ReLU activation, with output dimension being 1024 for the first two layers and 256 for the final layer before the softmax). Also $FS^{(1)} = FS^{(2)} = 2$ and $FS^{(3)} = 8$ were found to result in the lowest NLL.
3.1 WAVENET RE-IMPLEMENTATION
We implemented the WaveNet architecture as described in Oord et al. (2016). Ideally, we would have liked to replicate their model exactly but owing to missing details of architecture and hyper-parameters, as well as limited compute power at our disposal, we made our own design choices so that the model would fit on a single GPU while having a receptive field of around 250 milliseconds,
3Courtesy of Ubisoft
[Figure 2: waveform examples from Blizzard, Onomatopoeia, and Music, comparing ground truth and model samples at two zoom levels.]
Figure 2: Examples from the datasets compared to samples from our models. In the ï¬rst 3 rows, 2 seconds of audio are shown. In the bottom 3 rows, 100 milliseconds of audio are shown. Rows 1 and 4 are ground truth from which one can see how the datasets look different and have complex structure in low resolution which the frame-level component of the SampleRNN is designed to capture. Samples also to some extent mimic the same global structure. At the same time, zoomed-in samples of our model shows that it can perfectly resemble the high resolution structure present in the data as well.
Table 1: Test NLL in bits for three presented datasets.

Model | Blizzard | Onomatopoeia | Music
RNN (Eq. 2) | 1.434 | 2.034 | 1.410
WaveNet (re-impl.) | 1.480 | 2.285 | 1.464
SampleRNN (2-tier) | 1.392 | 2.026 | 1.076
SampleRNN (3-tier) | 1.387 | 1.990 | 1.159
Table 2: Average NLL on Blizzard test set for real-valued models.

Model | Average Test NLL
RNN-GMM | -2.415
SampleRNN-GMM (2-tier) | -2.782
Table 3: Effect of subsequence length on NLL (bits per audio sample) computed on the Blizzard validation set.

Subsequence Length | 32 | 64 | 128 | 256 | 512
NLL Validation | 1.575 | 1.468 | 1.412 | 1.391 | 1.364
Table 4: Test (validation) set NLL (bits per audio sample) for Blizzard. Variants of SampleRNN are provided to compare the contribution of each component in performance.

Model | NLL Test (Validation)
SampleRNN (2-tier) | 1.392 (1.369)
Without Embedding | 1.566 (1.539)
Multi-Softmax | 1.685 (1.656)
while having a reasonable number of updates per unit time. Although our model is very similar to WaveNet, the design choices, e.g. number of convolution filters in each dilated convolution layer, length of target sequence to train on simultaneously (one can train with a single target with all samples in the receptive field as input or with a target sequence of length T with input of size receptive field + T - 1), batch size, etc. might make our implementation different from what the authors have done in the original WaveNet model. Hence, we note here that although we did our best at exactly reproducing their results, there are very likely differences between the hyper-parameter choices of our implementation and those of the authors.
For our WaveNet implementation, we have used 4 dilated convolution blocks, each having 10 dilated convolution layers with dilations 1, 2, 4, 8, up to 512. Hence, our network has a receptive field of 4092 acoustic samples, i.e. the parameters of the multinomial distribution of the sample at time step t are $p(x_i) = f_\theta(x_{i-1}, x_{i-2}, \ldots, x_{i-4092})$ where $\theta$ are the model parameters. We train on a target sequence length of 1600 and use a batch size of 8. Each dilated convolution filter has size 2 and the number of output channels is 64 for each dilated convolutional layer (128 filters in total due to the gated non-linearity). We trained this model using the Adam optimizer with a fixed global learning rate of 0.001 for the Blizzard dataset and 0.0001 for the Onomatopoeia and Music datasets. We trained these models for about one week on a GeForce GTX TITAN X. We dropped the learning rate in the Blizzard experiment to 0.0001 after around 3 days of training.
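A quick check (ours) of the receptive-field figure quoted above, for 4 blocks of filter-size-2 convolutions with dilations 1, 2, ..., 512:

```python
blocks = 4
dilations = [2 ** i for i in range(10)]   # 1, 2, 4, ..., 512
filter_size = 2
receptive_field = 1 + blocks * sum((filter_size - 1) * d for d in dilations)
print(receptive_field)  # 4093, i.e. roughly the 4092 samples quoted
# (the off-by-one depends on whether the current sample is counted)
```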
3.2 HUMAN EVALUATION
Apart from reporting NLL, we conducted AB preference tests for random samples from four models trained on the Blizzard dataset. For unconditional generation of speech, which at best sounds like mumbling, this type of test is the one which is best suited. Competing models were the RNN, SampleRNN (2-tier), SampleRNN (3-tier), and our implementation of WaveNet. The rest of the models were excluded as the quality of their samples was definitely lower, and also to keep the number of pairwise comparison tests manageable. We will release the samples that have been used in this test too.
All the samples were set to have the same volume. Every user is then shown a set of twenty pairs of samples, with one random pair at a time. Each pair had samples from two different models. The human evaluator is asked to listen to the samples and had the option of choosing between the two models or choosing not to prefer any of them. Hence, we have a quantification of preference between every pair of models. We used the online tool made publicly available by Jillings et al. (2015).
Results in Fig. 3 clearly point out that SampleRNN (3-tier) is the winner by a huge margin in terms of preference by human raters, then SampleRNN (2-tier) and afterward the two other models, which matches the performance comparison in Table 1.
The same evaluation was conducted for the Music dataset, except for an additional filtering process of samples. Specific to this dataset only, we observed that a batch of generated samples from the competing models (this time restricted to the RNN, SampleRNN (2-tier), and SampleRNN (3-tier)) was either music-like or random noise. For all these models we only considered random samples that were not random noise. Fig. 4 presents the results of the human evaluation on the Music dataset.
Figure 3: Pairwise comparison of the 4 best models, based on votes from listeners, conducted on samples generated from models trained on the Blizzard dataset.
Figure 4: Pairwise comparison of the 3 best models, based on votes from listeners, conducted on samples generated from models trained on the Music dataset.
3.3 QUANTIFYING INFORMATION RETENTION
For the last experiment, we are interested in measuring the memory span of the model. We trained our model, SampleRNN (3-tier), with the best hyper-parameters on a dataset of 2 speakers reading audio books, one male and one female, with mean fundamental frequencies of 125.3 and 201.8 Hz respectively. Each speaker has roughly 10 hours of audio in the dataset, which was preprocessed similarly to Blizzard. We observed that the model learned to stay consistent, generating samples from the same speaker without having any knowledge of the speaker ID or any other conditioning information. This effect is more apparent here in comparison to the unbalanced Onomatopoeia dataset, which sometimes mixes two different categories of sounds.
Another experiment was conducted to test the effect of memory and study the effective memory horizon. We inject 1 second of silence in the middle of the sampling procedure in order to see whether the model will remember to generate from the same speaker. Initially, when sampling, we let the model generate 2 seconds of audio as it normally does. From 2 to 3 seconds, instead of feeding back the generated sample at each timestep, a silent token (zero amplitude) is fed. From 3 to 5 seconds we again sample normally, feeding back the generated token.
We classified the first and last 2 seconds of each sample based on the mean fundamental frequency of the speakers. In 83% of samples, SampleRNN generated from the same person in the two separate segments.
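The paper does not specify how this classification was implemented; below is a rough Python sketch, under our own assumptions (autocorrelation-based F0 estimation, illustrative frame sizes and search range), of how a segment could be assigned to the closer of the two speakers' mean fundamental frequencies.

```python
# A rough sketch (our assumption of how such a check could work, not the
# authors' code): estimate the mean fundamental frequency of a segment via
# autocorrelation and assign it to the closer of the two speakers' mean F0s.
import numpy as np

def estimate_f0(frame, sr=16000, fmin=50.0, fmax=400.0):
    """Very simple autocorrelation-based F0 estimate for one frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)      # plausible pitch-period lags
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def classify_speaker(segment, sr=16000, f0_male=125.3, f0_female=201.8):
    frames = np.array_split(segment, max(1, len(segment) // 1024))
    mean_f0 = np.mean([estimate_f0(f, sr) for f in frames if len(f) > 256])
    return "male" if abs(mean_f0 - f0_male) < abs(mean_f0 - f0_female) else "female"

# Example: a synthetic 150 Hz tone is classified as the male speaker.
sr = 16000
t = np.arange(2 * sr) / sr
print(classify_speaker(np.sin(2 * np.pi * 150.0 * t), sr))   # -> "male"
```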
This is in contrast to a model with a fixed past window like WaveNet, where injecting 16000 silent tokens (3.3 times the receptive field size) is equivalent to generating from scratch, which has a 50% chance (assuming each 2-second segment is coherent and not a mixed sound of two speakers).
# 4 RELATED WORK
Our work is related to earlier work on auto-regressive multi-layer neural networks, starting with Bengio & Bengio (1999), then NADE (Larochelle & Murray, 2011) and more recently PixelRNN (van den Oord et al., 2016). Similar to how they tractably model the joint distribution over units of the data (e.g. words in sentences, pixels in images, etc.) through an auto-regressive decomposition, we transform the joint distribution of acoustic samples using Eq. 1.
The idea of having part of the model running at different clock rates is related to multi-scale RNNs (Schmidhuber, 1992; El Hihi & Bengio, 1995; Koutnik et al., 2014; Sordoni et al., 2015; Serban et al., 2016).
Chung et al. (2015) also attempt to model raw audio waveforms which is in contrast to traditional approaches which use spectral features as in Tokuda et al. (2013), Bertrand et al. (2008), and Lee et al. (2009).
Our work is closely related to WaveNet (Oord et al., 2016), which is why we have made the above comparisons; this also makes it interesting to study the effect of adding higher-level RNN stages working at a lower resolution. Similar to that work, our models generate one acoustic sample at a time conditioned on all previously generated samples. We also share the preprocessing step of quantizing the acoustics into bins. Unlike that model, we have different modules in our models running at different clock-rates. In contrast to WaveNets, we mitigate the problem of long-term dependency with hierarchical structure and by using stateful RNNs, i.e. we always propagate hidden states to the next training sequence, although the gradient of the loss does not take into account the samples in the previous training sequence.
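A minimal PyTorch sketch of this stateful truncated-BPTT scheme (our illustration only; the original code is in Theano, and the toy model and sizes here are assumptions): the hidden state is carried across consecutive training subsequences of one long waveform, but detached so gradients never flow into the previous subsequence.

```python
# Stateful truncated BPTT: keep the hidden state across subsequences,
# truncate the gradient at subsequence boundaries.
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=1, hidden_size=64, batch_first=True)
readout = nn.Linear(64, 256)             # 256-way softmax over quantized samples
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()))

waveform = torch.randint(0, 256, (1, 8000))   # stand-in for one training file
subseq_len, hidden = 512, None
for start in range(0, waveform.size(1) - subseq_len - 1, subseq_len):
    x = waveform[:, start:start + subseq_len].float().unsqueeze(-1) / 255.0
    y = waveform[:, start + 1:start + subseq_len + 1]
    out, hidden = rnn(x, hidden)
    loss = nn.functional.cross_entropy(readout(out).transpose(1, 2), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    hidden = hidden.detach()   # keep the state, truncate the gradient
```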
# 5 DISCUSSION AND CONCLUSION
We propose a novel model that can address unconditional audio generation in the raw acoustic domain, which until recently has typically been done with hand-crafted features. We are able to show that a hierarchy of time scales and frequent updates help to overcome the problem of modeling extremely high-resolution temporal data. That allows us, for this particular application, to learn the data manifold directly from audio samples. We show that this model can generalize well and generate samples on three datasets that are different in nature. We also show that the samples generated by this model are preferred by human raters.
Success in this application, with a general-purpose solution as proposed here, opens up room for further improvement when specific domain knowledge is applied. This method, although proposed with the audio generation application in mind, can easily be adapted to other tasks that require learning the representation of sequential data with high temporal resolution and long-range complex structure.
# ACKNOWLEDGMENTS
The authors would like to thank João Felipe Santos and Kyle Kastner for insightful comments and discussion. We would like to thank the Theano Development Team (2016)4 and MILA staff. We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Jose Sotelo also thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT) as well as the Secretaría de Educación Pública (SEP) for their support. This work was a collaboration with Ubisoft.
# 4http://deeplearning.net/software/theano/
# REFERENCES
Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pp. 400-406, 1999.

James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281-305, 2012.

Alexander Bertrand, Kris Demuynck, Veronique Stouten, et al. Unsupervised learning of auditory filter banks using non-negative matrix factorisation. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4713-4716. IEEE, 2008.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pp. 2980-2988, 2015.

Alexey Dosovitskiy, Jost Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning to generate chairs, tables and cars with convolutional networks. 2016.

Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In NIPS, volume 400, pp. 409. Citeseer, 1995.

Felix Gers. Long short-term memory in recurrent neural networks. PhD thesis, Universität Hannover, 2001.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
Nicholas Jillings, David Moffat, Brecht De Man, and Joshua D. Reiss. Web Audio Evaluation Tool: A browser-based listening test environment. In 12th Sound and Music Computing Conference, July 2015.
Andrej Karpathy. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy blog, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork rnn. arXiv preprint arXiv:1402.3511, 2014.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.
Honglak Lee, Peter Pham, Yan Largman, and Andrew Y Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in neural information processing systems, pp. 1096-1104, 2009.
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
Kishore Prahallad, Anandaswarup Vadapalli, Naresh Elluru, G Mantena, B Pulugundla, P Bhaskararao, HA Murthy, S King, V Karaiskos, and AW Black. The blizzard challenge 2013 - Indian language task. In Blizzard Challenge Workshop 2013, 2013.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992.

Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.

Hava T Siegelmann. Computation beyond the turing limit. In Neural Networks and Analog Computation, pp. 153-164. Springer, 1999.

Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 553-562. ACM, 2015.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Keiichi Tokuda, Yoshihiko Nankaku, Tomoki Toda, Heiga Zen, Junichi Yamagishi, and Keiichiro Oura. Speech synthesis based on hidden markov models. Proceedings of the IEEE, 101(5):1234-1252, 2013.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.
# APPENDIX A
A MODEL VARIANT: SAMPLERNN-WAVENET HYBRID
The SampleRNN-WaveNet model has two modules operating at two different clock-rates. The slower clock-rate module (frame-level module) sees one frame (each of size FS) at a time, while the faster clock-rate component (sample-level component) sees one acoustic sample at a time, i.e. the ratio of clock-rates for these two modules is the size of a single frame. The number of sequential steps for the frame-level component is FS times lower. We repeat the output of each step of the frame-level component FS times so that the numbers of time-steps for the outputs of both components match. The outputs of these two modules are concatenated at every time-step, then processed by per-time-step non-linearities before generating the final output.
In our experiments, we kept the size of a single frame (FS) at 128. We tried two variants of this model: 1. a fully convolutional WaveNet and 2. an RNN-WaveNet. In the fully convolutional WaveNet, both modules described above are implemented using dilated convolutions as described in the original WaveNet model. In the RNN-WaveNet, we use a high-capacity RNN in the frame-level module to model the dependency between frames. The sample-level WaveNet in the RNN-WaveNet has a receptive field of 509 samples from the past.
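The following shape-level PyTorch sketch (our reading of the description above; module sizes and the stand-in sample-level network are assumptions, and causal alignment is omitted) shows the repeat-and-concatenate scheme that aligns the two clock-rates:

```python
# Shape-level sketch of the hybrid: the frame-level output is repeated FS
# times so it lines up with the sample-level features before per-timestep
# concatenation and a final per-timestep MLP.
import torch
import torch.nn as nn

FS, T, B = 128, 1024, 4                       # frame size, samples, batch
frame_rnn = nn.GRU(FS, 256, batch_first=True)
sample_net = nn.Conv1d(1, 64, kernel_size=2, padding=1)   # stand-in for WaveNet
mlp = nn.Sequential(nn.Linear(256 + 64, 128), nn.ReLU(), nn.Linear(128, 256))

x = torch.rand(B, T)                          # normalized acoustic samples
frames = x.view(B, T // FS, FS)               # slower module sees whole frames
frame_out, _ = frame_rnn(frames)              # (B, T/FS, 256)
frame_out = frame_out.repeat_interleave(FS, dim=1)                 # (B, T, 256)
sample_out = sample_net(x.unsqueeze(1))[:, :, :T].transpose(1, 2)  # (B, T, 64)
logits = mlp(torch.cat([frame_out, sample_out], dim=-1))  # (B, T, 256) softmax
```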
Although these models were designed with the intention of combining the two models to harness their best features, preliminary experiments show that this variant does not currently meet our expectations, which points us to possible future work.
Published as a conference paper at ICLR 2017
# LEARNING THROUGH DIALOGUE INTERACTIONS BY ASKING QUESTIONS
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
Facebook AI Research, New York, USA
{jiwel,ahm,spchopra,ranzato,jase}@fb.com
# ABSTRACT
A good dialogue agent should have the ability to interact with users by both responding to questions and by asking questions, and importantly to learn from both types of interaction. In this work, we explore this direction by designing a simulator and a set of synthetic tasks in the movie domain that allow such interactions between a learner and a teacher. We investigate how a learner can benefit from asking questions in both offline and online reinforcement learning settings, and demonstrate that the learner improves when asking questions. Finally, real experiments with Mechanical Turk validate the approach. Our work represents a first step in developing such end-to-end learned interactive dialogue agents.
# 1 INTRODUCTION
When a student is asked a question by a teacher, but is not confident about the answer, they may ask for clarification or hints. A good conversational agent (a learner/bot/student) should have this ability to interact with a dialogue partner (the teacher/user). However, recent efforts have mostly focused on learning through fixed answers provided in the training set, rather than through interactions. In that case, when a learner encounters a confusing situation such as an unknown surface form (phrase or structure), a semantically complicated sentence or an unknown word, the agent will either make a (usually poor) guess or will redirect the user to other resources (e.g., a search engine, as in Siri). Humans, in contrast, can adapt to many situations by asking questions.
We identify three categories of mistakes a learner can make during dialogue1: (1) the learner has problems understanding the surface form of the text of the dialogue partner, e.g., the phrasing of a question; (2) the learner has a problem with reasoning, e.g. they fail to retrieve and connect the relevant knowledge to the question at hand; (3) the learner lacks the knowledge necessary to answer the question in the first place; that is, the knowledge sources the student has access to do not contain the needed information.
All the situations above can be potentially addressed through interaction with the dialogue partner. Such interactions can be used to learn to perform better in future dialogues. If a human student has problems understanding a teacher's question, they might ask the teacher to clarify the question. If the student doesn't know where to start, they might ask the teacher to point out which known facts are most relevant. If the student doesn't know the information needed at all, they might ask the teacher to tell them the knowledge they're missing, writing it down for future use.
In this work, we try to bridge the gap between how a human and an end-to-end machine learning dialogue agent deal with these situations: our student has to learn how to learn. We hence design a simulator and a set of synthetic tasks in the movie question answering domain that allow a bot to interact with a teacher to address the issues described above. Using this framework, we explore how a bot can benefit from interaction by asking questions in both offline supervised settings and online reinforcement learning settings, as well as how to choose when to ask questions in the latter setting. In both cases, we find that the learning system improves through interacting with users.
1This list is not exhaustive; for example, we do not address a failure in the dialogue generation stage.
Finally, we validate our approach on real data where the teachers are humans using Amazon Mechanical Turk, and observe similar results.
# 2 RELATED WORK
Learning language through interaction and feedback can be traced back to the 1950s, when Wittgenstein argued that the meaning of words is best understood from their use within given language games (Wittgenstein, 2010). The direction of interactive language learning through language games has been explored in the early seminal work of Winograd (Winograd, 1972), and in the recent SHRDLURN system (Wang et al., 2016). In a broader context, the usefulness of feedback and interactions has been validated in the setting of multiple language learning, such as second language learning (Bassiri, 2011) and learning by students (Higgins et al., 2002; Latham, 1997; Werts et al., 1995).
In the context of dialogue, with the recent popularity of deep learning models, many neural dialogue systems have been proposed. These include chit-chat-style end-to-end dialogue systems (Vinyals & Le, 2015; Li et al., 2015; Sordoni et al., 2015), which directly generate a response given the previous history of user utterances. They also include a collection of goal-oriented dialogue systems (Wen et al., 2016; Su et al., 2016; Bordes & Weston, 2016), which complete a certain task such as booking a ticket or making a reservation at a restaurant. Another line of research focuses on supervised learning for question answering from dialogues (Dodge et al., 2015; Weston, 2016), using either a given database of knowledge (Bordes et al., 2015; Miller et al., 2016) or short stories (Weston et al., 2015). As far as we know, current dialogue systems mostly focus on learning through fixed supervised signals rather than interacting with users.
Our work is closely related to the recent work of Weston (2016), which explores the problem of learning through conducting conversations, where supervision is given naturally in the response during the conversation. That work introduced multiple learning schemes from dialogue utterances. In particular the authors discussed Imitation Learning, where the agent tries to learn by imitating the dialogue interactions between a teacher and an expert student; Reward-Based Imitation Learning, which only learns by imitating the dialogue interactions that have correct answers; and Forward Prediction, which learns by predicting the teacher's feedback to the student's response. Despite the fact that Forward Prediction does not use human-labeled rewards, the authors show that it yields promising results. However, their work did not fully explore the ability of an agent to learn via questioning and interaction. Our work can be viewed as a natural extension of theirs.
# 3 THE TASKS
In this section we describe the dialogue tasks we designed2. They are tailored for the three different situations described in Section 1 that motivate the bot to ask questions: (1) Question Clarification, in which the bot has problems understanding its dialogue partner's text; (2) Knowledge Operation, in which the bot needs to ask for help to perform reasoning steps over an existing knowledge base; and (3) Knowledge Acquisition, in which the bot's knowledge is incomplete and needs to be filled.
For our experiments we adapt the WikiMovies dataset (Weston et al., 2015), which consists of roughly 100k questions over 75k entities based on questions with answers in the open movie dataset (OMDb). The training/dev/test sets respectively contain 181638 / 9702 / 9698 examples. The accuracy metric corresponds to the percentage of times the student gives correct answers to the teacher's questions.
Each dialogue takes place between a teacher and a bot. In this section we describe how we generate tasks using a simulator. Section 4.2 discusses how we test similar setups with real data using Mechanical Turk.
The bot is first presented with facts from the OMDb KB. This allows us to control the exact knowledge the bot has access to. Then, we include several teacher-bot question-answer pairs unrelated to the question the bot needs to answer, which we call conversation histories3. In order to explore the
2 Code and data are available at https://github.com/facebook/MemNN/tree/master/AskingQuestions.
3 These history QA pairs can be viewed as distractions and are used to test the bot's ability to separate the wheat from the chaff. For each dialogue, we incorporate 5 extra QA pairs (10 sentences).
benefits of asking clarification questions during a conversation, for each of the three scenarios, our simulator generated data for two different settings, namely, Question-Answering (denoted by QA) and Asking-Question (denoted by AQ). For both QA and AQ, the bot needs to give an answer to the teacher's original question at the end. The details of the simulator can be found in the appendix.
# 3.1 QUESTION CLARIFICATION.
In this setting, the bot does not understand the teacher's question. We focus on a special situation where the bot does not understand the teacher because of typo/spelling mistakes, as shown in Figure 1. We intentionally misspell some words in the questions, such as replacing the word "movie" with "movvie" or "star" with "sttar".4 To make sure that the bot will have problems understanding the question, we guarantee that the bot has never encountered the misspellings before: the misspelling-introducing mechanisms in the training, dev and test sets are different, so the same word will be misspelled in different ways in different sets. We present two AQ tasks: (i) Question Paraphrase, where the student asks the teacher to use a paraphrase that does not contain spelling mistakes to clarify the question by asking "what do you mean?"; and (ii) Question Verification, where the student asks the teacher whether the original typo-bearing question corresponds to another question without the spelling mistakes (e.g., "Do you mean which film did Tom Hanks appear in?"). The teacher will give feedback by giving a paraphrase of the original question without spelling mistakes (e.g., "I mean which film did Tom Hanks appear in") in Question Paraphrase, or positive/negative feedback in Question Verification. Next the student will give an answer and the teacher will give positive/negative feedback depending on whether the student's answer is correct. Positive and negative feedback are variants of "No, that's incorrect" or "Yes, that's right"5. In these tasks, the bot has access to all relevant entries in the KB.
3.2 KNOWLEDGE OPERATION
The bot has access to all the relevant knowledge (facts) but lacks the ability to perform the necessary reasoning operations over them; see Figure 2. We focus on a special case where the bot tries to work out which facts are relevant. We explore two settings: Ask For Relevant Knowledge (Task 3), where the bot directly asks the teacher to point out the relevant KB fact, and Knowledge Verification (Task 4), where the bot asks whether the teacher's question is relevant to one particular KB fact. The teacher will point out the relevant KB fact in the Ask For Relevant Knowledge setting or give a positive or negative response in the Knowledge Verification setting. Then the bot will give an answer to the teacher's original question and the teacher will give feedback on the answer.
3.3 KNOWLEDGE ACQUISITION
For the tasks in this subsection, the bot has an incomplete KB, and entities important to the dialogue are missing from it; see Figure 3. For example, given the question "Which movie did Tom Hanks star in?", the missing part could be the entity that the teacher is asking about (the question entity for short, here Tom Hanks), the relation entity (starred actors), the answer to the question (Forrest Gump), or a combination of the three. In all cases, the bot has little chance of giving the correct answer due to the missing knowledge. It needs to ask the teacher for the answer to acquire the missing knowledge. The teacher will give the answer and then move on to other questions (captured in the conversational history). They will later come back to re-ask the question. At this point, the bot needs to give an answer since the entity is no longer new.
Though the correct answer has effectively been included in the earlier part of the dialogue as the answer to the bot's question, as we will show later, many of the tasks are not as trivial as they look when the teacher re-asks the question. This is because the bot's model needs to memorize the missing entity and then construct the links between the missing entity and known ones. This is akin to the real-world case where a student might make the same mistake again and again, even though the teacher corrects them each time their answer is wrong. We now detail each task in turn.
4 Many reasons could lead to the bot not understanding the teacher's question, e.g., the teacher's question has an unknown phrase structure rather than unknown words. We choose to use spelling mistakes because of the ease of dataset construction.
5 In the datasets we build, there are 6 templates for positive feedback and 6 templates for negative feedback.
Task 1: Question Paraphrase (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S: Conversation History.
T: Which movvie did Tom Hanks sttar in?
S: Larry Crowne
T: Wrong. (-)

Task 1: Question Paraphrase (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S: Conversation History.
T: Which movvie did Tom Hanks sttar in?
S: What do you mean?
T: I mean which film did Tom Hanks appear in.
T: Which movvie did Tom Hanks sttar in?
S: Forrest Gump
T: That's correct. (+)

Task 2: Question Verification (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S: Conversation History.
T: Which movvie did Tom Hanks sttar in?
S: Forrest Gump
T: That's correct. (+)

Task 2: Question Verification (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S: Conversation History.
T: Which movvie did Tom Hanks sttar in?
S: Do you mean which film did Tom Hanks appear in?
T: That's correct. (+)
T: Which movvie did Tom Hanks sttar in?
S: Larry Crowne
T: Sorry, that's not right. (-)
Figure 1: Sample dialogues for the two Question Clarification tasks (rows) using both the traditional QA setting (left column) and the AQ setting (right column). In each case the same example is given for simplicity. Black text prefixed by "kb:" denotes KB knowledge that the student has access to. Blue text is spoken by the teacher, denoted by T. (+) denotes a reward of 1 (and 0 otherwise) that the teacher assigns to the bot. Red text denotes responses or questions posed by the bot, denoted by S. Brown denotes typos deliberately introduced by the authors. For the Question Verification setting, the student can either ask a correct (pertinent) question (as in this example) or an incorrect (irrelevant) one. The teacher will give positive or negative feedback based on the correctness of the student's question. In our offline supervised learning experiments, the probability of asking pertinent questions and correctly answering the original question from the teacher is set to 0.5. Finally, T/S denotes 5 pairs of questions and answers that are irrelevant to the rest of the conversation.
Task 3: Ask For Relevant Knowledge (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S: Conversation History.
T: Which movie did Tom Hanks star in?
S: Can you give me a hint?
T: It is related to the kb fact that Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise.
T: Which movie did Tom Hanks star in?
S: Forrest Gump
T: That's correct. (+)

Task 4: Knowledge Verification (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S: Conversation History.
T: Which movie did Tom Hanks star in?
S: Does it have something to do with Larry Crowne directed by Tom Hanks?
T: No. That's wrong. (-)
T: Which movie did Tom Hanks star in?
S: Forrest Gump
T: That's correct. (+)
Figure 2: Sample dialogues for Knowledge Operation tasks.
Missing Question Entity: The entity that the teacher is asking about is missing from the knowledge base. All KB facts containing the question entity are hidden from the bot. In the example for Task 5 in Figure 3, since the teacher's question contains the entity Tom Hanks, the KB facts that contain Tom Hanks are hidden from the bot.
Figure 3: Different tasks for Knowledge Acquisition. Crossed-out lines correspond to KB entries that are hidden from the bot.
Task 5: Missing Question Entity (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T/S: Conversation History.
T: Which movie did Tom Hanks star in?
S: Forrest Gump
T: That's correct. (+)

Task 5: Missing Question Entity (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T: Which movie did Tom Hanks star in?
S: I don't know. What's the answer?
T: The answer is Forrest Gump.
T/S: Conversation History.
T: Which movie did Tom Hanks star in?
S: Forrest Gump
T: That's correct. (+)

Task 6: Missing Answer Entity (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T: Which movie did Tom Hanks star in?
S: I don't know. What's the answer?
T: The answer is Forrest Gump.
T/S: Conversation History.
T: Which movie did Tom Hanks star in?
S: Forrest Gump
T: That's correct. (+)

Task 7: Missing Relation Entity (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T: Which movie did Tom Hanks star in?
S: I don't know. What's the answer?
T: The answer is Forrest Gump.
T/S: Conversation History.
T: Which movie did Tom Hanks star in?
S: Forrest Gump
T: That's correct. (+)

Task 8: Missing Triple (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T: Which movie did Tom Hanks star in?
S: I don't know. What's the answer?
T: The answer is Forrest Gump.
T/S: Conversation History.
T: Which movie did Tom Hanks star in?
S: Forrest Gump
T: That's correct. (+)

Task 9: Missing Everything (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T: Which movie did Tom Hanks star in?
S: I don't know. What's the answer?
T: The answer is Forrest Gump.
T/S: Conversation History.
T: Which movie did Tom Hanks star in?
S: Forrest Gump
T: That's correct. (+)
Missing Answer Entity: The answer entity to the question is unknown to the bot. All KB facts that contain the answer entity are hidden. Hence, in Task 6 of Figure 3, all KB facts containing the answer entity Forrest Gump are hidden from the bot.

Missing Relation Entity: The relation type is unknown to the bot. In Task 7 of Figure 3, all KB facts that express the relation starred actors are hidden from the bot.

Missing Triples: The triple that expresses the relation between the question entity and the answer entity is hidden from the bot. In Task 8 of Figure 3, the triple "Forrest Gump (answer entity) starred actors Tom Hanks (question entity)" is hidden.

Missing Everything: The question entity, the relation entity, and the answer entity are all missing from the KB. All KB facts in Task 9 of Figure 3 are removed, since they contain either the relation entity (i.e., starred actors), the question entity (i.e., Tom Hanks), or the answer entity (i.e., Forrest Gump).
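A toy Python sketch of this KB-masking mechanic (illustrative only, not the released task-generation code): every fact mentioning a designated missing entity is hidden from the student before the dialogue is generated.

```python
# Toy KB-masking for Tasks 5-9: drop any triple whose subject, relation,
# or object is one of the "missing" entities.
kb = [
    ("Larry Crowne", "directed_by", "Tom Hanks"),
    ("Forrest Gump", "starred_actors", "Tom Hanks"),
    ("Forrest Gump", "starred_actors", "Sally Field"),
    ("Forrest Gump", "directed_by", "Robert Zemeckis"),
]

def visible_kb(kb, missing_entities):
    """Return only the triples that mention none of the missing entities."""
    return [t for t in kb if not any(e in t for e in missing_entities)]

print(visible_kb(kb, {"Tom Hanks"}))        # Task 5-style: question entity hidden
print(visible_kb(kb, {"starred_actors"}))   # Task 7-style: relation hidden
```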
# 4 TRAIN/TEST REGIME
We now discuss in detail the regimes we used to train and test our models, which are divided between evaluation within our simulator and using real data collected via Mechanical Turk.
4.1 SIMULATOR
Using our simulator, our objective was twofold. We first wanted to validate the usefulness of asking questions in all the settings described in Section 3. Second, we wanted to assess the ability of our student bot to learn when to ask questions. In order to accomplish these two objectives we explored training our models with our simulator using two methodologies, namely, Offline Supervised Learning and Online Reinforcement Learning.
4.1.1 OFFLINE SUPERVISED LEARNING
The motivation behind training our student models in an offline supervised setting was primarily to test the usefulness of the ability to ask questions. The dialogues are generated as described in the previous section, and the bot's role is generated with a fixed policy. We chose a policy where answers to the teacher's questions are correct 50% of the time and incorrect otherwise, to add a degree of realism. Similarly, in tasks where questions can be irrelevant, they are only asked correctly 50% of the time.6
The offline setting explores different combinations of training and testing scenarios, which mimic different situations in the real world. The aim is to understand when and how observing interactions between two agents can help the bot improve its performance on different tasks. As a result, we construct training and test sets in three ways across all tasks, resulting in 9 different scenarios per task, each of which corresponds to a real-world scenario.
The three training sets we generated are referred to as TrainQA, TrainAQ, and TrainMix. TrainQA follows the QA setting discussed in the previous section: the bot never asks questions and only tries to immediately answer. TrainAQ follows the AQ setting: the student, before answering, first always asks a question in response to the teacher's original question. TrainMix is a combination of the two, where 50% of the time the student asks a question and 50% of the time it does not.
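To make the generation process concrete, here is a minimal Python sketch of the fixed student policy behind TrainQA, TrainAQ, and TrainMix (function names, feedback templates, and the use of the static "What do you mean?" question are our assumptions, not the released simulator):

```python
# Fixed student policy for offline data generation: answers are correct half
# the time, and in the "mix" setting the student asks a question half the time.
import random

def simulate_episode(question, answer, distractors, setting="mix", p_correct=0.5):
    episode = []
    asks = {"qa": False, "aq": True, "mix": random.random() < 0.5}[setting]
    if asks:
        episode.append(("student", "What do you mean?"))
        episode.append(("teacher", f"I mean {question}"))
    guess = answer if random.random() < p_correct else random.choice(distractors)
    episode.append(("student", guess))
    episode.append(("teacher", "That's correct. (+)" if guess == answer
                    else "No, that's incorrect. (-)"))
    return episode

print(simulate_episode("which film did Tom Hanks appear in",
                       "Forrest Gump", ["Larry Crowne"], setting="aq"))
```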
The three test sets we generated are referred to as TestQA, TestAQ, and TestModelAQ. TestQA and TestAQ are generated similarly to TrainQA and TrainAQ, but using a perfect fixed policy (rather than 50% correct) for evaluation purposes. In the TestModelAQ setting, the model has to get the form of the question correct as well. In the Question Verification and Knowledge Verification tasks there are many possible ways of forming the question, and only some of them are correct: the model has to choose the right question to ask. E.g., it should ask "Does it have something to do with the fact that Larry Crowne directed by Tom Hanks?" rather than "Does it have something to do with the fact that Forrest Gump directed by Robert Zemeckis?" when the latter is irrelevant (the candidate list of questions is generated from the known knowledge base entries with respect to that question). The policy is trained using either the TrainAQ or TrainMix set, depending on the training scenario. The teacher will reply to the question, giving positive feedback if the student's question is correct, and no response and negative feedback otherwise. The student will then give the final answer. The difference between TestModelAQ and TestAQ only exists in the Question Verification and Knowledge Verification tasks; in other tasks there is only one way to ask the question and TestModelAQ and TestAQ are identical.
To summarize, for every task listed in Section 3 we train one model for each of the three training sets (TrainQA, TrainAQ, TrainMix) and test each of these models on the three test sets (TestQA, TestAQ, and TestModelAQ), resulting in 9 combinations. For the purpose of notation the train/test combination is denoted by "TrainSetting+TestSetting". For example, TrainAQ+TestQA denotes a model which is trained on the TrainAQ dataset and tested on the TestQA dataset. Each combination has a real-world interpretation. For instance, TrainAQ+TestQA refers to a scenario where a student can ask the teacher questions during learning but cannot do so while taking an exam. Similarly, TrainQA+TestQA describes a stoic teacher that never answers a student's question at either learning or examination time. The setting TrainQA+TestAQ corresponds to the case where a lazy
6 This only makes sense in tasks like Question or Knowledge Verification. In tasks where the question is static, such as "What do you mean?", there is no way to ask an irrelevant question, and we do not use this policy.
student never asks questions at learning time but gets anxious during the examination and always asks questions.
4.1.2 ONLINE REINFORCEMENT LEARNING (RL)
We also explored scenarios where the student learns the ability to decide when to ask a question. In other words, the student learns how to learn.
Although it is in the interest of the student to ask questions at every step of the conversation, since the response to its question will contain extra information, we don't want our model to learn this behavior. Each time a human student asks a question, there's a cost associated with that action. This cost is a reflection of the patience of the teacher, or more generally of the users interacting with the bot in the wild: users won't find the bot engaging if it always asks clarification questions. The student should thus be judicious about asking questions and learn when and what to ask. For instance, if the student is confident about the answer, there is no need for it to ask. Or, if the teacher's question is so hard that clarification is unlikely to help enough to get the answer right, then it should also refrain from asking.
We now discuss how we model this problem under the Reinforcement Learning framework. The bot is presented with KB facts (some facts might be missing depending on the task) and a question. It needs to decide whether to ask a question or not at this point. The decision whether to ask is made by a binary policy PRL(Question). If the student chooses to ask a question, it is penalized by costAQ. We explored different values of costAQ ranging over [0, 2], which we consider as modeling the patience of the teacher. The goal of this setting is to find the best policy for asking/not-asking questions that leads to the highest cumulative reward. The teacher will appropriately reply if the student asks a question. The student will eventually give an answer to the teacher's initial question at the end using the policy PRL(Answer), regardless of whether it had asked a question. The student will get a reward of +1 if its final answer is correct and -1 otherwise. Note that the student can ask at most one question and that the type of question is always specified by the task under consideration. The final reward the student gets is the cumulative reward over the current dialogue episode. In particular, the reward structure we propose is the following:
                      Final Answer Correct   Final Answer Incorrect
Asking Question       1 - costAQ             -1 - costAQ
Not Asking Question   1                      -1
Table 1: Reward structure for the Reinforcement Learning setting.
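This reward structure reduces to a one-line helper; the following Python snippet (the helper name is ours) encodes Table 1 directly:

```python
# Episode reward from Table 1: asking costs cost_aq, and the final answer
# contributes +1 or -1.
def episode_reward(asked: bool, answer_correct: bool, cost_aq: float) -> float:
    return (1.0 if answer_correct else -1.0) - (cost_aq if asked else 0.0)

assert episode_reward(True, True, 0.2) == 0.8     # 1 - costAQ
assert episode_reward(False, False, 0.2) == -1.0
```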
For each of the tasks described in Section 3, we consider three different RL scenarios.

Good-Student: The student is presented with all relevant KB facts. There are no misspellings or unknown words in the teacher's question. This represents a knowledgeable student in the real world that knows as much as it needs to know (e.g., a large knowledge base, large vocabulary). This setting is identical across all missing entity tasks (5-9).

Poor-Student: The KB facts or the questions presented to the student are flawed, depending on the task. For example, for the Question Clarification tasks, the student does not understand the question due to spelling mistakes. For the Missing Question Entity task, the entity that the teacher asks about is unknown to the student and all facts containing the entity are hidden from the student. This setting is similar to a student that is underprepared for the tasks.

Medium-Student: The combination of the previous two settings, where for 50% of the questions the student has access to the full KB and there are no new words, phrases, or entities in the question, and 50% of the time the question and KB are taken from the Poor-Student setting.
4.2 MECHANICAL TURK DATA
Finally, to validate our approach beyond our simulator by using real language, we collected data via Amazon Mechanical Turk. Due to the cost of data collection, we focused on real-language versions of Task 4 (Knowledge Verification) and Task 8 (Missing Triple); see Secs. 3.2 and 3.3 for the simulator versions. That is, we collect dialogues and use them in an offline supervised learning setup similar to Section 4.1.1. This setup allows easily reproducible experiments.
For Mechanical Turk Task 4, the bot is asked a question by a human teacher, but before answering can ask the human if the question is related to one of the facts it knows about from its memory.
T: Which mowvie did Tom Hanks sttar in?
(QA) S: Larry Crowne
     T: That's incorrect. (-)    Reward: -1
(AQ) S: What do you mean?
     T: I mean which film did Tom Hanks appear in.
     S: Forrest Gump
     T: That's correct. (+)      Reward: 1 - costAQ
Figure 4: An illustration of the poor-student setting for RL Task 1 (Question Paraphrase).
It is then required to answer the original question, after some additional dialogue turns relating to other question/answer pairs (called "conversational history", as before). For Task 8, the bot is asked a question by a human but lacks the triple in its memory that would be needed to answer it. It is allowed to ask for the missing information, and the human responds to the question in free-form language. The bot is then required to answer the original question, again after some "conversational history" has transpired.
We collect around 10,000 episodes (dialogues) for training, 1000 for validation, and 2500 for testing for each of the two tasks. In each case, we give instructions to the Turkers that still follow the original form of the task, but make the tasks contain realistic language written by humans. The instructions given to the Turkers are given in the appendix.
For both tasks, while the human Turkers replace the simulator that the bot was previously conversing with, the bot's dialogue actions (capabilities) are essentially unchanged from before. That is, when answering questions, the bot is now required to answer a human's questions rather than templated questions from the simulator. When the bot is asking questions, it still asks in the same form as before, e.g. questions like "Does it have something to do with X?" for Task 4 or "I don't know. What's the answer?" for Task 8. However, its questions are now answered by a human. In both cases (asking and answering) the human data is richer, with potentially more complex language and lexical variability. Examples of the collected dialogues are given in Figure 5.
MTurk Task 4: Knowledge Verification (AQ)
kb: In Their Skin written by Joshua Close, Jeremy Power Regimbal
kb: In Their Skin starred actors Selma Blair, James D'arcy, Rachel Miner, Joshua Close
T: What film was written by Joshua Close?
S: Does it have something to do with "In Their Skin starred actors Joshua Close"?
T: No we were looking for who wrote the film not who starred in it.
. . .

MTurk Task 8: Missing Triple (AQ)
T: What movies does Timothy Scott Bogart have writing credit for?
S: I don't know. What's the answer?
T: Extreme Ops or The I Inside is the right answer.
. . .
Figure 5: Sample dialogues for the Mechanical Turk versions of Tasks 4 and 8. Compared to the original tasks (see Figs. 2 and 3), the teacher's questions, and the teacher's responses to the student's questions, are written by humans and are more complex and contain more variety.
# 5 MODELS
For both the offline supervised and online RL settings, we use the End-to-End Memory Network model (MemN2N) (Sukhbaatar et al., 2015) as a backbone. The model takes as input the last utterance of the dialogue history (the question from the teacher) as well as a set of memory contexts, including short-term memories (the dialogue history between the bot and the teacher) and long-term memories
(the knowledge base facts that the bot has access to), and outputs a label. We refer readers to the Appendix for more details about MemN2N.
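For readers unfamiliar with the backbone, here is a single-hop PyTorch sketch of a MemN2N-style model (dimensions, the bag-of-words encoders, and the single hop are our simplifications; see the paper's Appendix for the full model):

```python
# Single-hop MemN2N sketch: the query attends over memory slots, and the
# attended reading updates the query representation before a final softmax
# over candidate answers.
import torch
import torch.nn as nn

V, D, M, A = 1000, 64, 50, 200          # vocab, embed dim, memory slots, answers

class OneHopMemN2N(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb_q = nn.EmbeddingBag(V, D)    # bag-of-words query encoder
        self.emb_in = nn.EmbeddingBag(V, D)   # memory "input" embedding
        self.emb_out = nn.EmbeddingBag(V, D)  # memory "output" embedding
        self.answer = nn.Linear(D, A)

    def forward(self, query, memories):       # query: (B, Lq); memories: (B, M, Lm)
        B = query.size(0)
        u = self.emb_q(query)                                       # (B, D)
        m = self.emb_in(memories.view(B * M, -1)).view(B, M, D)
        c = self.emb_out(memories.view(B * M, -1)).view(B, M, D)
        p = torch.softmax(torch.bmm(m, u.unsqueeze(2)).squeeze(2), dim=1)  # (B, M)
        o = torch.bmm(p.unsqueeze(1), c).squeeze(1)                 # attended read
        return self.answer(u + o)                                   # answer logits

model = OneHopMemN2N()
logits = model(torch.randint(0, V, (2, 8)), torch.randint(0, V, (2, M, 12)))
```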
Offline Supervised Settings: The first learning strategy we adopt is the reward-based imitation strategy (denoted vanilla-MemN2N) described in (Weston, 2016), where at training time the model maximizes the log likelihood of the correct answers the student gave (examples with incorrect final answers are discarded). Candidate answers are words that appear in the memories, which means the bot can only predict entities that it has seen or known before.
We also use a variation of MemN2N called "context MemN2N" (Cont-MemN2N for short), where we replace each word's embedding with the average of its embedding (random for unseen words) and the embeddings of the other words that appear around it. We use both the preceding and following words as context, and the number of context words is a hyperparameter selected on the dev set.
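A short PyTorch sketch of the Cont-MemN2N idea (window handling and names are our assumptions): each word vector is replaced by the average over itself and the words in its context window.

```python
# Cont-MemN2N-style contextualization: average a word's embedding with the
# embeddings of its neighbors within a window on each side.
import torch

def contextualize(word_vecs: torch.Tensor, window: int = 2) -> torch.Tensor:
    """word_vecs: (L, D) embeddings of one utterance; returns (L, D)."""
    L = word_vecs.size(0)
    out = torch.zeros_like(word_vecs)
    for i in range(L):
        lo, hi = max(0, i - window), min(L, i + window + 1)
        out[i] = word_vecs[lo:hi].mean(dim=0)  # self + preceding/following words
    return out

vecs = torch.randn(6, 64)
print(contextualize(vecs, window=2).shape)     # torch.Size([6, 64])
```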
An issue with both vanilla-MemN2N and Cont-MemN2N is that the model only makes use of the bot's answers as signals and ignores the teacher's feedback. We thus propose to use a model that jointly predicts the bot's answers and the teacher's feedback (denoted as TrainQA (+FP)). The bot's answers are predicted using a vanilla-MemN2N and the teacher's feedback is predicted using the Forward Prediction (FP) model described in (Weston, 2016). We refer the readers to the Appendix for the FP model details. At training time, the model learns to jointly predict the teacher's feedback and the answers with positive reward. At test time, the model only predicts the bot's answer.
For the TestModelAQ setting described in Section 4, the model needs to decide the question to ask. Again, we use vanilla-MemN2N that takes as input the question and contexts, and outputs the question the bot will ask.
Online RL Settings: A binary vanilla-MemN2N (denoted as PRL(Question)) is used to decide whether the bot should or should not ask a question, with the teacher replying if the bot does ask something. A second MemN2N is then used to decide the bot's answer, denoted as PRL(Answer). PRL(Answer) for QA and AQ are two separate models, which means the bot will use different models for final-answer prediction depending on whether it chooses to ask a question or not.7
We use the REINFORCE algorithm (Williams, 1992) to update PRL(Question) and PRL(Answer). For each dialogue, the bot takes two sequential actions (a1, a2): whether or not to ask a question (denoted as a1); and guessing the final answer (denoted as a2). Let r(a1, a2) denote the cumulative reward for the dialogue episode, computed using Table 1. The gradient to update the policy is given by:
p(a1, a2) = PRL(Question)(a1) · PRL(Answer)(a2)
∇J(θ) ≈ ∇ log p(a1, a2) [r(a1, a2) - b]    (1)
where b is the baseline value, which is estimated using another MemN2N model that takes as input the query x and memory C, and outputs a scalar b denoting the estimate of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward b and the actual cumulative reward r, ||r - b||^2. We refer the readers to (Ranzato et al., 2015; Zaremba & Sutskever, 2015) for more details. The baseline estimator model is independent of the policy models, and its error is not backpropagated to them.
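Concretely, the update in Eq. 1 can be sketched in PyTorch as follows (our illustration; variable names are assumptions). Note how the advantage is detached so the baseline's error is not backpropagated into the policies, while the squared error trains the baseline estimator:

```python
# REINFORCE with a baseline, as in Eq. 1: the log probability of the
# (ask, answer) action pair is scaled by (r - b); the baseline is regressed
# on the observed reward without sending gradients into the policies.
import torch

def reinforce_loss(logp_ask, logp_answer, reward, baseline):
    advantage = (reward - baseline).detach()     # baseline error not backpropped
    policy_loss = -(logp_ask + logp_answer) * advantage
    baseline_loss = (baseline - reward).pow(2)   # ||r - b||^2 for the estimator
    return policy_loss + baseline_loss

loss = reinforce_loss(torch.log(torch.tensor(0.7, requires_grad=True)),
                      torch.log(torch.tensor(0.4, requires_grad=True)),
                      torch.tensor(0.8), torch.tensor(0.5, requires_grad=True))
loss.backward()
```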
In practice, we find the following training strategy yields better results: first train only PRL(Answer), updating gradients only for the policy that predicts the final answer. After the bot's final-answer policy is sufficiently learned, train both policies in parallel8. This has a real-world analogy where the bot first learns the basics of the task, and then learns to improve its performance via a question-asking policy tailored to the user's patience (represented by costAQ) and its own ability to answer questions.
7 An alternative is to train one single model for final-answer prediction in both AQ and QA cases, similar to the TrainMix setting in supervised learning. But we find that training AQ and QA separately for final-answer prediction yields slightly better results than the single-model setting.
8 We implement this by running 16 epochs in total, updating only the model's policy for final answers during the first 8 epochs while updating both policies during the second 8 epochs. We pick the model that achieves the best reward on the dev set during the final 8 epochs. Due to the relatively large variance of RL models, we repeat each task 5 times and keep the best model for each task.
Question Clarification and Knowledge Operation tasks:

                     Task 1: Q. Paraphrase   Task 2: Q. Verification   Task 3: Ask For Relevant K.   Task 4: K. Verification
Train \ Test         TestQA   TestAQ         TestQA   TestAQ           TestQA   TestAQ                TestQA   TestAQ
TrainQA (Context)    0.754    0.726          0.742    0.684            0.883    0.947                 0.888    0.959
TrainAQ (Context)    0.640    0.889          0.643    0.807            0.716    0.985                 0.852    0.987
TrainMix (Context)   0.751    0.846          0.740    0.789            0.870    0.985                 0.875    0.985

Knowledge Acquisition tasks:

                     Task 5: Q. Entity   Task 6: Answer Entity   Task 7: Relation Entity   Task 8: Triple   Task 9: Everything
Train \ Test         TestQA   TestAQ     TestQA   TestAQ         TestQA   TestAQ           TestQA   TestAQ  TestQA   TestAQ
TrainQA (Context)    <0.01    0.224      <0.01    0.120          0.241    0.301            0.339    0.251   <0.01    0.058
TrainAQ (Context)    <0.01    0.639      <0.01    0.885          0.143    0.893            0.154    0.884   <0.01    0.908
TrainMix (Context)   <0.01    0.632      <0.01    0.852          0.216    0.898            0.298    0.886   <0.01    0.903

Table 2: Results for Cont-MemN2N on different tasks.
6 EXPERIMENTS
6.1 SIMULATOR
Offline Results: Offline results are presented in Tables 2, 7, and 8 (the latter two are in the appendix). Table 7 presents results for the vanilla-MemN2N and Forward Prediction models. Table 2 presents results for Cont-MemN2N, which is better at handling unknown words. We repeat each experiment 10 times and report the best result. Finally, Table 8 presents results for the test scenario where the bot itself chooses when to ask questions. Our observations can be summarized as follows:
Asking questions helps at test time, which is intuitive since it provides additional evidence:
⢠TrainAQ+TestAQ (questions can be asked at both training and test time) performs the best across all the settings.
⢠TrainQA+TestAQ (questions can be asked at training time but not at test time) performs worse than TrainQA+TestQA (questions can be asked at neither training nor test time) in tasks Question Clariï¬cation and Knowledge Operation due to the discrepancy between training and testing.
⢠TrainQA+TestAQ performs better than TrainQA+TestQA on all Knowledge Acquisition tasks, the only exception being the Cont-MemN2N model on the Missing Triple setting. The explanation is that for most tasks in Knowledge Acquisition, the learner has no chance of giving the correct answer without asking questions. The beneï¬t from asking is thus large enough to compensate for the negative effect introduced by data discrepancy between training and test time.
⢠TrainMix offers ï¬exibility in bridging the gap between datasets generated using QA and AQ, very slightly underperforming TrainAQ+TestAQ, but gives competitive results on both TestQA and TestAQ in the Question Clariï¬cation and Knowledge Operations tasks.
⢠TrainAQ+TestQA (allowing questions at training time but forbid questions at test time) per- forms the worst, even worse than TrainQA+TestQA. This has a real-world analogy where a student becomes dependent on the teacher answering their questions, later struggling to answer the test questions without help.
⢠In the Missing Question Entity task (the student does not know about the question entity), the Missing Answer Entity task (the student does not know about the answer entity), and Missing Everything task, the bot achieves accuracy less than 0.01 if not asking questions at test time (i.e., TestQA).
⢠The performance of TestModelAQ, where the bot relies on its model to ask questions at test time (and thus can ask irrelevant questions) performs similarly to asking the correct question at test time (TestAQ) and better than not asking questions (TestQA).
- Cont-MemN2N signiï¬cantly outperforms vanilla-MemN2N. One explanation is that considering context provides signiï¬cant evidence distinguishing correct answers from candidates in the dialogue history, especially in cases where the model encounters unfamiliar words.
RL Results: For the RL settings, we present results for Task 2 (Question Verification) and Task 6 (Missing Answer Entities) in Figure 6. Task 2 represents scenarios where different types of student
[Plots: question-asking rate vs. question cost and final accuracy vs. question cost, with curves for good, medium, and poor students.]

Figure 6: Results of online learning for Task 2 and Task 6.
have different abilities to correctly answer questions (e.g., a poor student can still sometimes give correct answers even when they do not fully understand the question). Task 6 represents tasks where a poor learner who lacks the knowledge necessary to answer the question can hardly give a correct answer. All types of students, including the good student, will theoretically benefit from asking questions (asking for the correct answer) in Task 6. We show the percentage of question-asking versus the cost of AQ on the test set, and the accuracy of question-answering on the test set versus the cost of AQ. Our main findings were:
• A good student does not need to ask questions in Task 2 (Question Verification), because they already understand the question. They will still raise questions asking for the correct answer in Task 6 (Missing Answer Entities) when the cost is low.

• A poor student always asks questions when the cost is low. As the cost increases, the frequency of question-asking declines.

• As the AQ cost increases gradually, good students stop asking questions earlier than medium and poor students. The explanation is intuitive: poor students benefit more from asking questions than good students, so they continue asking even with higher penalties (see the cost-benefit sketch after this list).

• As the probability of question-asking declines, the accuracy for poor and medium students drops. Good students are more resilient to not asking questions.
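These trends follow a simple expected-value argument. The toy rule below is purely illustrative (the bot in the paper learns when to ask from reward online, rather than applying a fixed rule); the function name and probabilities are ours.

```python
def should_ask(p_correct_if_ask, p_correct_if_not, ask_cost):
    """Toy cost-benefit rule: ask iff the expected gain in answer
    reward exceeds the cost of asking. Illustrative only."""
    return p_correct_if_ask - ask_cost > p_correct_if_not

# A good student gains little from asking, so it stops asking at a
# lower cost than a poor student does:
print(should_ask(0.9, 0.8, 0.2))  # good student, cost 0.2 -> False
print(should_ask(0.7, 0.1, 0.2))  # poor student, cost 0.2 -> True
```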
6.2 MECHANICAL TURK
Results for the Mechanical Turk tasks are given in Table 3. We again compare vanilla-MemN2N and Cont-MemN2N, using the same TrainAQ/TrainQA and TestAQ/TestQA combinations as before, for Tasks 4 and 8 as described in Section 4.2. We tune hyperparameters on the validation set, repeat each experiment 10 times, and report the best result.
While performance is lower than on the related Task 4 and Task 8 simulator tasks, we still arrive at the same trends and conclusions when real data from humans is used. The performance was expected to be lower because (i) real data has more lexical variety, complexity and noise; and (ii) the training set was smaller due to data collection costs (10k vs. 180k). We perform an analysis of the difference between simulated and real training data (or combining the two) in the appendix, which shows that using real data is indeed important and measurably superior to using simulated data.
| Model | Train \ Test | Task 4: K. Verification (TestQA / TestAQ) | Task 8: Triple (TestQA / TestAQ) |
|---|---|---|---|
| vanilla-MemN2N | TrainQA | 0.331 / 0.313 | 0.133 / 0.162 |
| vanilla-MemN2N | TrainAQ | 0.318 / 0.375 | 0.072 / 0.422 |
| Cont-MemN2N | TrainQA | 0.712 / 0.703 | 0.308 / 0.234 |
| Cont-MemN2N | TrainAQ | 0.679 / 0.774 | 0.137 / 0.797 |

Table 3: Mechanical Turk task results. Asking questions (AQ) outperforms only answering questions without asking (QA).
More importantly, the same main conclusion is observed as before: TrainAQ+TestAQ (questions can be asked at both training and test time) performs the best across all the settings. That is, we show that a bot asking questions to humans learns to outperform one that only answers them.
# 7 CONCLUSIONS
In this paper, we explored how an intelligent agent can benefit from interacting with users by asking questions. We developed tasks where interaction via asking questions is desired. We explored both online and offline settings that mimic different real-world situations, and showed that in most cases, teaching a bot to interact with humans facilitates language understanding and consequently leads to better question-answering ability.
# REFERENCES
Mohammad Amin Bassiri. Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011.

Antoine Bordes and Jason Weston. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683, 2016.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.

Richard Higgins, Peter Hartley, and Alan Skelton. The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1):53-64, 2002.

Andrew S Latham. Learning through feedback. Educational Leadership, 54(8):86-87, 1997.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.

Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015.

Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015.

Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.

Sida I Wang, Percy Liang, and Christopher D Manning. Learning language games through interaction. arXiv preprint arXiv:1606.02447, 2016.

Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016.

Margaret G Werts, Mark Wolery, Ariane Holcombe, and David L Gast. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55-75, 1995.

Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

Terry Winograd. Understanding natural language. Cognitive Psychology, 3(1):1-191, 1972.

Ludwig Wittgenstein. Philosophical investigations. John Wiley & Sons, 2010.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015.
# Appendix
End-to-End Memory Networks. The input to an end-to-end memory network model (MemN2N) is the last utterance of the dialogue history x as well as a set of memories (context) C = (c_1, c_2, ..., c_N). The memory C encodes both short-term memory, e.g., dialogue histories between the bot and the teacher, and long-term memory, e.g., the knowledge-base facts the bot has access to. Given the input x and C, the goal is to produce an output/label a.
In the first step, the query x is transformed into a vector representation u_0 by summing up its constituent word embeddings: u_0 = Ax. The input x is a bag-of-words vector and A is the d × V word embedding matrix, where d denotes the vector dimensionality and V denotes the vocabulary size. Each memory c_i is similarly transformed into a vector m_i. The model reads information from the memory by linking the input representation u_0 with the memory vectors m_i using softmax weights:
o_1 = \sum_i p_i^1 m_i, \qquad p_i^1 = \mathrm{softmax}(u_0^\top m_i) \qquad (2)
The goal is to select memories relevant to the last utterance x, i.e., the memories with large values of p_i^1. The queried memory vector o_1 is the weighted sum of the memory vectors; it is added on top of the original input, u_1 = o_1 + u_0, and u_1 is then used to query the memory again. This process is repeated N times (so-called "hops"). N is set to three in all experiments in this paper.
In the end, u_N is fed to a softmax function for the final prediction:

\hat{a} = \mathrm{softmax}(u_N^\top y_1, u_N^\top y_2, \ldots, u_N^\top y_L) \qquad (3)

where L denotes the number of candidate answers and y denotes the representation of the answer. If the answer is a word, y is the corresponding word embedding. If the answer is a sentence, y is the embedding for the sentence, obtained in the same way as the embeddings for the query x and the memories c.
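As a concrete illustration of Eqs. (2)-(3), the numpy sketch below implements the read/hop/predict loop. It shares a single embedding matrix across query, memories and answers for brevity (the model can use distinct matrices per hop); all names are ours, not the authors' code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memn2n_predict(x, memories, answers, A, n_hops=3):
    """x, memories and answers are bag-of-words count vectors of size V;
    A is the d x V embedding matrix. Returns a distribution over answers."""
    u = A @ x                                   # u0 = Ax
    M = np.stack([A @ c for c in memories])     # memory vectors m_i
    for _ in range(n_hops):                     # N "hops" over the memory
        p = softmax(M @ u)                      # p_i = softmax(u^T m_i)
        o = p @ M                               # o = sum_i p_i m_i
        u = u + o                               # u <- o + u
    Y = np.stack([A @ y for y in answers])      # candidate answer embeddings
    return softmax(Y @ u)                       # a_hat over the L candidates
```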
Reward-Based Imitation (RBI) and Forward Prediction (FP). RBI and FP are two dialogue learning strategies proposed in (Weston, 2016), harnessing different types of dialogue signals. RBI handles the case where the reward or correctness of a bot's answer is explicitly given (for example, +1 if the bot's answer is correct and 0 otherwise). The model is directly trained to predict the correct answers (with label 1) at training time, which can be done using End-to-End Memory Networks (MemN2N) (Sukhbaatar et al., 2015) that map a dialogue input to a prediction.
FP handles the situation where a real-valued reward for a bot's answer is not available, meaning that there are no +1 or 0 labels paired with a student's utterance. However, the teacher will give a response to the bot's answer, taking the form of a dialogue utterance. More formally, suppose that x denotes the teacher's question and C = (c_1, c_2, ..., c_N) denotes the dialogue history. In our AQ settings, the bot asks a question a regarding the teacher's question, denoted as a ∈ A, where A denotes the student's question pool. The teacher then provides an utterance in response to the student question a. In FP, the model first maps the teacher's initial question x and the dialogue history C to a vector representation u using a memory network with multiple hops. The model then performs another hop of attention over all possible student questions in A, with an additional part that incorporates the information of which candidate (i.e., a) was actually selected in the dialogue:
p_{\hat{a}} = \mathrm{softmax}(u^\top y_{\hat{a}}), \qquad o = \sum_{\hat{a} \in A} p_{\hat{a}} \left( y_{\hat{a}} + \beta \cdot \mathbb{1}[\hat{a} = a] \right) \qquad (4)
where y_{\hat{a}} denotes the vector representation of the student's candidate question \hat{a}, and β is a d-dimensional vector signifying the actual action a that the student chose. For tasks where the student has only one way to ask questions (e.g., "what do you mean"), there is no need to perform hops of attention over candidates since the cardinality of A is just 1. We thus directly assign a probability of 1 to the student's question, making o the sum of the vector representations y_a and β.
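The sketch below illustrates this candidate-question hop (Eq. 4) in numpy, given the representation u produced by the memory network; the result o is later combined with u as described next. Function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def fp_candidate_read(u, Y_cand, chosen_idx, beta):
    """u: (d,) query from the memory network; Y_cand: (C, d) embeddings of
    the candidate student questions; chosen_idx: index of the question
    actually asked; beta: (d,) vector marking the chosen action."""
    z = Y_cand @ u
    p = np.exp(z - z.max()); p /= p.sum()            # p_a = softmax(u^T y_a)
    chosen = np.zeros(len(Y_cand)); chosen[chosen_idx] = 1.0   # 1[a_hat == a]
    # o = sum_a p_a * (y_a + beta * 1[a_hat == a])
    return (p[:, None] * (Y_cand + chosen[:, None] * beta[None, :])).sum(axis=0)
```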
o is then combined with u to predict the teacher's feedback t using a softmax:

u_1 = o + u, \qquad t = \mathrm{softmax}(u_1^\top x_{r_1}, u_1^\top x_{r_2}, \ldots, u_1^\top x_{r_N}) \qquad (5)

where x_{r_i} denotes the embedding of the i-th candidate response.

Dialogue Simulator. In this section we further detail the simulator and the datasets we generated in order to realize the various scenarios discussed in Section 3. We focused on the problem of movie QA, for which we adapted the WikiMovies dataset proposed in Weston et al. (2015). The dataset consists of roughly 100k questions with over 75k entities from the open movie dataset (OMDb).
Each dialogue generated by the simulator takes place between a student and a teacher. The simulator samples a random question from the WikiMovies dataset and fetches the set of all KB facts relevant to the chosen question. This question is assumed to be the one the teacher asks its student, and is referred to as the "original" question. The student is first presented with the relevant KB facts, followed by the original question. Providing the KB facts to the student allows us to control the exact knowledge the student is given access to while answering the questions. At this point, depending on the task at hand and the student's ability to answer, the student might choose to directly answer it or to ask a "followup" question. The nature of the followup question depends on the scenario under consideration. If the student answers the question, it gets a response from the teacher about its correctness and the conversation ends. However, if the student poses a followup question, the teacher gives an appropriate response, which should give additional information to the student to answer the original question. In order to make things more complicated, the simulator pads the conversation with several unrelated student-teacher question-answer pairs. These question-answer pairs can be viewed as distractions and are used to test the student's ability to remember the additional knowledge provided by the teacher after it was queried. For each dialogue, the simulator incorporates 5 such pairs (10 sentences). We refer to these pairs as conversational histories.
For the QA setting (see Section 3), the dialogues generated by the simulator are such that the student never asks a clarification question. Instead, it simply responds to the original question, even if it is wrong. For the dialogues in the AQ setting, the student always asks a clarification question. The nature of the question asked is dependent on the scenario (whether it is Question Clarification, Knowledge Operation, or Knowledge Acquisition) under consideration. In order to simulate the case where the student sometimes chooses to directly answer the original question and at other times chooses to ask a question, we created training datasets that were a combination of QA and AQ (called "Mixed"). For all these cases, the student needs to give an answer to the teacher's original question at the end.
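A minimal sketch of how one such episode could be assembled is given below. The record format, template strings and argument names are our own illustration, not the released simulator.

```python
import random

def build_episode(question, kb_facts, distractor_pairs, mode="AQ",
                  followup="what do you mean ?", n_pad=5):
    """Assemble one simulated dialogue: KB facts, the teacher's original
    question, an optional follow-up exchange (AQ mode), distractor QA
    pairs, and the student's final answer."""
    episode = [("kb", fact) for fact in kb_facts]     # relevant KB facts first
    episode.append(("teacher", question))             # the "original" question
    if mode == "AQ":
        episode.append(("student", followup))         # clarification question
        episode.append(("teacher", "<response with extra information>"))
    for q, a in random.sample(distractor_pairs, n_pad):  # 5 pairs = 10 sentences
        episode += [("student", q), ("teacher", a)]
    episode.append(("student", "<answer to the original question>"))
    return episode
```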
# Instructions given to Turkers
These are the instructions given for the textual feedback Mechanical Turk task (we also constructed a separate task to collect the questions to ask the bot with similar instructions, not described here):
Task 4 (answers to bot's questions):
Title: Write brief responses to given dialogue exchanges (about 15 min)
Description: Write a brief response answering a provided question (25 questions per HIT).
# Directions:
Each task consists of the following triplets:
1) a question by the teacher
2) the correct answer(s) to the question (separated by "OR"), unknown to the student
3) a clarifying question asking for feedback from the teacher

Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response replying to the student's question. The correct answers are provided so that you know whether the student's question was relevant or not. For example, given 1) question: "what is a color in the united states flag?"; 2) correct answer: "white OR blue OR red"; 3) student reply: "does this have to do with 'US Flag has colors red,white,blue'?", your response could be something like "that's right!"; for 3) reply: "does this have to do with 'United States has population 320 million'", you might say "No, that fact is not relevant" or "Not really". Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or similar responses are overused, we'll reject the HIT. Avoid naming the student or addressing "the class" directly. We will consider bonuses for higher quality responses during review.
Task 8 (answers to bot's questions):
Title: Write brief responses to given dialogue exchanges (about 10 min)
Description: Write a sentence describing the answer to a question (25 questions per HIT).
# Directions:
Each task consists of the following triplets:
1) a question by the teacher
2) the correct answer(s) to the question (separated by "OR"), unknown to the student
3) a question from the student asking the teacher for the answer

Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response replying to the student's question. The correct answers are provided so that you know which answers to provide. For example, given 1) question: "what is a color in the united states flag?"; 2) correct answer: "white OR blue OR red"; 3) student reply: "i dont know. what's the answer ?", your response could be something like "the color white is in the US flag" or "blue and red both appear in it". Please vary responses and try to minimize spelling mistakes, and do not include the capitalized "OR" in your response. If the same responses are copied/pasted or similar responses are overused, we'll reject the HIT. You don't need to mention every correct answer in your response. Avoid naming the student or addressing "the class" directly. We will consider bonuses for higher quality responses during review.
# Additional Mechanical Turk Experiments
Here we provide additional experiments to supplement the ones described in Section 6.2. In the main paper, results were shown when training and testing on the collected Mechanical Turk data (around 10,000 training dialogue episodes). As we collected the data in the same settings as Tasks 4 and 8 of our simulator, we could also consider supplementing training with simulated data, of which we have a larger amount (over 100,000 episodes). Note this is only for training; we still test on the real (Mechanical Turk collected) data. Although the simulated data has less lexical variety, as it is built from templates, the larger size might improve results.
Results when training on the combination of real and simulator data and testing on real data are given in Table 5. This should be compared to training on only the real data (Table 4) and only on the simulator data (Table 6). The best results are obtained from the combination of simulator and real data. The best real-data-only results (selecting over algorithm and training strategy) on both tasks outperform the best results using simulator data: with Cont-MemN2N in the TrainAQ/TestAQ setting, 0.774 and 0.797 are obtained vs. 0.714 and 0.788 for Tasks 4 and 8 respectively. This
is despite there being far fewer examples of real data compared to simulator data. Overall we draw two main conclusions from this additional experiment: (i) real data is indeed measurably superior to simulated data for training our models; (ii) in all cases (across different algorithms, tasks and data types, be they real data, simulated data or combinations), the bot asking questions (AQ) outperforms one that only answers questions and does not ask them (QA). The latter reinforces the main result of the paper.
| Model | Train \ Test | Task 4: K. Verification (TestQA / TestAQ) | Task 8: Triple (TestQA / TestAQ) |
|---|---|---|---|
| vanilla-MemN2N | TrainQA | 0.331 / 0.313 | 0.133 / 0.162 |
| vanilla-MemN2N | TrainAQ | 0.318 / 0.375 | 0.072 / 0.422 |
| Cont-MemN2N | TrainQA | 0.712 / 0.703 | 0.308 / 0.234 |
| Cont-MemN2N | TrainAQ | 0.679 / 0.774 | 0.137 / 0.797 |

Table 4: Mechanical Turk task results, using real data for training and testing.
| Model | Train \ Test | Task 4: K. Verification (TestQA / TestAQ) | Task 8: Triple (TestQA / TestAQ) |
|---|---|---|---|
| vanilla-MemN2N | TrainQA | 0.356 / 0.311 | 0.128 / 0.174 |
| vanilla-MemN2N | TrainAQ | 0.340 / 0.445 | 0.150 / 0.487 |
| Cont-MemN2N | TrainQA | 0.733 / 0.717 | 0.368 / 0.352 |
| Cont-MemN2N | TrainAQ | 0.704 / 0.792 | 0.251 / 0.825 |

Table 5: Results on Mechanical Turk tasks using a combination of real and simulated data for training, testing on real data.
| Model | Train \ Test | Task 4: K. Verification (TestQA / TestAQ) | Task 8: Triple (TestQA / TestAQ) |
|---|---|---|---|
| vanilla-MemN2N | TrainQA | 0.340 / 0.311 | 0.120 / 0.165 |
| vanilla-MemN2N | TrainAQ | 0.326 / 0.390 | 0.067 / 0.405 |
| Cont-MemN2N | TrainQA | 0.665 / 0.648 | 0.349 / 0.342 |
| Cont-MemN2N | TrainAQ | 0.642 / 0.714 | 0.197 / 0.788 |

Table 6: Results on Mechanical Turk tasks using only simulated data for training, but testing on real data.
# Additional Offline Supervised Learning Experiments
Question Clarification and Knowledge Operation tasks (each cell: TestQA / TestAQ):

| Train \ Test | Task 1: Q. Paraphrase | Task 2: Q. Verification | Task 3: Ask For Relevant K. | Task 4: K. Verification |
|---|---|---|---|---|
| TrainQA | 0.338 / 0.284 | 0.340 / 0.271 | 0.462 / 0.344 | 0.482 / 0.322 |
| TrainAQ | 0.213 / 0.450 | 0.225 / 0.373 | 0.187 / 0.632 | 0.283 / 0.540 |
| TrainAQ(+FP) | 0.288 / 0.464 | 0.146 / 0.320 | 0.342 / 0.631 | 0.311 / 0.524 |
| TrainMix | 0.326 / 0.373 | 0.329 / 0.326 | 0.442 / 0.558 | 0.476 / 0.491 |

Knowledge Acquisition tasks (vanilla models; each cell: TestQA / TestAQ):

| Train \ Test | Task 5: Q. Entity | Task 6: Answer Entity | Task 7: Relation Entity | Task 8: Triple | Task 9: Everything |
|---|---|---|---|---|---|
| TrainQA (vanilla) | <0.01 / 0.223 | <0.01 / <0.01 | 0.109 / 0.129 | 0.201 / 0.259 | <0.01 / <0.01 |
| TrainAQ (vanilla) | <0.01 / 0.660 | <0.01 / <0.01 | 0.082 / 0.156 | 0.124 / 0.664 | <0.01 / <0.01 |
| TrainAQ(+FP) | <0.01 / 0.742 | <0.01 / <0.01 | 0.085 / 0.188 | 0.064 / 0.702 | <0.01 / <0.01 |
| TrainMix (vanilla) | <0.01 / 0.630 | <0.01 / <0.01 | 0.070 / 0.152 | 0.180 / 0.572 | <0.01 / <0.01 |

Table 7: Results for offline settings using memory networks.
| Train | Task 2: Q. Verification (TestModelAQ) | Task 4: K. Verification (TestModelAQ) |
|---|---|---|
| TrainAQ | 0.382 | 0.480 |
| TrainAQ(+FP) | 0.344 | 0.501 |
| TrainMix | 0.352 | 0.469 |

Table 8: Results for the TestModelAQ setting.
# FASTTEXT.ZIP: COMPRESSING TEXT CLASSIFICATION MODELS
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou & Tomas Mikolov Facebook AI Research {ajoulin,egrave,bojanowski,matthijs,rvj,tmikolov}@fb.com
# ABSTRACT
We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent quantization artefacts. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy.
# 1 INTRODUCTION
Text classification is an important problem in Natural Language Processing (NLP). Real world use-cases include spam filtering or e-mail categorization. It is a core component in more complex systems such as search and ranking. Recently, deep learning techniques based on neural networks have achieved state of the art results in various NLP applications. One of the main successes of deep learning is due to the effectiveness of recurrent networks for language modeling and their application to speech recognition and machine translation (Mikolov, 2012). However, in other cases including several text classification problems, it has been shown that deep networks do not convincingly beat the prior state of the art techniques (Wang & Manning, 2012; Joulin et al., 2016).

In spite of being (typically) orders of magnitude slower to train than traditional techniques based on n-grams, neural networks are often regarded as a promising alternative due to compact model sizes, in particular for character-based models. This is important for applications that need to run on systems with limited memory such as smartphones.

This paper specifically addresses the compromise between classification accuracy and the model size. We extend our previous work implemented in the fastText library¹. It is based on n-gram features, dimensionality reduction, and a fast approximation of the softmax classifier (Joulin et al., 2016). We show that a few key ingredients, namely feature pruning, quantization, hashing, and re-training, allow us to produce text classification models with tiny size, often less than 100kB when trained on several popular datasets, without noticeably sacrificing accuracy or speed.

We plan to publish the code and scripts required to reproduce our results as an extension of the fastText library, thereby providing strong reproducible baselines for text classifiers that optimize the compromise between the model size and accuracy. We hope that this will help the engineering community to improve existing applications by using more efficient models.

This paper is organized as follows. Section 2 introduces related work, Section 3 describes our text classification model and explains how we drastically reduce the model size. Section 4 shows the effectiveness of our approach in experiments on multiple text classification benchmarks.
¹ https://github.com/facebookresearch/fastText
2 RELATED WORK
Models for text classification. Text classification is a problem that has its roots in many applications such as web search, information retrieval and document classification (Deerwester et al., 1990; Pang & Lee, 2008). Linear classifiers often obtain state-of-the-art performance while being scalable (Agarwal et al., 2014; Joachims, 1998; Joulin et al., 2016; McCallum & Nigam, 1998). They are particularly interesting when associated with the right features (Wang & Manning, 2012). They usually require storing embeddings for words and n-grams, which makes them memory inefficient.

Compression of language models. Our work is related to compression of statistical language models. Classical approaches include feature pruning based on entropy (Stolcke, 2000) and quantization. Pruning aims to keep only the most important n-grams in the model, leaving out those with probability lower than a specified threshold. Further, the individual n-grams can be compressed by quantizing the probability value, and by storing the n-gram itself more efficiently than as a sequence of characters. Various strategies have been developed, for example using tree structures or hash functions, and are discussed in (Talbot & Brants, 2008).

Compression for similarity estimation and search. There is a large body of literature on how to compress a set of vectors into compact codes, such that the comparison of two codes approximates a target similarity in the original space. The typical use-case of these methods considers an indexed dataset of compressed vectors, and a query for which we want to find the nearest neighbors in the indexed set. One of the most popular is Locality-Sensitive Hashing (LSH) by Charikar (2002), which is a binarization technique based on random projections that approximates the cosine similarity between two vectors through a monotonous function of the Hamming distance between the two corresponding binary codes. In our paper, LSH refers to this binarization strategy². Many subsequent works have improved this initial binarization technique, such as spectral hashing (Weiss et al., 2009), or Iterative Quantization (ITQ) (Gong & Lazebnik, 2011), which learns a rotation matrix minimizing the quantization loss of the binarization. We refer the reader to two recent surveys by Wang et al. (2014) and Wang et al. (2015) for an overview of the binary hashing literature.

Beyond these binarization strategies, more general quantization techniques derived from Jegou et al. (2011) offer better trade-offs between memory and the approximation of a distance estimator. The Product Quantization (PQ) method approximates the distances by calculating, in the compressed domain, the distance between their quantized approximations. This method is statistically guaranteed to preserve the Euclidean distance between the vectors within an error bound directly related to the quantization error. The original PQ has been concurrently improved by Ge et al. (2013) and Norouzi & Fleet (2013), who learn an orthogonal transform minimizing the overall quantization loss. In our paper, we consider the Optimized Product Quantization (OPQ) variant (Ge et al., 2013).

Softmax approximation. The aforementioned works approximate either the Euclidean distance or the cosine similarity (both being equivalent in the case of unit-norm vectors). However, in the context of fastText, we are specifically interested in approximating the maximum inner product involved in a softmax layer. Several approaches derived from LSH have been recently proposed to achieve this goal, such as Asymmetric LSH by Shrivastava & Li (2014), subsequently discussed by Neyshabur & Srebro (2015). In our work, since we are not constrained to purely binary codes, we resort to a more traditional encoding by employing a magnitude/direction parametrization of our vectors. Therefore we only need to encode/compress a unitary d-dimensional vector, which fits the aforementioned LSH and PQ methods well.

Neural network compression models. Recently, several research efforts have been conducted to compress the parameters of architectures involved in computer vision, namely for state-of-the-art Convolutional Neural Networks (CNNs) (Han et al., 2016; Lin et al., 2015). Some use vector quantization (Gong et al., 2014) while others binarize the network (Courbariaux et al., 2016). Denil et al. (2013) show that such classification models are easily compressed because they are over-parametrized, which concurs with early observations by LeCun et al. (1990).

² In the literature, LSH refers to multiple distinct strategies related to the Johnson-Lindenstrauss lemma. For instance, LSH sometimes refers to a partitioning technique with random projections allowing for sublinear search via cell probes, see for instance the E2LSH variant of Datar et al. (2004).
Some of these works aim at reducing both the model size and the speed. In our case, since the fastText classifier on which our proposal is built is already very efficient, we are primarily interested in reducing the size of the model while keeping a comparable classification efficiency.
# 3 PROPOSED APPROACH
3.1 TEXT CLASSIFICATION
In the context of text classification, linear classifiers (Joulin et al., 2016) remain competitive with more sophisticated, deeper models, and are much faster to train. On top of standard tricks commonly used in linear text classification (Agarwal et al., 2014; Wang & Manning, 2012; Weinberger et al., 2009), Joulin et al. (2016) use a low-rank constraint to reduce the computation burden while sharing information between different classes. This is especially useful in the case of a large output space, where rare classes may have only a few training examples. In this paper, we focus on a similar model, i.e., one that minimizes the softmax loss ℓ over N documents:
\sum_{n=1}^{N} \ell(y_n, B A x_n), \qquad (1)
where xn is a bag of one-hot vectors and yn the label of the n-th document. In the case of a large vocabulary and a large output space, the matrices A and B are big and can require gigabytes of memory. Below, we describe how we reduce this memory usage.
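As a reference point for the rest of the section, here is a dense numpy sketch of the loss of Eq. (1). The real implementation uses sparse features and, for large output spaces, a hierarchical softmax; the names below are ours.

```python
import numpy as np

def softmax_loss(X, labels, A, B):
    """X: (n, V) bag-of-feature counts; labels: (n,) integer classes;
    A: (d, V) input embeddings; B: (L, d) output matrix. Returns the
    mean softmax loss over the n documents."""
    H = X @ A.T                                   # document embeddings A x_n
    S = H @ B.T                                   # class scores B A x_n
    S -= S.max(axis=1, keepdims=True)             # numerical stability
    logp = S - np.log(np.exp(S).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()
```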
3.2 BOTTOM-UP PRODUCT QUANTIZATION
Product quantization is a popular method for compressed-domain approximate nearest neighbor search (Jegou et al., 2011). As a compression technique, it approximates a real-valued vector by finding the closest vector in a pre-defined structured set of centroids, referred to as a codebook. This codebook is not enumerated, since it is extremely large. Instead it is implicitly defined by its structure: a d-dimensional vector x ∈ R^d is approximated as
\hat{x} = \sum_{i=1}^{k} q_i(x), \qquad (2)

where the different subquantizers q_i : x ↦ q_i(x) are complementary in the sense that their respective centroids lie in distinct orthogonal subspaces, i.e., ∀ i ≠ j, ∀ x, y, ⟨q_i(x) | q_j(y)⟩ = 0. In the original PQ, the subspaces are aligned with the natural axis, while OPQ learns a rotation, which amounts to alleviating this constraint and to not depend on the original coordinate system. Another way to see this is to consider that PQ splits a given vector x into k subvectors x^i, i = 1...k, each of dimension d/k: x = [x^1 ... x^i ... x^k], and quantizes each subvector using a distinct k-means quantizer. Each subvector x^i is thus mapped to the closest centroid amongst 2^b centroids, where b is the number of bits required to store the quantization index of the subquantizer, typically b = 8. The reconstructed vector can take 2^{kb} distinct reproduction values, and is stored in kb bits.
PQ estimates the inner product in the compressed domain as
\hat{x}^\top y = \sum_{i=1}^{k} q_i(x^i)^\top y^i. \qquad (3)

This is a straightforward extension of the square L2 distance estimation of Jegou et al. (2011). In practice, the vector estimate \hat{x} is trivially reconstructed from the codes, i.e., from the quantization indexes, by concatenating these centroids.
The two parameters involved in PQ, namely the number of subquantizers k and the number of bits b per quantization index, are typically set to k ∈ [2, d/2] and b = 8 to ensure byte-alignment.
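The following sketch makes the subvector/codebook structure concrete. It learns one k-means codebook per subspace with scikit-learn (the actual implementation differs), assumes at least 2^b training vectors, and all class and function names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

class ProductQuantizer:
    def __init__(self, k=4, b=8):
        self.k, self.ncent = k, 2 ** b      # k subquantizers, 2^b centroids each

    def fit(self, X):
        self.subs = np.array_split(np.arange(X.shape[1]), self.k)
        self.codebooks = [KMeans(self.ncent, n_init=4).fit(X[:, s]).cluster_centers_
                          for s in self.subs]
        return self

    def encode(self, X):
        # code of x^i = index of the nearest centroid in subspace i (kb bits total)
        return np.stack([((X[:, s][:, None] - C[None]) ** 2).sum(-1).argmin(1)
                         for s, C in zip(self.subs, self.codebooks)], axis=1)

    def decode(self, codes):
        # x_hat = concatenation of the selected centroids (Eq. 2)
        return np.hstack([C[codes[:, i]] for i, C in enumerate(self.codebooks)])

    def inner_products(self, codes, y):
        # Eq. (3): x_hat^T y, computed from the reconstruction
        return self.decode(codes) @ y
```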
Discussion. PQ offers several interesting properties in our context of text classification. Firstly, the training is very fast because the subquantizers have a small number of centroids, i.e., 256 centroids for b = 8. Secondly, at test time it allows the reconstruction of the vectors with almost no
computational and memory overhead. Thirdly, it has been successfully applied in computer vision, offering much better performance than binary codes, which makes it a natural candidate to compress relatively shallow models. As observed by Sánchez & Perronnin (2011), using PQ just before the last layer incurs a very limited loss in accuracy when combined with a support vector machine.
In the context of text classification, the norms of the vectors are widely spread, typically with a ratio of 1000 between the max and the min. Therefore k-means performs poorly because it optimizes an absolute error objective, so it maps all low-norm vectors to 0. A simple solution is to separate the norm and the angle of the vectors and to quantize them separately. This allows a quantization with no loss of performance, yet requires an extra b bits per vector.
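A sketch of this normalization trick is below: the direction goes through PQ, while the scalar norm gets its own small codebook (the extra b bits). The paper does not spell out the scalar quantizer; the log-domain uniform grid here is our assumption, motivated by the wide spread of the norms.

```python
import numpy as np

def split_norm_direction(X, eps=1e-12):
    """Separate magnitude and direction; each part is quantized on its own."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return norms.ravel(), X / np.maximum(norms, eps)

def quantize_norms(norms, b=8):
    """Assumed scalar quantizer: 2^b levels, uniform in log-space since
    the norms span roughly three orders of magnitude."""
    levels = np.linspace(np.log(norms.min()), np.log(norms.max()), 2 ** b)
    idx = np.abs(np.log(norms)[:, None] - levels[None, :]).argmin(axis=1)
    return idx, np.exp(levels)        # per-vector code and the codebook
```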
Bottom-up strategy: re-training. The first works aiming at compressing CNN models, like the one proposed by Gong et al. (2014), used the reconstruction from off-the-shelf PQ, i.e., without any re-training. However, as observed in Sablayrolles et al. (2016), when using quantization methods like PQ, it is better to re-train the layers occurring after the quantization, so that the network can re-adjust itself to the quantization. There is a strong argument for this re-training strategy: the square magnitude of vectors is reduced, on average, by the average quantization error for any quantizer satisfying the Lloyd conditions; see Jegou et al. (2011) for details.
This suggests a bottom-up learning strategy where we first quantize the input matrix, then retrain and quantize the output matrix (the input matrix being frozen). Experiments in Section 4 show that it is worth adopting this strategy.
Memory savings with PQ. In practice, the bottom-up PQ strategy offers a compression factor of 10 without any noticeable loss of performance. Without re-training, we notice a drop in accuracy between 0.1% and 0.5%, depending on the dataset and setting; see Section 4 and the appendix.
# 3.3 FURTHER TEXT SPECIFIC TRICKS
The memory usage strongly depends on the size of the vocabulary, which can be large in many text classification tasks. While it is clear that a large part of the vocabulary is useless or redundant, directly reducing the vocabulary to the most frequent words is not satisfactory: most of the frequent words, like "the" or "is", are not discriminative, in contrast to some rare words, e.g., in the context of tag prediction. In this section, we discuss a few heuristics to reduce the space taken by the dictionary. They lead to major memory reduction, in extreme cases by a factor of 100. We experimentally show that this drastic reduction is complementary with the PQ compression method, meaning that the combination of both strategies reduces the model size by a factor of up to ×1000 for some datasets.
Pruning the vocabulary. Discovering which word or n-gram must be kept to preserve the overall performance is a feature selection problem. While many approaches have been proposed to select groups of variables during training (Bach et al., 2012; Meier et al., 2008), we are interested in selecting a fixed subset of K words and n-grams from a pre-trained model. This can be achieved by selecting the K embeddings that preserve as much of the model as possible, which can be reduced to selecting the K words and n-grams associated with the highest norms.
While this approach offers major memory savings, it has one drawback occurring in some particular cases: some documents may not contain any of the K best features, leading to a significant drop in performance. It is thus important to keep the K best features under the condition that they cover the whole training set. More formally, the problem is to find a subset S of the feature set V that maximizes the sum of the norms w_s under the constraint that all the documents in the training set D are covered:
\max_{S \subseteq V} \sum_{s \in S} w_s \quad \text{s.t.} \quad |S| \leq K, \quad P \mathbb{1}_S \geq \mathbb{1}_D,
where P is a matrix such that P_{ds} = 1 if the s-th feature is in the d-th document, and 0 otherwise. This problem is directly related to set covering problems, which are NP-hard (Feige, 1998). Standard greedy approaches require storing an inverted index or doing multiple passes over the dataset, which is prohibitive on very large datasets (Chierichetti et al., 2010). This problem can be cast as an instance of online submodular maximization with a rank constraint (Badanidiyuru et al., 2014;
[Figure 1: three panels (Sogou, Yahoo, Yelp full) plotting accuracy against the number of bytes per embedding, with curves for Full, PQ, OPQ, LSH+norm, PQ+norm and OPQ+norm.]

Figure 1: Accuracy as a function of the memory per vector/embedding on 3 datasets from Zhang et al. (2015). Note that an extra byte is required when we encode the norm explicitly ("norm").
Bateni et al., 2010). In our case, we use a simple online, parallelizable greedy approach: for each document, we verify whether it is already covered by a retained feature and, if not, we add the feature with the highest norm to our set of retained features. If the number of features is below K, we add the features with the highest norm that have not yet been picked.
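A minimal sketch of this online greedy pruning is given below; `docs` is any iterable of feature-id lists and `norms` maps feature ids to embedding norms. Names are ours.

```python
def greedy_coverage_prune(docs, norms, K):
    """Keep a feature set that covers every document, then fill the
    remaining budget (up to K features) with the highest-norm features."""
    kept = set()
    for feats in docs:                         # single online pass
        if not kept.intersection(feats):       # document not covered yet
            kept.add(max(feats, key=lambda f: norms[f]))
    remaining = sorted((f for f in norms if f not in kept),
                       key=lambda f: -norms[f])
    kept.update(remaining[:max(0, K - len(kept))])
    return kept
```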
Hashing trick & Bloom filter. On small models, the dictionary can take a significant portion of the memory. Instead of saving it, we extend the hashing trick used in Joulin et al. (2016) to both words and n-grams. This strategy is also used in Vowpal Wabbit (Agarwal et al., 2014) in the context of online training. This allows us to save around 1-2Mb with almost no overhead at test time (just the cost of computing the hashing function).
Pruning the vocabulary while using the hashing trick requires keeping a list of the indices of the K remaining buckets. At test time, a binary search over the list of indices is required. It has a complexity of O(log(K)) and a memory overhead of a few hundred kilobytes. Using Bloom filters instead reduces the complexity to O(1) at test time and saves a few hundred kilobytes. However, in practice, it degrades performance.
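The lookup path can be sketched as follows. The concrete hash function is illustrative (fastText's own hashing differs); only the sorted kept-bucket ids and the pruned embedding table need to be stored.

```python
import bisect
import hashlib

N_BUCKETS = 2_000_000                       # default number of n-gram buckets

def bucket(token, n_buckets=N_BUCKETS):
    """Hashing trick: map a word/n-gram to a bucket id, no dictionary kept."""
    h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
    return h % n_buckets

def lookup_row(token, kept_buckets):
    """kept_buckets: sorted list of the K surviving bucket ids. Returns the
    row of the token in the small embedding table, or -1 if the feature
    was pruned away (O(log K) binary search)."""
    b = bucket(token)
    i = bisect.bisect_left(kept_buckets, b)
    return i if i < len(kept_buckets) and kept_buckets[i] == b else -1
```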
# 4 EXPERIMENTS
This section evaluates the quality of our model compression pipeline, compares it to other compression methods on different text classification problems, and compares it to other compact text classifiers.
Evaluation protocol and datasets. Our experimental pipeline is as follows: we train a model using fastText with the default setting unless specified otherwise, that is, 2M buckets, a learning rate of 0.1 and 10 training epochs. The dimensionality d of the embeddings is set to powers of 2 to avoid border effects that could make the interpretation of the results more difficult. As baselines, we use Locality-Sensitive Hashing (LSH) (Charikar, 2002), PQ (Jegou et al., 2011) and OPQ (Ge et al., 2013) (the non-parametric variant). Note that we use an improved version of LSH where random orthogonal matrices are used instead of random matrix projections (Jégou et al., 2008). In a first series of experiments, we use the 8 datasets and evaluation protocol of Zhang et al. (2015). These datasets contain a few million documents and have at most 10 classes. We also explore the limit of quantization on a dataset with an extremely large output space, namely a tag dataset extracted from the YFCC100M collection (Thomee et al., 2016)³, referred to as FlickrTag in the rest of this paper.
[Figure 2: one panel per dataset (AG, Amazon full, Amazon polarity, DBPedia, Sogou, Yahoo, Yelp full, Yelp polarity), plotting the loss of accuracy against model size from 100kB to 100MB, with markers for Full, PQ, Pruned, Zhang et al. (2015) and Xiao & Cho (2016).]

Figure 2: Loss of accuracy as a function of the model size. We compare the compressed model with different levels of pruning with NPQ against the full fastText model. We also compare with Zhang et al. (2015) and Xiao & Cho (2016). Note that the size is on a log scale.
4.1 SMALL DATASETS
Compression techniques. We compare three popular methods used for similarity estimation with compact codes, LSH, PQ and OPQ, on the datasets released by Zhang et al. (2015). Figure 1 shows the accuracy as a function of the number of bytes used per embedding, which corresponds to the number k of subvectors in the case of PQ and OPQ. See more results in the appendix. As discussed in Section 2, LSH reproduces the cosine similarity and is therefore not adapted to un-normalized data. Therefore we only report results with normalization. Once normalized, PQ and OPQ are almost lossless even when using only k = 4 subquantizers per embedding (equivalently, bytes). We observe that using k = d/2, i.e., half of the components of the embeddings, works well in practice. In the rest of the paper, if not stated otherwise, we focus on this setting. The difference between the normalized versions of PQ and OPQ is limited and depends on the dataset. Therefore we adopt the normalized PQ (NPQ) for the rest of this study, since it is faster to train.
| Entropy rank | Word | Norm rank | | Norm rank | Word | Entropy rank |
|---|---|---|---|---|---|---|
| 1 | . | 354 | | 1 | mediocre | 1399 |
| 2 | , | 176 | | 2 | disappointing | 454 |
| 3 | the | 179 | | 3 | so-so | 2809 |
| 4 | and | 1639 | | 4 | lacks | 1244 |
| 5 | i | 2374 | | 5 | worthless | 1757 |
| 6 | a | 970 | | 6 | dreadful | 4358 |
| 7 | to | 1775 | | 7 | drm | 6395 |
| 8 | it | 1956 | | 8 | poorly | 716 |
| 9 | of | 2815 | | 9 | uninspired | 4245 |
| 10 | this | 3275 | | 10 | worst | 402 |

Table 1: Best ranked words w.r.t. entropy (left) and norm (right) on the Amazon full review dataset. We give the rank for both criteria. The norm ranking filters out words carrying little information.
³ Data available at https://research.facebook.com/research/fasttext/
| Dataset | full | 64KiB | 32KiB | 16KiB |
|---|---|---|---|---|
| AG | 92.1 (65M) | 91.4 | 90.6 | 89.1 |
| Amazon full | 60.0 (108M) | 58.8 | 56.0 | 52.9 |
| Amazon pol. | 94.5 (113M) | 93.3 | 92.1 | 89.3 |
| DBPedia | 98.4 (87M) | 98.2 | 98.1 | 97.4 |
| Sogou | 96.4 (73M) | 96.4 | 96.3 | 95.5 |
| Yahoo | 72.1 (122M) | 70.0 | 69.0 | 69.2 |
| Yelp full | 63.8 (78M) | 63.2 | 62.4 | 58.7 |
| Yelp pol. | 95.7 (77M) | 95.3 | 94.9 | 93.2 |
| Average diff. [%] | 0 | -0.8 | -1.7 | -3.5 |

Table 2: Performance of very small models (accuracy; full model size in parentheses). We use a quantization with k = 1, hashing and an extreme pruning. The last row shows the average drop of performance for the different sizes.
Pruning. Figure 2 shows the performance of our model with different sizes. We fix k = d/2 and use different pruning thresholds. NPQ offers a compression rate of ×10 compared to the full model. As the pruning becomes more aggressive, the overall compression can increase up to ×1,000 with little drop of performance and no additional overhead at test time. In fact, using a smaller dictionary makes the model faster at test time. We also compare with character-level Convolutional Neural Networks (CNNs) (Zhang et al., 2015; Xiao & Cho, 2016). They are attractive models for text classification because they achieve similar performance with less memory usage than linear models (Xiao & Cho, 2016). Even though fastText with the default setting uses more memory, NPQ is already on par with the CNNs' memory usage. Note that the CNNs are not quantized, and it would be worth seeing how much they can be quantized with no drop of performance; such a study is beyond the scope of this paper. Our pruning is based on the norm of the embeddings, according to the guidelines of Section 3.3. Table 1 compares the ranking obtained with norms to the ranking obtained using entropy, which is commonly used in unsupervised settings (Stolcke, 2000).
Extreme compression. Finally, in Table 2, we explore the limits of quantized models by looking at the performance obtained for models under 64KiB. Surprisingly, even at 64KiB and 32KiB, the drop of performance is only around 0.8% and 1.7%, despite a compression rate of ×1,000 to ×4,000.
4.2 LARGE DATASET: FLICKRTAG
In this section, we explore the limit of compression algorithms on very large datasets. Similar to Joulin et al. (2016), we consider a hashtag prediction dataset containing 312,116 labels. We set the minimum count for words at 10, leading to a dictionary of 1,427,667 words. We take 10M buckets for n-grams and a hierarchical softmax. We refer to this dataset as FlickrTag.
Output encoding. We are interested in understanding how the performance degrades if the classifier is also quantized (i.e., the matrix B in Eq. 1) and when the pruning is at the limit of the minimum number of features required to cover the full dataset.
| Model | k | norm | retrain | Acc. | Size |
|---|---|---|---|---|---|
| full (uncompressed) | - | - | - | 45.4 | 12 GiB |
| Input | 128 | | | 45.0 | 1.7 GiB |
| Input | 128 | x | | 45.3 | 1.8 GiB |
| Input | 128 | x | x | 45.5 | 1.8 GiB |
| Input+Output | 128 | x | | 45.2 | 1.5 GiB |
| Input+Output | 128 | x | x | 45.4 | 1.5 GiB |

Table 3: FlickrTag: influence of quantizing the output matrix on performance. We use PQ for quantization with an optional normalization. We also retrain the output matrix after quantizing the input one. "norm" refers to the separate encoding of the magnitude and angle, while "retrain" refers to the re-training bottom-up PQ method described in Section 3.2.
Table 3 shows that quantizing both the "input" matrix (i.e., A in Eq. 1) and the "output" matrix (i.e., B) does not degrade the performance compared to the full model. We use embeddings with d = 256 dimensions and k = d/2 subquantizers. We do not use any text-specific tricks, which leads to a compression factor of 8. Note that even if the output matrix is not retrained over the embeddings, the performance is only 0.2% away from the full model. As shown in the Appendix, using fewer subquantizers significantly decreases the performance for a small memory gain.
| | full | Entropy pruning (2M) | Entropy pruning (1M) | Norm pruning (2M) | Norm pruning (1M) | Max-Cover pruning (2M) | Max-Cover pruning (1M) |
|---|---|---|---|---|---|---|---|
| #embeddings | 11.5M | 2M | 1M | 2M | 1M | 2M | 1M |
| Memory | 12 GiB | 297 MiB | 174 MiB | 305 MiB | 179 MiB | 305 MiB | 179 MiB |
| Coverage [%] | 88.4 | 73.2 | 70.5 | 70.5 | 61.9 | 88.4 | 88.4 |
| Accuracy | 45.4 | 32.1 | 30.5 | 41.6 | 35.8 | 45.5 | 43.9 |

Table 4: FlickrTag: comparison of entropy pruning, norm pruning and max-cover pruning methods. We show the coverage of the test set for each method.
Pruning. Table 4 shows how the performance evolves with pruning. We measure this effect on top of a fully quantized model. The full model misses 11.6% of the test set because of missing words (some documents are either only composed of hashtags or have only rare words). There are 312,116 labels, and thus it seems reasonable to keep embeddings on the order of the million. A naive pruning with 1M features misses about 30-40% of the test set, leading to a significant drop of performance. On the other hand, even though the max-coverage pruning approach was set on the train set, it does not suffer from any coverage loss on the test set. This leads to a smaller drop of performance. If the pruning is too aggressive, however, the coverage decreases significantly.
# 5 FUTURE WORK
It may be possible to obtain further reductions of the model size in the future. One idea is to condition the size of the vectors (both for the input features and the labels) on their frequency (Chen et al., 2015; Grave et al., 2016). For example, it is probably not worth representing the rare labels by full 256-dimensional vectors in the case of the FlickrTag dataset. Thus, conditioning the vector size on the frequency and norm seems like an interesting direction to explore.
We may also consider combining the entropy and norm pruning criteria: instead of keeping the features in the model based just on the frequency or the norm, we can use both to keep a good set of features. This could help to keep features that are both frequent and discriminative, and thereby to reduce the coverage problem that we have observed.
Additionally, instead of pruning out the less useful features, we can decompose them into smaller units (Mikolov et al., 2012). For example, this can be achieved by splitting every non-discriminative word into a sequence of character trigrams. This could help in cases where training and test examples are very short (for example just a single word).
# 6 CONCLUSION
In this paper, we have presented several simple techniques to reduce, by several orders of magnitude, the memory complexity of certain text classifiers without sacrificing accuracy or speed. This is achieved by applying discriminative pruning, which aims to keep only the important features in the trained model, and by performing quantization of the weight matrices and hashing of the dictionary.

We will publish the code as an extension of the fastText library. We hope that our work will serve as a baseline for the research community, where there is an increasing interest in comparing the performance of various deep learning text classifiers for a given number of parameters. Overall, compared to recent work based on convolutional neural networks, fastText.zip is often more accurate, while requiring several orders of magnitude less time to train on common CPUs, and incurring a fraction of the memory complexity.
# REFERENCES
Alekh Agarwal, Olivier Chapelle, Miroslav Dud´ık, and John Langford. A reliable effective terascale linear learning system. Journal of Machine Learning Research, 15(1):1111â1133, 2014.
Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends®) in Machine Learning, 4(1):1-106, 2012.
Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Stream- ing submodular maximization: Massive data summarization on the ï¬y. In SIGKDD, pp. 671â680. ACM, 2014.
Mohammad Hossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Sub- modular secretary problem and extensions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 39â52. Springer, 2010.
Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pp. 380â 388, May 2002.
Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. arXiv preprint arXiv:1512.04906, 2015.
Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In International Conference on World Wide Web, 2010.
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
M. Datar, N. Immorlica, P. Indyk, and V.S. Mirrokni. Locality-sensitive hashing scheme based on p- stable distributions. In Proceedings of the Symposium on Computational Geometry, pp. 253â262, 2004.
Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 1990.
Misha Denil, Babak Shakibi, Laurent Dinh, Marc-Aurelio Ranzato, and Nando et all de Freitas. Predicting parameters in deep learning. In NIPS, pp. 2148â2156, 2013.
Uriel Feige. A threshold of ln n for approximating set cover. JACM, 45(4):634â652, 1998.
Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximate nearest neighbor search. In CVPR, June 2013.
Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In CVPR, June 2011.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional net- works using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Edouard Grave, Armand Joulin, Moustapha Ciss´e, David Grangier, and Herv´e J´egou. Efï¬cient softmax approximation for gpus. arXiv preprint arXiv:1609.04309, 2016.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016.
Herv´e J´egou, Matthijs Douze, and Cordelia Schmid. Hamming embedding and weak geometric consistency for large scale image search. In ECCV, October 2008.
Herv´e Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Trans. PAMI, January 2011.
Thorsten Joachims. Text categorization with support vector machines: Learning with many relevant features. Springer, 1998.
9
# Under review as a conference paper at ICLR 2017
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efï¬cient text classiï¬cation. arXiv preprint arXiv:1607.01759, 2016.
Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. NIPS, 2:598â605, 1990.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
Andrew McCallum and Kamal Nigam. A comparison of event models for naive bayes text classiï¬- cation. In AAAI workshop on learning for text categorization, 1998.
Lukas Meier, Sara Van De Geer, and Peter B¨uhlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):53â71, 2008.
Tomas Mikolov. Statistical language models based on neural networks. In PhD thesis. VUT Brno, 2012.
Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J Cernocky. Subword language modeling with neural networks. preprint, 2012.
Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search. In ICML, pp. 1926â1934, 2015.
Mohammad Norouzi and David Fleet. Cartesian k-means. In CVPR, June 2013.
Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and trends in infor- mation retrieval, 2008.
Alexandre Sablayrolles, Matthijs Douze, Herv´e J´egou, and Nicolas Usunier. How should we evalu- ate supervised hashing? arXiv preprint arXiv:1609.06753, 2016.
Jorge S´anchez and Florent Perronnin. High-dimensional signature compression for large-scale im- age classiï¬cation. In CVPR, 2011.
Anshumali Shrivastava and Ping Li. Asymmetric LSH for sublinear time maximum inner product search. In NIPS, pp. 2321â2329, 2014.
Andreas Stolcke. Entropy-based pruning of backoff language models. arXiv preprint cs/0006025, 2000.
David Talbot and Thorsten Brants. Randomized language models via perfect hash functions. In ACL, 2008.
Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. In Communica- tions of the ACM, 2016.
Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.
Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data - A survey. CoRR, abs/1509.05472, 2015.
Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classiï¬cation. In ACL, 2012.
Kilian Q Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, 2009.
Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS, December 2009.
Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016.

Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
# APPENDIX
In the appendix, we show some additional results. The model used in these experiments only had 1M ngram buckets. In Table 5, we show a thorough comparison of LSH, PQ and OPQ on 8 different datasets. Table 7 summarizes the comparison with CNNs in terms of accuracy and size. Table 8 shows a thorough comparison of the hashing trick and the Bloom filters.
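For readers unfamiliar with the abbreviations, the sketch below illustrates the idea behind the PQ entries in these tables: product quantization splits each embedding vector into sub-vectors and stores, for each sub-vector, only the index of its nearest codebook centroid. This is a minimal illustrative sketch (using scikit-learn's KMeans and arbitrary toy shapes), not the implementation that produced these numbers.

import numpy as np
from sklearn.cluster import KMeans

def pq_train(X, num_subvectors=2, num_centroids=256):
    # Split each row of X into sub-vectors and learn one codebook per block.
    n, d = X.shape
    block = d // num_subvectors
    codebooks, codes = [], []
    for b in range(num_subvectors):
        sub = X[:, b * block:(b + 1) * block]
        km = KMeans(n_clusters=num_centroids, n_init=4).fit(sub)
        codebooks.append(km.cluster_centers_)
        codes.append(km.labels_.astype(np.uint8))  # one byte per sub-vector
    return codebooks, np.stack(codes, axis=1)

def pq_decode(codebooks, codes):
    # Reconstruct an approximation of X from the stored codes.
    return np.hstack([codebooks[b][codes[:, b]] for b in range(codes.shape[1])])

# Toy usage: compress an embedding matrix down to 2 bytes per row.
X = np.random.randn(10000, 8).astype(np.float32)
books, codes = pq_train(X, num_subvectors=2, num_centroids=256)
X_hat = pq_decode(books, codes)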
(each cell: accuracy, model size)

Quant.        k  norm  AG         Amz. f.    Amz. p.    DBP        Sogou      Yah.       Yelp f.    Yelp p.
full          -  -     92.1 36M   59.8 97M   94.5 104M  98.4 67M   96.3 47M   72.2 120M  63.7 56M   95.7 53M
full, nodict  -  -     92.1 34M   59.9 78M   94.5 83M   98.4 56M   96.3 42M   72 91M     63.6 48M   95.6 46M
LSH           8        88.7 8.5M  51.3 20M   90.3 21M   92.7 14M   94.2 11M   54.8 23M   56.7 12M   92.2 12M
PQ            8        91.7 8.5M  59.3 20M   94.4 21M   97.4 14M   96.1 11M   71.3 23M   62.8 12M   95.4 12M
OPQ           8        91.9 8.5M  59.3 20M   94.4 21M   96.9 14M   95.8 11M   71.4 23M   62.5 12M   95.4 12M
LSH           8  x     91.9 9.5M  59.4 22M   94.5 24M   97.8 16M   96.2 12M   71.6 26M   63.4 14M   95.6 13M
PQ            8  x     92.0 9.5M  59.8 22M   94.5 24M   98.4 16M   96.3 12M   72.1 26M   63.7 14M   95.6 13M
OPQ           8  x     92.1 9.5M  59.9 22M   94.5 24M   98.4 16M   96.3 12M   72.2 26M   63.6 14M   95.6 13M
LSH           4        88.3 4.3M  50.5 9.7M  88.9 11M   91.6 7.0M  94.3 5.3M  54.6 12M   56.5 6.0M  92.9 5.7M
PQ            4        91.6 4.3M  59.2 9.7M  94.4 11M   96.3 7.0M  96.1 5.3M  71.0 12M   62.2 6.0M  95.4 5.7M
OPQ           4        91.7 4.3M  59.0 9.7M  94.4 11M   96.9 7.0M  95.6 5.3M  71.2 12M   62.6 6.0M  95.4 5.7M
LSH           4  x     92.1 5.3M  59.2 13M   94.4 13M   97.7 8.8M  96.2 6.6M  71.1 15M   63.1 7.4M  95.5 7.2M
PQ            4  x     92.1 5.3M  59.8 13M   94.5 13M   98.4 8.8M  96.3 6.6M  72.0 15M   63.6 7.5M  95.6 7.2M
OPQ           4  x     92.2 5.3M  59.8 13M   94.5 13M   98.3 8.8M  96.3 6.6M  72.1 15M   63.7 7.5M  95.6 7.2M
LSH           2        87.7 2.2M  50.1 4.9M  88.9 5.2M  90.6 3.5M  93.9 2.7M  51.4 5.7M  56.6 3.0M  91.3 2.9M
PQ            2        91.1 2.2M  58.7 4.9M  94.4 5.2M  87.1 3.6M  95.3 2.7M  69.5 5.7M  62.1 3.0M  95.4 2.9M
OPQ           2        91.4 2.2M  58.2 4.9M  94.3 5.2M  91.6 3.6M  94.2 2.7M  69.6 5.7M  62.1 3.0M  95.4 2.9M
LSH           2  x     91.8 3.2M  58.6 7.3M  94.3 7.8M  97.1 5.3M  96.1 4.0M  69.7 8.6M  62.7 4.5M  95.5 4.3M
PQ            2  x     91.9 3.2M  59.6 7.3M  94.5 7.8M  98.1 5.3M  96.3 4.0M  71.3 8.6M  63.4 4.5M  95.6 4.3M
OPQ           2  x     92.1 3.2M  59.5 7.3M  94.5 7.8M  98.1 5.3M  96.2 4.0M  71.5 8.6M  63.4 4.5M  95.6 4.3M
Table 5: Comparison between standard quantization methods. The original model has a dimensionality of 8 and 2M buckets. Note that all of the methods are without dictionary.
(each cell: accuracy, model size)

k, co               AG         Amz. f.    Amz. p.    DBP        Sogou      Yah.       Yelp f.    Yelp p.
full, nodict        92.1 34M   59.8 78M   94.5 83M   98.4 56M   96.3 42M   72.2 91M   63.7 48M   95.6 46M
full, k = 8         92.0 9.5M  59.8 22M   94.5 24M   98.4 16M   96.3 12M   72.1 26M   63.7 14M   95.6 13M
full, k = 4         92.1 5.3M  59.8 13M   94.5 13M   98.4 8.8M  96.3 6.6M  72 15M     63.6 7.5M  95.6 7.2M
full, k = 2         91.9 3.2M  59.6 7.3M  94.5 7.8M  98.1 5.3M  96.3 4.0M  71.3 8.6M  63.4 4.5M  95.6 4.3M
k = 8, co = 200K    92.0 2.5M  59.7 2.5M  94.3 2.5M  98.5 2.5M  96.6 2.5M  71.8 2.5M  63.3 2.5M  95.6 2.5M
k = 8, co = 100K    91.9 1.3M  59.5 1.3M  94.3 1.3M  98.5 1.3M  96.6 1.3M  71.6 1.3M  63.4 1.3M  95.6 1.3M
k = 8, co = 50K     91.7 645K  59.7 645K  94.3 644K  98.5 645K  96.6 645K  71.5 645K  63.2 645K  95.6 644K
k = 8, co = 10K     91.3 137K  58.6 137K  93.2 137K  98.5 137K  96.5 137K  71.3 137K  63.3 137K  95.4 137K
k = 4, co = 200K    92.0 1.8M  59.7 1.8M  94.3 1.8M  98.5 1.8M  96.6 1.8M  71.7 1.8M  63.3 1.8M  95.6 1.8M
k = 4, co = 100K    91.9 889K  59.5 889K  94.4 889K  98.5 889K  96.6 889K  71.7 889K  63.4 889K  95.6 889K
k = 4, co = 50K     91.7 449K  59.6 449K  94.3 449K  98.5 450K  96.6 449K  71.4 450K  63.2 449K  95.5 449K
k = 4, co = 10K     91.5 98K   58.6 98K   93.2 98K   98.5 98K   96.5 98K   71.2 98K   63.3 98K   95.4 98K
k = 2, co = 200K    91.9 1.4M  59.6 1.4M  94.3 1.4M  98.4 1.4M  96.5 1.4M  71.5 1.4M  63.2 1.4M  95.5 1.4M
k = 2, co = 100K    91.6 693K  59.5 693K  94.3 693K  98.4 694K  96.6 693K  71.1 694K  63.2 693K  95.6 693K
k = 2, co = 50K     91.6 352K  59.6 352K  94.3 352K  98.4 352K  96.5 352K  71.1 352K  63.2 352K  95.6 352K
k = 2, co = 10K     91.3 78K   58.5 78K   93.2 78K   98.4 78K   96.5 79K   70.8 78K   63.2 78K   95.3 78K
Table 6: Comparison with different quantization and level of pruning. 'co' is the cut-off parameter of the pruning.
(each cell: accuracy, model size)

Dataset   Zhang et al. (2015)   Xiao & Cho (2016)   fastText (+PQ)
AG        90.2  108M            91.4  80M           91.9  889K
Amz. f.   59.5  10.8M           59.2  1.6M          59.6  449K
Amz. p.   94.5  10.8M           94.1  1.6M          94.3  449K
DBP       98.3  108M            98.6  1.2M          98.5  98K
Sogou     95.1  108M            95.2  1.6M          96.5  98K
Yah.      70.5  108M            71.4  80M           71.7  889K
Yelp f.   61.6  108M            61.8  1.4M          63.3  98K
Yelp p.   94.8  108M            94.5  1.2M          95.5  449K
Table 7: Comparison between CNNs and fastText with and without quantization. The numbers for Zhang et al. (2015) are reported from Xiao & Cho (2016). Note that for the CNNs, we report the size of the model under the assumption that they use float32 storage. For fastText (+PQ) we report the memory used in RAM at test time.
(each cell: accuracy, model size)

Quant.        Bloom  co    AG         Amz. f.    Amz. p.    DBP        Sogou      Yah.       Yelp f.    Yelp p.
full, nodict  -      -     92.1 34M   59.8 78M   94.5 83M   98.4 56M   96.3 42M   72.2 91M   63.7 48M   95.6 46M
NPQ                  200K  91.9 1.4M  59.6 1.4M  94.3 1.4M  98.4 1.4M  96.5 1.4M  71.5 1.4M  63.2 1.4M  95.5 1.4M
NPQ           x      200K  92.2 830K  59.3 830K  94.1 830K  98.4 830K  96.5 830K  70.7 830K  63.0 830K  95.5 830K
NPQ                  100K  91.6 693K  59.5 693K  94.3 693K  98.4 694K  96.6 693K  71.1 694K  63.2 693K  95.6 693K
NPQ           x      100K  91.8 420K  59.1 420K  93.9 420K  98.4 420K  96.5 420K  70.6 420K  62.8 420K  95.3 420K
NPQ                  50K   91.6 352K  59.6 352K  94.3 352K  98.4 352K  96.5 352K  71.1 352K  63.2 352K  95.6 352K
NPQ           x      50K   91.5 215K  58.8 215K  93.6 215K  98.3 215K  96.5 215K  70.1 215K  62.7 215K  95.1 215K
NPQ                  10K   91.3 78K   58.5 78K   93.2 78K   98.4 78K   96.5 79K   70.8 78K   63.2 78K   95.3 78K
NPQ           x      10K   90.8 51K   56.8 51K   91.7 51K   98.1 51K   96.1 51K   68.7 51K   61.7 51K   94.5 51K
Table 8: Comparison with and without Bloom filters. For NPQ, we set d = 8 and k = 2.
Model                  k    norm  retrain  Acc.  Size
full                   -    -     -        45.4  12G
Input                  128                 45.0  1.7G
Input                  128  x              45.3  1.8G
Input                  128  x     x        45.5  1.8G
Input+Output           128  x              45.2  1.5G
Input+Output           128  x     x        45.4  1.5G
Input+Output, co=2M    128  x     x        45.5  305M
Input+Output, co=1M    128  x     x        43.9  179M
Input                  64                  44.0  1.1G
Input                  64   x              44.7  1.1G
Input                  64   x     x        44.9  1.1G
Input+Output           64   x              44.6  784M
Input+Output           64   x     x        44.8  784M
Input+Output, co=2M    64   x              42.5  183M
Input+Output, co=1M    64   x              39.9  118M
Input+Output, co=2M    64   x     x        45.0  183M
Input+Output, co=1M    64   x     x        43.4  118M
Table 9: FlickrTag: Comparison for a large dataset of (i) different quantization methods and parameters, (ii) with or without re-training.
| {
"id": "1510.03009"
} |
1612.03801 | DeepMind Lab | DeepMind Lab is a first-person 3D game platform designed for research and
development of general artificial intelligence and machine learning systems.
DeepMind Lab can be used to study how autonomous artificial agents may learn
complex tasks in large, partially observed, and visually diverse worlds.
DeepMind Lab has a simple and flexible API enabling creative task-designs and
novel AI-designs to be explored and quickly iterated upon. It is powered by a
fast and widely recognised game engine, and tailored for effective use by the
research community. | http://arxiv.org/pdf/1612.03801 | Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, Stig Petersen | cs.AI | 11 pages, 8 figures | null | cs.AI | 20161212 | 20161213 |

arXiv:1612.03801v2 [cs.AI] 13 Dec 2016
# DeepMind Lab
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg and Stig Petersen
November 8, 2021
# Abstract
DeepMind Lab is a first-person 3D game platform designed for research and development of general artificial intelligence and machine learning systems. DeepMind Lab can be used to study how autonomous artificial agents may learn complex tasks in large, partially observed, and visually diverse worlds. DeepMind Lab has a simple and flexible API enabling creative task-designs and novel AI-designs to be explored and quickly iterated upon. It is powered by a fast and widely recognised game engine, and tailored for effective use by the research community.
# Introduction
General intelligence measures an agent's ability to achieve goals in a wide range of environments (Legg and Hutter, 2007). The only known examples of general-purpose intelligence arose from a combination of evolution, development, and learning, grounded in the physics of the real world and the sensory apparatus of animals. An unknown, but potentially large, fraction of animal and human intelligence is a direct consequence of the perceptual and physical richness of our environment, and is unlikely to arise without it (e.g. Locke, 1690; Hume, 1739). One option is to directly study embodied intelligence in the real world itself using robots (e.g. Brooks, 1990; Metta et al., 2008). However, progress on that front will always be hindered by the too-slow passing of real time and the expense of the physical hardware involved. Realistic virtual worlds on the other hand, if they are sufficiently detailed, can get the best of both, combining perceptual and physical near-realism with the speed and flexibility of software.

Previous efforts to construct realistic virtual worlds as platforms for AI research have been stymied by the considerable engineering involved. To fill the gap, we present DeepMind Lab. DeepMind Lab is a first-person 3D game platform built on top of id software's Quake III Arena (id software, 1999) engine. The world is rendered with rich science fiction-style visuals. Actions are to look around and move in 3D. Example tasks include navigation in mazes, collecting fruit, traversing dangerous passages and avoiding falling off cliffs, bouncing through space using launch pads to move between platforms, laser tag, quickly learning and remembering random procedurally generated environments, and tasks inspired by Neuroscience experiments. DeepMind Lab is already a major research platform within DeepMind. In particular,
it has been used to develop asynchronous methods for reinforcement learning (Mnih et al., 2016), unsupervised auxiliary tasks (Jaderberg et al., 2016), and to study navigation (Mirowski et al., 2016).
DeepMind Lab may be compared to other game-based AI research platforms emphasising pixels-to-actions autonomous learning agents. The Arcade Learning Environment (Atari) (Bellemare et al., 2012), which we have used extensively at DeepMind, is neither 3D nor first-person. Among 3D platforms for AI research, DeepMind Lab is comparable to others like VizDoom (Kempka et al., 2016) and Minecraft (Johnson et al., 2016; Tessler et al., 2016). However, it pushes the envelope beyond what is possible in those platforms. In comparison, DeepMind Lab has considerably richer visuals and more naturalistic physics. The action space allows for fine-grained pointing in a fully 3D world. Compared to VizDoom, DeepMind Lab is more removed from its origin in a first-person shooter genre video game. This work is different and complementary to other recent projects which run as plugins to access internal content in the Unreal engine (Qiu and Yuille, 2016; Lerer et al., 2016). Any of these systems can be used to generate static datasets for computer vision as described e.g., in Mahendran et al. (2016); Richter et al. (2016).

Artificial general intelligence (AGI) research in DeepMind Lab emphasises 3D vision from raw pixel inputs, first-person (egocentric) viewpoints, fine motor dexterity, navigation, planning, strategy, time, and fully autonomous agents that must learn for themselves what tasks to perform by exploration of their environment. All these factors make learning difficult. Each is considered a frontier research question on its own. Putting them all together in one platform, as we have, is a significant challenge for the field.
# DeepMind Lab Research Platform
DeepMind Lab is built on top of id software's Quake III Arena (id software, 1999) engine using the ioquake3 (Nussel et al., 2016) version of the codebase, which is actively maintained by enthusiasts in the open source community. DeepMind Lab also includes tools from q3map2 (GtkRadiant, 2016) and bspc (bspc, 2016) for level generation. The bot scripts are based on code from the OpenArena (OpenArena, 2016) project.
# Tailored for machine learning
A custom set of assets was created to give the platform a unique and stylised look and feel, with a focus on rich visuals tailored for machine learning.
A reinforcement learning API has been built on top of the game engine, providing agents with complex observations and accepting a rich set of actions.
The interaction with the platform is lock-stepped, with the engine stepped forward one simulation step (or multiple with repeated actions, if desired) at a time, according to a user-specified frame rate. Thus, the game is effectively paused after an observation is provided until an agent provides the next action(s) to take.
# Observations
At each step, the engine provides reward, pixel-based observations and, optionally, velocity information (figure 1):
Figure 1: Observations available to the agent. In our experience, reward and pixels are sufficient to train an agent, whereas depth and velocity information can be useful for further analysis.
Figure 2: The action space includes movement in three dimensions and look direction around two axes.
1. The reward signal is a scalar value that is effectively the score of each level.

2. The platform provides access to the raw pixels as rendered by the game engine from the player's first-person perspective, formatted as RGB pixels. There is also an RGBD format, which additionally exposes per-pixel depth values, mimicking the range sensors used in robotics and biological stereo-vision.

3. For certain research applications the agent's translational and angular velocities may be useful. These are exposed as two separate three-dimensional vectors.
# Actions
Agents can provide multiple simultaneous actions to control movement (forward/back, strafe left/right, crouch, jump), looking (up/down, left/right) and tagging (in laser tag levels with opponent bots), as illustrated in figure 2.
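As a hedged illustration, the seven-component integer action vector used by the Python API (see figure 4 below) can be assembled with a small helper like the one in this sketch. The component ordering and value ranges here are assumptions chosen to match the action set just described, not the engine's documented layout.

import numpy as np

def make_action(look_lr=0, look_ud=0, strafe=0, move=0, tag=0, jump=0, crouch=0):
    # Ordering of components is illustrative only; consult the platform
    # documentation for the actual action specification.
    return np.array([look_lr, look_ud, strafe, move, tag, jump, crouch],
                    dtype=np.intc)

# Example: run forward while turning slightly to the left.
action = make_action(look_lr=-20, move=1)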
# Example levels
Figures 7 and 8 show a gallery of screen shots from the first-person perspective of the agent. The levels can be divided into four categories:

1. Simple fruit gathering levels with a static map (seekavoid_arena_01 and stairway_to_melon). The goal of these levels is to collect apples (small positive reward) and melons (large positive reward) while avoiding lemons (small negative reward).

2. Navigation levels with a static map layout (nav_maze_static_0{1, 2, 3} and nav_maze_random_goal_0{1, 2, 3}). These levels test the agent's ability to find their way to a goal in a fixed maze that remains the same across episodes. The starting location is random. In the random goal variant, the location of the goal changes in every episode. The optimal policy is to find the goal's location at the start of each episode and then use long-term knowledge of the maze layout to return to it as quickly as possible from any location. The static variant is simpler in that the goal location is always fixed for all episodes and only the agent's starting location changes so the optimal policy does not require the first step of exploring to find the current goal location. The specific layouts are shown in figure 3.

3. Procedurally-generated navigation levels requiring effective exploration of a new maze generated on-the-fly at the start of each episode (random_maze). These levels test the agent's ability to explore a totally new environment. The optimal policy would begin by exploring the maze to rapidly learn its layout and then exploit that knowledge to repeatedly return to the goal as many times as possible before the end of the episode (three minutes).

4. Laser-tag levels requiring agents to wield laser-like science fiction gadgets to tag bots controlled by the game's in-built AI (lt_horseshoe_color, lt_chasm, lt_hallway_slope, and lt_space_bounce_hard). A reward of 1 is delivered whenever the agent tags a bot by reducing its shield to 0. These levels approximate the usual gameplay from Quake III Arena. In lt_hallway_slope there is a sloped arena, requiring the agent to look up and down. In lt_chasm and lt_space_bounce_hard there are pits that the agent must jump over and avoid falling into. In lt_horseshoe_color and lt_space_bounce_hard, the colours and textures of the bots are randomly generated at the start of each episode. This prevents agents from relying on colour for bot detection. These levels test aspects of fine-control (for aiming), planning (to anticipate where bots are likely to move), strategy (to control key areas of the map such as gadget spawn points), and robustness to the substantial visual complexity arising from the large numbers of independently moving objects (gadget projectiles and bots).
# Technical Details
The original game engine is written in C and, to ensure compatibility with future changes to the engine, it has only been modified where necessary. DeepMind Lab provides a simple C API and ships with Python bindings.
Figure 3: Top-down views of static maze levels. Left: nav_maze_static_01, middle: nav_maze_static_02 and right: nav_maze_static_03.
The platform includes an extensive level API, written in Lua, to allow custom level creation and mechanics. This approach has resulted in a highly flexible platform with minimal changes to the original game engine.
DeepMind Lab supports Linux and has been tested on several major distributions.
# API for agents and humans
The engine can be run either in a window, or it can be run headless for higher performance and support for non-windowed environments like a remote terminal. Rendering uses OpenGL and can make use of either a GPU or a software renderer. A DeepMind Lab instance is initialised with the user's settings for level name, screen resolution and frame rate. After initialisation a simple RL-style API is followed to interact with the environment, as per figure 4.
import deepmind_lab
import numpy as np

# Construct and start the environment.
lab = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLACED'])
lab.reset()

# Create all-zeros vector for actions.
action = np.zeros([7], dtype=np.intc)

# Advance the environment 4 frames while executing the action.
reward = lab.step(action, num_steps=4)

# Retrieve the observations of the environment in its new state.
obs = lab.observations()
rgb_i = obs['RGB_INTERLACED']
assert rgb_i.shape == (240, 320, 3)
# Figure 4: Python API example.
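Extending the snippet in figure 4, a complete multi-episode interaction loop might look like the sketch below. The random-action policy and the is_running() episode check are assumptions about loop structure made for illustration; consult the platform documentation for the definitive API.

import deepmind_lab
import numpy as np

lab = deepmind_lab.Lab('nav_maze_static_01', ['RGB_INTERLACED'])

for episode in range(5):
    lab.reset()
    total_reward = 0.0
    while lab.is_running():
        # Sample a random action; see the earlier note on the action layout.
        action = np.random.randint(-1, 2, size=7).astype(np.intc)
        total_reward += lab.step(action, num_steps=4)
    print('episode', episode, 'reward', total_reward)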
# Level generation
Levels for DeepMind Lab are Quake III Arena levels. They are packaged into .pk3 files (which are ZIP files) and consist of a number of components, including level geometry, navigation information and textures.

DeepMind Lab includes tools to generate maps from .map files. These can be cumbersome to edit by hand, but a variety of level editors are freely available, e.g.
GtkRadiant (GtkRadiant, 2016). In addition to built-in and user-provided levels, the platform offers Text Levels, which are simple, human-readable text files, to specify walls, spawn points and other game mechanics as shown in the example in figure 5. Refer to figure 6 for a render of the generated level.

map = [[
************** *******
* *** *
** *** I
***** *
***** *** *******
*****
***** ******
****** H *******
* I P *
**************
]]

Figure 5: Example text level specification, where '*' is a wall piece, 'P' is a spawn point and 'H' and 'I' are doors.

Figure 6: A level with the layout generated from the text in figure 5.
In the Lua-based level API each level can be customised further with logic for bots, item pickups, custom observations, level restarts, reward schemes, in-game messages and many other aspects.
# Results and Performance
Tables 1 and 2 show the platform's performance at different resolutions for two typical levels included with the platform. The frame rates listed were computed by connecting an agent performing random actions via the Python API. This agent has insignificant overhead so the results are dominated by engine simulation and rendering times.
The benchmarks were run on a Linux desktop with a 6-core Intel Xeon 3.50GHz CPU and an NVIDIA Quadro K600 GPU.
84 × 84: 199.7    160 × 120: 86.8    320 × 240: 27.3
Table 1: Frame rate (frames/second) on nav_maze_static_01 level.
84 × 84: 286.7    160 × 120: 237.7    320 × 240: 82.2
Table 2: Frame rate (frames/second) on lt_space_bounce_hard level.
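A rough reproduction of this benchmark is sketched below. The 'width'/'height' config keys and the string-valued config dictionary are assumptions about the constructor made for illustration only, as is the use of is_running() to restart finished episodes.

import time
import numpy as np
import deepmind_lab

def benchmark(level, width, height, steps=1000):
    # Config values are assumed to be passed as strings.
    lab = deepmind_lab.Lab(level, ['RGB_INTERLACED'],
                           config={'width': str(width), 'height': str(height)})
    lab.reset()
    action = np.zeros([7], dtype=np.intc)
    start = time.time()
    for _ in range(steps):
        if not lab.is_running():
            lab.reset()
        lab.step(action, num_steps=1)
    return steps / (time.time() - start)

print(benchmark('nav_maze_static_01', 84, 84))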
Machine learning results from early versions of the DeepMind Lab platform can be found in Mnih et al. (2016); Jaderberg et al. (2016); Mirowski et al. (2016).
# Conclusion
DeepMind Lab enables research in a 3D world with rich science fiction visuals and game-like physics. DeepMind Lab facilitates creative task development. A wide range of environments, tasks, and intelligence tests can be built with it. We are excited to see what the research community comes up with.
# Acknowledgements
This work would not have been possible without the support of DeepMind and our many colleagues there who have helped mature the platform. In particular we would like to thank Thomas Köppe, Hado van Hasselt, Volodymyr Mnih, Dharshan Kumaran, Timothy Lillicrap, Raia Hadsell, Andrea Banino, Piotr Mirowski, Antonio Garcia, Timo Ewalds, Colin Murdoch, Chris Apps, Andreas Fidjeland, Max Jaderberg, Wojtek Czarnecki, Georg Ostrovski, Audrunas Gruslys, David Reichert, Tim Harley and Hubert Soyer.
# References
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Rodney A Brooks. Elephants don't play chess. Robotics and Autonomous Systems, 6(1):3–15, 1990.
bspc. bspc, 2016. URL https://github.com/TTimo/bspc.
GtkRadiant. Gtkradiant, 2016. URL http://icculus.org/gtkradiant/.
David Hume. Treatise on human nature. 1739.
id software. Quake3, 1999. URL https://github.com/id-Software/ Quake-III-Arena.
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016.

Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), 2016.

Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.

Shane Legg and Marcus Hutter. Universal intelligence: A definition of machine intelligence. Minds and Machines, 17(4):391–444, 2007.
Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. arXiv preprint arXiv:1603.01312, 2016.
John Locke. An essay concerning human understanding. 1690.
A Mahendran, H Bilen, JF Henriques, and A Vedaldi. Researchdoom and cocodoom: Learning computer vision with games. arXiv preprint arXiv:1610.02431, 2016.
Giorgio Metta, Giulio Sandini, David Vernon, Lorenzo Natale, and Francesco Nori. The iCub humanoid robot: an open platform for research in embodied cognition. In Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems, pages 50–56. ACM, 2008.
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
Ludwig Nussel, Thilo Schulz, Tim Angus, Tony J White, and Zachary J Slater. ioquake3, 2016. URL https://github.com/ioquake/ioq3.
OpenArena. The openarena project, 2016. URL http://www.openarena.ws.
Weichao Qiu and Alan Yuille. Unrealcv: Connecting computer vision to unreal engine. arXiv preprint arXiv:1609.01326, 2016.
Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In European Conference on Computer Vision, pages 102–118. Springer, 2016.
Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. arXiv preprint arXiv:1604.07255, 2016.
lt_chasm, lt_hallway_slope, lt_space_bounce_hard, nav_maze*01
Figure 7: Example images from the agent's egocentric viewpoint from several example DeepMind Lab levels.
nav_maze*02 nav_maze*03 stairway_to_melon
Figure 8: Example images from the agent's egocentric viewpoint from several example DeepMind Lab levels.
| {
"id": "1605.02097"
} |
1612.03969 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 |

arXiv:1612.03969v3 [cs.CL] 10 May 2017
# TRACKING THE WORLD STATE WITH RECURRENT ENTITY NETWORKS
# Mikael Henaff1,2, Jason Weston1, Arthur Szlam1, Antoine Bordes1 and Yann LeCun1,2
1Facebook AI Research 2Courant Institute, New York University {mbh305}@nyu.edu, {jase,aszlam,abordes,yann}@fb.com
# ABSTRACT
We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children's Book Test, where it obtains competitive performance, reading the story in a single pass.
# INTRODUCTION
The essence of intelligence is the ability to predict. An intelligent agent must be able to predict unobserved facts about their environment from limited percepts (visual, auditory, textual, or otherwise), combined with their knowledge of the past. In order to reason and plan, they must be able to predict how an observed event or action will affect the state of the world. Arguably, the ability to maintain an estimate of the current state of the world, combined with a forward model of how the world evolves, is a key feature of intelligent agents.

A natural way for an agent to represent the world is to maintain a set of high-level concepts or entities together with their properties, which are updated as new information is received. For example, if a percept is the textual description of an event, such as "John walks out of the kitchen", the agent should learn to update its estimate of John's location, as well as the list (and number) of people present in each room. If John was carrying a bag, the location of the bag and the list of objects in the kitchen must also be updated. When we read a story, each sentence we read or hear causes us to update our internal representation of the current state of the world within the story. The flow of the story is captured by the evolution of this state of the world.

At any given time, an agent typically receives limited information about the state of the world, and should therefore be able to infer new information through partial observation. In this paper, we investigate this problem through a simple story understanding scenario, in which the agent is given a sequence of textual statements and events, and then given another series of statements about the final state of the world. If the second series of statements is given in the form of questions about the final state of the world together with their correct answers, the agent should be able to learn from them and its performance can be measured by the accuracy of its answers.
Even with this weak form of supervision, the system may learn basic dynamical constraints about the world. For example, it may learn that a person or object cannot be in two locations at the same time, or may learn simple update rules such as incrementing and decrementing the number of persons or objects in a room. It may also learn basic rules of approximate (logical) inference, such as the fact that objects belonging to the same category tend to have similar properties (light objects can be carried over from rooms to rooms for instance).
We propose to handle this scenario with a new kind of memory-augmented neural network that uses a distributed memory and processor architecture: the Recurrent Entity Network (EntNet). The model consists of a fixed number of dynamic memory cells, each containing a vector key wj and a vector value (or content) hj. Each cell is associated with its own "processor", a simple gated recurrent network that may update the cell value given an input. If each cell learns to represent a concept or entity in the world, one can imagine a gating mechanism that, based on the key and content of the memory cells, will only modify the cells that concern the entities mentioned in the input. In the current version of the model, there is no direct interaction between the memory cells, hence the system can be seen as multiple identical processors functioning in parallel, with distributed local memory. Alternatively, the EntNet can be seen as a bank of gated RNNs (all sharing the same parameters), whose hidden states correspond to latent concepts and attributes, and whose parameters describe the laws of the world according to which the attributes of objects are updated. The sharing of these parameters reflects an invariance of these laws across object instances, similarly to how the weight tying scheme in a CNN reflects an invariance of image statistics across locations. Their hidden state is updated only when new information relevant to their concept is received, and remains otherwise unchanged. The keys used in the addressing/gating mechanism also correspond to concepts or entities, but are modified only during learning, not during inference.

The EntNet is able to solve all 20 bAbI question-answering tasks (Weston et al., 2015), a popular benchmark of story understanding, which to our knowledge sets a new state-of-the-art. Our experiments also indicate that the model indeed maintains an internal representation of the simplified world in which the stories take place, and that the model does not limit itself to storing the aspects of the world required to answer a specific question. We also introduce a new reasoning task which, unlike the bAbI tasks, requires a model to use a large number of supporting facts to answer the question, and show that the EntNet outperforms both LSTMs and Memory Networks (Sukhbaatar et al., 2015) by a significant margin. It is also able to generalize to sequences longer than those seen during training. Finally, our model also obtains competitive results on the Children's Book Test (Hill et al., 2016), and performs best among models that read the text in a single pass before receiving knowledge of the question.
# 2 MODEL
Our model is designed to process data in sequential form, and consists of three main parts: an input encoder, a dynamic memory and an output layer, which we now describe in detail. We developed it in the context of question answering on short stories where the inputs are word sequences, but the model could be adapted to many other contexts.
2.1 INPUT ENCODER
The encoding layer summarizes an element of the input sequence with a vector of fixed length. Typically the input element at time t is a sequence of words, e.g. a sentence or window of words. One is free to choose the encoding module to be any standard sequence encoder, which is an active area of research. Typical choices include a bag-of-words (BoW) representation or the final state of a recurrent neural net (RNN) run over the sequence. In this work, we use a simple encoder consisting of a learned multiplicative mask followed by a summation. More precisely, let the input at time t be a sequence of words with embeddings {e1, ..., ek}. The vector representation of this input is then:

$$s_t = \sum_i f_i \odot e_i \qquad (1)$$
The same set of vectors {f1, ..., fk} are used at each time step and are learned jointly with the other parameters of the model.
Figure 1: Diagram of the Recurrent Entity Network's dynamic memory. Update equations 1 and 2 are represented by the module fθ, where θ is the set of trainable parameters. Equations 3 and 4 are represented by the gate, since they fulfill a similar function.
Note that the model can choose to adopt a standard BoW representation by setting all weights in the multiplicative mask to 1, or can choose a positional encoding model as used in (Sukhbaatar et al., 2015).
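As a small illustrative sketch (not the authors' code), equation (1) amounts to an elementwise mask followed by a sum over word positions:

import numpy as np

def encode(embeddings, mask):
    # embeddings: (k, d) word vectors for one sentence or window.
    # mask: (k, d) learned multiplicative mask f_1, ..., f_k.
    return (mask * embeddings).sum(axis=0)  # s_t in equation (1)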
2.2 DYNAMIC MEMORY
The dynamic memory is a gated recurrent network with a (partially) block structured weight tying scheme. We divide the hidden states of the network into blocks h1, ..., hm; the full hidden state is the concatenation of the hj. In the experiments below, m is of the order of 5 to 20, and each block hj is of the order of 20 to 100 units.
At each time step t, the content of each hidden state hj (which we will call the jth memory) is updated using a set of key vectors {wj} and the encoded input st. In its most general form, the update equations of our model are given by:
$$g_j \leftarrow \sigma\left(s_t^\top h_j + s_t^\top w_j\right) \qquad (2)$$

$$\tilde{h}_j \leftarrow \phi\left(U h_j + V w_j + W s_t\right) \qquad (3)$$

$$h_j \leftarrow h_j + g_j \odot \tilde{h}_j \qquad (4)$$

$$h_j \leftarrow \frac{h_j}{\|h_j\|} \qquad (5)$$
Here σ represents a sigmoid, gj is a gating function which determines how much the jth memory should be updated, and h̃j is the new candidate value of the memory to be combined with the existing memory hj. The function φ can be chosen from any number of activation functions; in our experiments we use either parametric ReLU non-linearities (He et al., 2015) or the identity. The matrices U, V, W are typically trainable parameters of the model, and are shared between all the blocks. They can also be fixed to certain values, such as the identity or zero, to yield a simpler model which we use in some of our experiments.
The gating function gj contains two terms: a "content" term s_t^T h_j which causes the gate to open for memory slots whose content matches the input, and a "location" term s_t^T w_j which causes the gate to open for memory slots whose key matches the input. The final normalization step allows the model to forget previous information. To see this, note that since the memories lie on the unit sphere, all information is contained in their phase. Adding any vector to a given memory (other than the memory itself) will decrease the cosine distance between the original memory and the updated one. Therefore, as new information is added, old information is forgotten.
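Putting equations (2)-(5) together, one update step of the dynamic memory can be sketched as follows for a single input. This is an illustration under the assumption that φ is a plain ReLU (the paper uses a parametric ReLU or the identity), not the released implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def memory_step(H, W_keys, s, U, V, Wm, eps=1e-8):
    # H: (m, d) memory contents h_j; W_keys: (m, d) keys w_j; s: (d,) input s_t.
    g = sigmoid(H @ s + W_keys @ s)                   # eq. (2): content + location gate
    H_cand = np.maximum(0, H @ U.T + W_keys @ V.T + (Wm @ s))  # eq. (3), phi = ReLU
    H = H + g[:, None] * H_cand                       # eq. (4): gated additive update
    H = H / (np.linalg.norm(H, axis=1, keepdims=True) + eps)   # eq. (5): normalize
    return H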
2.3 OUTPUT MODULE
Whenever the model is required to produce an output, it is presented with a query vector q. Specifically, the output is computed using the following equations:

$$p_j = \mathrm{Softmax}(q^\top h_j), \qquad u = \sum_j p_j h_j, \qquad y = R\,\phi(q + Hu) \qquad (6)$$
The matrices H and R are additional trainable parameters of the model. The output module can be viewed as a one-hop Memory Network (Sukhbaatar et al., 2015) with an additional non-linearity φ between the internal state and the decoder matrix. If the memory slots correspond to specific words (as we will describe in the following section) which contain the answer, p can be viewed as a distribution over potential answers and can be used to make a prediction directly or fed into a loss function, removing the need for the last two steps.
The entire model (all three components described above) is trained via backpropagation through time, receiving gradients from any time steps where the reader is required to produce an output, which are then propagated through the unrolled network.
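Equation (6) admits a similarly short sketch (again assuming φ is a plain ReLU for illustration; not the authors' code):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def answer(H, q, R, Hmat):
    # H: (m, d) final memories; q: (d,) query embedding.
    p = softmax(H @ q)                       # attention over memory slots
    u = p @ H                                # weighted read, shape (d,)
    return R @ np.maximum(0, q + Hmat @ u)   # eq. (6): y = R phi(q + H u)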
# 3 MOTIVATING EXAMPLE OF OPERATION
We now describe a motivating example of how our model can perform reasoning on-the-fly as it is ingesting input sequences. Let us suppose our model is reading a story, so the inputs are natural language sentences, and then it is required to answer questions about the story it has just read.

Our model is free to learn the key vectors wj for each memory j. One choice the model could make is to associate a single memory (via the key) with each entity in the story. The memory slot corresponding to a person could encode that person's location, the objects they are carrying, or the people they are with, depending on what information is relevant for the task at hand. As new information is received indicating that objects are acquired or discarded, or the person changes location, their memory slot will change accordingly. Similarly useful updates can be made for memories corresponding to object and location entities as well.

In fact, we could encode this choice of memories directly into our model, which we consider as a type of prior knowledge. By tying the weights of the key vectors with the embeddings of specific words, we can encourage the model to record information about certain words occurring in the text which we believe to be important. For example, given a list of named entities (which could be produced by a standard tagger), we could make the model have a separate memory slot for each entity. We consider this "tied" variant in our experiments. Since the list of entities is independent of the training data, this variant can handle entities not seen in the training set, as long as their embeddings can be initialized in a reasonable way (such as pre-training on a larger corpus).
Now, consider that the model reads the following two sentences, and the desired behavior of the gating function and update function at each memory as they are seen:
⢠Mary picked up the ball.
Mary went to the garden.
As the first sentence s_t is ingested, and assuming memories encode entities, we would like the gates of the memories corresponding to both "Mary" and "ball" to activate. This is possible due to the location addressing term s_t^T w_j which uses the key w_j. We expect that a well trained model would learn to do this. The model would hence modify both the entry corresponding to "Mary" to indicate that she is now carrying the ball, and also the entry corresponding to "ball", to indicate that it is being carried by Mary. When the second sentence is seen, we would like the model to again modify the "Mary" entry to indicate that she is now in the garden, and also modify the "ball" entry to reflect its new location as well. Assuming the information for "Mary" is contained in the "ball" memory as described before, the gate corresponding to "ball" can activate due to the content addressing term s_t^T h_j, even though the word "ball" does not occur in the second sentence. As before, the gate corresponding to the "Mary" entry can open due to the second term.
If the gating function and update function have weights such that the steps above are executed, then the memory will be in a state where questions such as âWhere is the ball?â or âWhere is Mary?â can be answered from the values of relevant memories, without the need for further complex reasoning.
# 4 RELATED WORK
The EntNet is related to gated recurrent models such as the LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014), which also use gates to fix or modify the information stored in the hidden state. However, these models use scalar memory cells with full interactions between them, whereas ours has separate memory slots which could be seen as groups of hidden units with tied weights in the gating and update functions. Another important difference is the content-based matching term between the input and hidden state, which is not present in these models.

Our model also shares some similarities with the DNC/NTM framework of (Graves et al., 2014; 2016). There, as in our model, a block of hidden states acts as a set of read-writeable memories. On the other hand, the DNC has a relatively sophisticated controller network (such as an LSTM) which reads an input and outputs a number of interface vectors (such as keys and weightings) which are then combined via a softmax to read from and write to the external memory matrix. In contrast, our model can be viewed as a set of separate recurrent models whose hidden states store the memory slots. These hidden states are either fixed by the gates, or modified through a simple RNN-style update. The bulk of the reasoning is thus performed by these parallel recurrent models, rather than through a central controller. Moreover, instead of using a softmax, our model uses an independent gate for writing to each memory.

Our model is similar to a Memory Network and its variants (Weston et al., 2014; Sukhbaatar et al., 2015; Chandar et al., 2016; Miller et al., 2016) in the way it produces an output using a softmax over blocks of hidden states, and our encoding layer is inspired by techniques used in those works. However, Memory Networks explicitly store the entire input sequence in memory, and then sequentially update a controller's hidden state via a softmax gating over the memories. In contrast, our model keeps a fixed number of blocks of hiddens as memories and updates each block with an independent gated RNN. The Dynamic Memory Network of (Xiong et al., 2016) also performs updates via a recurrent model, however it links memories to input tokens and updates them sequentially rather than in parallel.
The weight tying scheme and the parallel gated RNNs recall the gated graph network of (Li et al., 2015). If we interpret our work in that context, the âgraphâ is just a set of vertices with no edges; our gating mechanism is also somewhat different than the one they use. The CommNN model of (Sukhbaatar et al., 2016), the Interaction Network of (?), the Neural Physics Engine of (?) and the model of (?) also use a set of parallel recurrent models with tied weights, but differ from our model in their use of inter-network communication and the lack of a gating mechanism.
Finally, there is another class of recent models that have a writeable memory arranged as (unbounded) stacks, linked lists or queues (Joulin & Mikolov, 2015; Grefenstette et al., 2015). Our model is different from these in that we use a key-value pair array instead of a stack, and in the experiments in this work, the array is of fixed size.
(a) Error on the World Model Task:

Model    T = 10  T = 20  T = 40
MemN2N   0.09    0.633   0.896
LSTM     0       0.157   0.226
EntNet   0       0       0

(b) Generalization of an EntNet trained up to T = 20:

T      20  30  40  50    60    70    80
Error  0   0   0   0.01  0.03  0.05  0.08
Table 1: a) Error of different models on the World Model Task. b) Generalization of an EntNet trained up to T = 20. All errors range from 0 to 1.
# 5 EXPERIMENTS
In this section we evaluate our model on three different datasets. Training details common to all experiments can be found in Appendix A.
5.1 SYNTHETIC WORLD MODEL TASK
We first study our model's properties on a toy task designed to measure the ability to keep a world model in memory. In this task two agents are initially placed randomly on a 10×10 grid, and at each time step a randomly chosen agent either changes direction or moves ahead. After a certain number of time steps, the model is required to provide the locations of each of the agents, thus revealing its internal world model (details can be found in Appendix B). This task is challenging because the model must combine up to T − 2 supporting facts in order to answer the question correctly, and must also keep the locations of both agents in memory and update them at different times.
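The full task specification lives in an appendix not reproduced here, but episodes in the spirit of this description can be generated with a sketch like the one below; the event encoding, the 50/50 split between turning and moving, and the wall handling via clipping are all assumptions made for illustration:

import numpy as np

def generate_episode(T, grid=10, n_agents=2, rng=np.random):
    pos = rng.randint(0, grid, size=(n_agents, 2))
    heading = rng.randint(0, 4, size=n_agents)  # 0:N, 1:E, 2:S, 3:W
    deltas = np.array([[0, 1], [1, 0], [0, -1], [-1, 0]])
    events = []
    for _ in range(T):
        i = rng.randint(n_agents)
        if rng.rand() < 0.5:
            heading[i] = rng.randint(4)
            events.append(('turn', i, int(heading[i])))
        else:
            pos[i] = np.clip(pos[i] + deltas[heading[i]], 0, grid - 1)
            events.append(('move', i))
    # Supervision: the final location of each agent.
    return events, pos.copy()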
We compared the performance of a MemN2N, LSTM and EntNet. For the MemN2N, we set the number of hops equal to T − 2 and the embedding dimension to d = 20. The EntNet had embedding dimension d = 20 and 5 memory slots, and the LSTM had 50 hidden units which resulted in it having significantly more parameters than the other two models. For each model, we repeated the experiment with 5 different initializations and reported the best performance. All models were trained with ADAM (Kingma & Ba, 2014) with initial learning rates set by grid search over {0.1, 0.01, 0.001} and divided by 2 every 10,000 updates. Table 1a shows the results. The MemN2N has the worst performance, which degrades quickly as the length of the sequence increases. The LSTM performs better, but still loses accuracy as the length of the sequence increases. In contrast, the EntNet is able to solve the task in all cases.
The ability to generalize to sequences longer than those seen during training is a desirable property, which suggests that the network has learned the dynamics of the world it is trying to model. It also means the model can be trained less expensively. To study this, we trained an EntNet on variable length sequences between 1 and 20, and evaluated it on different length sequences longer than 20. Results are shown in Table 1b. We see that the model is able to achieve good performance several times past its training horizon.
5.2 BABI TASKS
We next evaluate our model on the bAbI tasks, which are a collection of 20 synthetic question-answering datasets first introduced in (Weston et al., 2015) designed to test a wide variety of reasoning abilities. They have since become a benchmark for memory-augmented neural networks and most of the related methods described in Section 4 have been tested on them. Performance is measured using two metrics: the average error across all tasks, and the number of failed tasks (more than 5% error). We used version 1.2 of the dataset with 10k samples. 1
Training Details We used a similar training setup as (Sukhbaatar et al., 2015). All models were trained with ADAM using a learning rate of η = 0.01, which was divided by 2 every 25 epochs until 200 epochs were reached. Copying previous works (Sukhbaatar et al., 2015; Xiong et al., 2016), the capacity of the memory was limited to the most recent 70 sentences, except for task 3 which was limited to 130 sentences. Due to the high variance in model performance for some tasks, for
1Code to reproduce these experiments can be found at
https://github.com/facebook/MemNN/tree/master/EntNet-babi.
Table 2: Results on bAbI Tasks with 10k training samples.
Task                         NTM    D-NTM  MemN2N  DNC    DMN+   EntNet
1: 1 supporting fact         31.5   4.4    0       0      0      0
2: 2 supporting facts        54.5   27.5   0.3     0.4    0.3    0.1
3: 3 supporting facts        43.9   71.3   2.1     1.8    1.1    4.1
4: 2 argument relations      0      0      0       0      0      0
5: 3 argument relations      0.8    1.7    0.8     0.8    0.5    0.3
6: yes/no questions          17.1   1.5    0.1     0      0      0.2
7: counting                  17.8   6.0    2.0     0.6    2.4    0
8: lists/sets                13.8   1.7    0.9     0.3    0.0    0.5
9: simple negation           16.4   0.6    0.3     0.2    0.0    0.1
10: indefinite knowledge     16.6   19.8   0       0.2    0      0.6
11: basic coreference        15.2   0      0.0     0      0.0    0.3
12: conjunction              8.9    6.2    0       0      0.2    0
13: compound coreference     7.4    7.5    0       0      0      1.3
14: time reasoning           24.2   17.5   0.2     0.4    0.2    0
15: basic deduction          47.0   0      0       0      0      0
16: basic induction          53.6   49.6   51.8    55.1   45.3   0.2
17: positional reasoning     25.5   1.2    18.6    12.0   4.2    0.5
18: size reasoning           2.2    0.2    5.3     0.8    2.1    0.3
19: path finding             4.3    39.5   2.3     3.9    0.0    2.3
20: agent's motivation       1.5    0      0       0      0      0
Failed Tasks (> 5% error):   16     9      3       2      1      0
Mean Error:                  20.1   12.8   4.2     3.8    2.8    0.5
each task we conducted 10 runs with different initializations and picked the best model based on performance on the validation set, as it has been done in previous work. In all experiments, our model had embedding dimension size d = 100 and 20 memory slots.
In Table 2 we compare our model to various other state-of-the-art models in the literature: the larger MemN2N reported in the appendix of (Sukhbaatar et al., 2015), the Dynamic Memory Network of (Xiong et al., 2016), the Dynamic Neural Turing Machine (Gulcehre et al., 2016), the Neural Turing Machine (Graves et al., 2014) and the Differentiable Neural Computer (Graves et al., 2016). Our model is able to solve all the tasks, outperforming the other models in terms of both the number of solved tasks and the average error.
To analyze what kind of representations our model can learn, we conducted an additional experiment on Task 2 using a simple BoW sentence encoding and key vectors which were tied to entity embeddings. This was designed to make the model more interpretable, since the weight tying forces memory slots to encode information about specific entities. 2 After training, we ran the model over a story and computed the cosine distance between φ(Hhj) and each row ri of the decoder matrix R. This gave us a score which measures the affinity between a given memory slot and each word in the vocabulary. Table 3 shows the nearest neighboring words for each memory slot (which itself corresponds to an entity). We see that the model has indeed stored locations of all of the objects and characters in its memory slots which reflect the final state of the story. In particular, it has the correct answer readily stored in the memory slot of the entity being inquired about (the milk). It also has correct location information about all other non-location entities stored in the appropriate memory slots. Note that it does not store useful or correct information in the memory slots corresponding to

2 For most tasks including this one, tying key vectors did not significantly change performance, although it hurt in a few cases (see Appendix C). Therefore we did not apply it in Table 2.
Table 3: On the left, the network's final "world model" after reading the story on the right. First and second nearest neighbors from each memory slot are shown, along with their cosine distance.
Key       1-NN              2-NN
football  hallway (0.135)   dropped (0.056)
milk      garden (0.111)    took (0.011)
john      kitchen (0.501)   dropped (0.027)
mary      garden (0.442)    took (0.034)
sandra    hallway (0.394)   kitchen (0.121)
daniel    hallway (0.689)   to (0.076)
bedroom   hallway (0.367)   dropped (0.075)
kitchen   kitchen (0.483)   daniel (0.029)
garden    garden (0.281)    where (0.026)
hallway   hallway (0.475)   left (0.060)

Story: mary got the milk there. john moved to the bedroom. sandra went back to the kitchen. mary travelled to the hallway. john got the football there. john went to the hallway. john put down the football. mary went to the garden. john went to the kitchen. sandra travelled to the hallway. daniel went to the hallway. mary discarded the milk. where is the milk? answer: garden
locations, most likely because this task does not contain questions about locations (such as "who is in the kitchen?").
5.3 CHILDREN'S BOOK TEST (CBT)

We next evaluated our model on the Children's Book Test (Hill et al., 2016), which is a semantic language modeling (sentence completion) benchmark built from children's books that are freely available from Project Gutenberg 3. Models are required to read 20 consecutive sentences from a given story and use this context to fill in a missing word from the 21st sentence. More specifically, each sample consists of a tuple (S, q, C, a) where S is the story consisting of 20 sentences, q is the 21st sentence with one word replaced by a special blank token, C is a set of 10 candidate answers of the same type as the missing word (for example, common nouns or named entities), and a is the true answer (which is always contained in C).

It was shown in (Hill et al., 2016) that methods with limited memory such as LSTMs perform well on more frequent, syntax based words such as prepositions and verbs, being similar to human performance, but poorly relative to humans on more semantically meaningful words such as named entities and common nouns. Therefore, most recent methods have been evaluated on the Named Entity and Common Noun subtasks, since they better test the ability of a model to make use of wider contextual information.
Training Details We adopted the same window memory approach used in (Hill et al., 2016), where each input corresponds to a window of text {w_{i−(b−1)/2}, ..., w_i, ..., w_{i+(b−1)/2}} centered at a candidate w_i ∈ C. In our experiments we set b = 5. All models were trained using standard stochastic gradient descent (SGD) with a fixed learning rate of 0.001. We used separate input encodings for the update and gating functions, and applied a dropout rate of 0.5 to the word embedding dimensions. Key embeddings were tied to the embeddings of the candidate words, resulting in 10 hidden blocks, one per member of C. Due to the weight tying, we did not need a decoder matrix and used the distribution over candidates to directly produce a prediction, as described in Section 3.
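The window construction can be made concrete with a short sketch; pre-tokenized input and the rule that every candidate occurrence yields one window are assumptions made for illustration:

def candidate_windows(tokens, candidates, b=5):
    # Return one window of b tokens centered on each candidate occurrence.
    half = (b - 1) // 2
    windows = []
    for i, w in enumerate(tokens):
        if w in candidates:
            lo, hi = max(0, i - half), min(len(tokens), i + half + 1)
            windows.append((w, tokens[lo:hi]))
    return windows

# Example: windows for candidates in a toy story.
story = "mary took the milk to the garden and daniel took the ball".split()
print(candidate_windows(story, {"milk", "ball"}))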
We found that a simpler version of the model worked best, with U = V = 0, W = I and φ equal to the identity. We also removed the normalization step in this simplified model, since we found it hurt performance. This can be explained by the fact that the maximum frequency baseline model in (Hill et al., 2016) has performance which is significantly higher than random, and including the normalization step hides this useful frequency-based information.

Results We draw a distinction between two setups: the single-pass setup, where the model must read the story and query in order and immediately produce an output, and the multi-pass setup, where the model can use the query to perform attention over the story. The first setup is more challenging
3 www.gutenberg.org
Table 4: Accuracy on CBT test set. Single-pass models encode the document before seeing the query; multi-pass models have access to the query at read time.

Model                                               Named Entities   Common Nouns
Single Pass
  Kneser-Ney Language Model + cache                      0.439           0.577
  LSTMs (context + query)                                0.418           0.560
  Window LSTM                                            0.436           0.582
  EntNet (general)                                       0.484           0.540
  EntNet (simple)                                        0.616           0.588
Multi Pass
  MemNN                                                  0.493           0.554
  MemNN + self-sup.                                      0.666           0.630
  Attention Sum Reader (Kadlec et al., 2016)             0.686           0.634
  Gated-Attention Reader (Dhingra et al., 2016)          0.690           0.639
  EpiReader (Trischler et al., 2016)                     0.697           0.674
  AoA Reader (Cui et al., 2016)                          0.720           0.694
  NSE Adaptive Computation (Munkhdalai & Yu, 2016)       0.732           0.714
because the model does not know beforehand which query it will be presented with, and must learn to retain information which is useful for a wide variety of potential queries. For this reason it can be viewed as a test of the model's ability to construct a general-purpose representation of the current state of the story. The second setup leverages all available information, and allows the model to use knowledge of which question will be asked when it reads the story.
In Table 4, we show the performance of the general EntNet, the simplified EntNet, and other single-pass models taken from (Hill et al., 2016). The general EntNet performs better than the LSTMs and the n-gram model on the Named Entities task, but lags behind on the Common Nouns task. The simplified EntNet outperforms all other single-pass models on both tasks, and also performs better than the Memory Network which does not use the self-supervision heuristic. However, there is still a performance gap compared to more sophisticated machine comprehension models, many of which perform multiple layers of attention over the story using query knowledge. The fact that the simplified EntNet obtains decent performance is encouraging, since it indicates that the model is able to build an internal representation of the story which it can then use to answer a relatively diverse set of queries.
# 6 CONCLUSION
Two closely related challenges in artificial intelligence are designing models which can maintain an estimate of the state of a world with complex dynamics over long timescales, and models which can predict the forward evolution of the state of the world from partial observation. In this paper, we introduced the Recurrent Entity Network, a new model that makes a promising step towards the first goal. Our model is able to accurately track the world state while reading text stories, which enables it to set a new state of the art on the bAbI tasks, the competitive benchmark of story understanding, by being the first model to solve them all. We also showed that our model is able to capture simple dynamics over long timescales, and is able to perform competitively on a real-world dataset.
Although our model was able to solve all the bAbI tasks using 10k training samples, we found that performance dropped considerably when using only 1k samples (see Appendix). Most recent work on the bAbI tasks has focused on the 10k-sample setting, and we would like to emphasize that solving them in the 1k-sample setting remains an open problem which will require improving the sample efficiency of reasoning models, including ours.
Recent works have made some progress towards the second goal of forward modeling, for instance in capturing simple physics (Lerer et al., 2016), predicting future frames in video (Mathieu et al., 2015) or responses in dialog (Weston, 2016). Although we have only applied our model to tasks
with textual inputs in this work, the architecture is general and future work should investigate how to combine the EntNet's tracking abilities with such predictive models.
# REFERENCES
Dhingra, Bhuwan, Liu, Hanxiao, Cohen, William, and Salakhutdinov, Ruslan. Gated-attention readers for text comprehension. CoRR, abs/1606.01549, 2016. URL http://arxiv.org/abs/1606.01549.
Chandar, Sarath, Ahn, Sungjin, Larochelle, Hugo, Vincent, Pascal, Tesauro, Gerald, and Bengio, Yoshua. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.
Cho, Kyunghyun, van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014, pp. 103-111, 2014. URL http://aclweb.org/anthology/W/W14/W14-4012.pdf.
Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clément. Torch7: A matlab-like environment for machine learning, 2011.
Cui, Yiming, Chen, Zhipeng, Wei, Si, Wang, Shijin, Liu, Ting, and Hu, Guoping. Attention-over-attention neural networks for reading comprehension. CoRR, abs/1607.04423, 2016. URL http://arxiv.org/abs/1607.04423.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/abs/1410.5401.
Graves, Alex, Wayne, Greg, Reynolds, Malcolm, Harley, Tim, Danihelka, Ivo, Grabska-Barwińska, Agnieszka, Colmenarejo, Sergio Gómez, Grefenstette, Edward, Ramalho, Tiago, Agapiou, John, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.
Grefenstette, Edward, Hermann, Karl Moritz, Suleyman, Mustafa, and Blunsom, Phil. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828-1836, 2015.
Gulcehre, Caglar, Chandar, Sarath, Cho, Kyunghyun, and Bengio, Yoshua. Dynamic neural Turing machines with soft and hard addressing schemes. CoRR, abs/1607.00036, 2016. URL http://arxiv.org/abs/1607.00036.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. CoRR, abs/1502.01852, 2015.
Hill, Felix, Bordes, Antoine, Chopra, Sumit, and Weston, Jason. The goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of the International Conference on Learning Representations, 2016.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Comput., 9(8):1735-1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.
Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. arXiv preprint arXiv:1503.01007, 2015.
Kadlec, Rudolf, Schmid, Martin, Bajgar, Ondrej, and Kleindienst, Jan. Text understanding with the attention sum reader network. CoRR, abs/1603.01547, 2016. URL http://arxiv.org/abs/1603.01547.
Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.
Lerer, Adam, Gross, Sam, and Fergus, Rob. Learning physical intuition of block towers by example. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 430-438, 2016. URL http://jmlr.org/proceedings/papers/v48/lerer16.html.
Li, Yujia, Tarlow, Daniel, Brockschmidt, Marc, and Zemel, Richard S. Gated graph sequence neural networks. CoRR, abs/1511.05493, 2015. URL http://arxiv.org/abs/1511.05493.
Mathieu, Michaël, Couprie, Camille, and LeCun, Yann. Deep multi-scale video prediction beyond mean square error. CoRR, abs/1511.05440, 2015. URL http://arxiv.org/abs/1511.05440.
Miller, Alexander, Fisch, Adam, Dodge, Jesse, Karimi, Amir-Hossein, Bordes, Antoine, and Weston, Jason. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.
Munkhdalai, Tsendsuren and Yu, Hong. Reasoning with memory augmented neural networks for language comprehension. CoRR, abs/1610.06454, 2016. URL https://arxiv.org/abs/1610.06454.
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-to-end memory networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28, 2015. URL http://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf.
Sukhbaatar, Sainbayar, Szlam, Arthur, and Fergus, Rob. Learning multiagent communication with backpropagation. CoRR, abs/1605.07736, 2016. URL http://arxiv.org/abs/1605.07736.
Trischler, Adam, Ye, Zheng, Yuan, Xingdi, and Suleman, Kaheer. Natural language comprehension with the EpiReader. CoRR, abs/1606.02270, 2016. URL http://arxiv.org/abs/1606.02270.
Weston, Jason. Dialog-based language learning. CoRR, abs/1604.06045, 2016. URL http://arxiv.org/abs/1604.06045.
Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916.
Weston, Jason, Bordes, Antoine, Chopra, Sumit, and Mikolov, Tomas. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698, 2015. URL http://arxiv.org/abs/1502.05698.
Xiong, Caiming, Merity, Stephen, and Socher, Richard. Dynamic memory networks for visual and textual question answering. In ICML, 2016.
# A TRAINING DETAILS
All models were implemented using Torch (Collobert et al., 2011). In all experiments, we initialized our model by drawing weights from a Gaussian distribution with mean zero and standard deviation 0.1, except for the PReLU slopes and encoder weights, which were initialized to 1. Note that the PReLU initialization is related to two of the heuristics used in (Sukhbaatar et al., 2015), namely starting training with a purely linear model, and adding non-linearities to half of the hidden units. Our initialization allows the model to choose when and how much to enter the non-linear regime. Initializing the encoder weights to 1 corresponds to beginning with a BoW encoding, which the model can then choose to modify. The initial values of the memory slots were set to the key values, which we found to help performance. Optimization was done with SGD or ADAM using minibatches of size 32, and gradients with norm greater than 40 were clipped to 40. A null symbol whose embedding was constrained to be zero was used to pad all sentences or windows to a fixed size.
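A sketch of this initialization scheme (parameter names and the multiplicative-mask encoder are assumptions based on the description above, not the authors' code):

```python
import numpy as np

def init_entnet_params(d, vocab_size, max_len, seed=0):
    """Gaussian(0, 0.1) weights; PReLU slopes and encoder weights set to 1,
    so training starts from a linear model with a BoW-style encoder."""
    rng = np.random.default_rng(seed)
    return {
        "embeddings": rng.normal(0.0, 0.1, size=(vocab_size, d)),
        "U": rng.normal(0.0, 0.1, size=(d, d)),
        "V": rng.normal(0.0, 0.1, size=(d, d)),
        "W": rng.normal(0.0, 0.1, size=(d, d)),
        "prelu_slope": np.ones(d),              # phi starts as the identity
        "encoder_mask": np.ones((max_len, d)),  # starts as a BoW encoder
    }

params = init_entnet_params(d=100, vocab_size=200, max_len=10)
print({k: v.shape for k, v in params.items()})
```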
# B DETAILS OF WORLD MODEL EXPERIMENTS
Two agents are initially placed at random on a 10 × 10 grid with 100 distinct locations {(1, 1), (1, 2), ..., (9, 10), (10, 10)}. At each time step an agent is chosen at random. There are two types of actions: the agent can face a given direction, or can move a number of steps ahead. Actions are sampled until a legal action is found, choosing with equal probability between changing direction and moving. If the agent changes direction, the direction is chosen between north, south, east and west with equal probability. If it moves, the number of steps is chosen uniformly between 1 and 5. A legal action is one which does not place the agent off the grid. Stories are given to the network in textual form, an example of which is below; a generation sketch follows the example. The first action after each agent is placed on the grid is to face a given direction; therefore, the maximum number of actions made by one agent is T - 2. The network learns word embeddings for all words in the vocabulary, such as locations, agent identifiers and actions. At question time, the model must predict the correct answer (which is always a location) from all the tokens in the vocabulary.
agent1 is at (2,8)
agent1 faces-N
agent2 is at (9,7)
agent2 faces-N
agent2 moves-2
agent2 faces-E
agent2 moves-1
agent1 moves-1
agent2 faces-S
agent2 moves-5
Q1: where is agent1 ?   A1: (2,9)
Q2: where is agent2 ?   A2: (10,4)
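A hypothetical generator for such stories, following the sampling procedure described above (the exact output format is illustrative):

```python
import random

DIRS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def generate_story(n_steps=8, size=10, rng=random.Random(0)):
    """Sketch of the two-agent grid world described above."""
    pos = {a: (rng.randint(1, size), rng.randint(1, size))
           for a in ("agent1", "agent2")}
    facing, lines = {}, []
    for a in ("agent1", "agent2"):
        lines.append(f"{a} is at {pos[a]}")
        facing[a] = rng.choice(list(DIRS))  # first action: face a direction
        lines.append(f"{a} faces-{facing[a]}")
    for _ in range(n_steps):
        a = rng.choice(("agent1", "agent2"))
        while True:  # resample until the action is legal (stays on the grid)
            if rng.random() < 0.5:
                facing[a] = rng.choice(list(DIRS))
                lines.append(f"{a} faces-{facing[a]}")
                break
            k = rng.randint(1, 5)
            dx, dy = DIRS[facing[a]]
            x, y = pos[a][0] + k * dx, pos[a][1] + k * dy
            if 1 <= x <= size and 1 <= y <= size:
                pos[a] = (x, y)
                lines.append(f"{a} moves-{k}")
                break
    lines += [f"Q: where is {a} ? A: {pos[a]}" for a in ("agent1", "agent2")]
    return "\n".join(lines)

print(generate_story())
```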
# C ADDITIONAL RESULTS ON BABI TASKS
We provide some additional experiments on the bAbI tasks, in order to better understand the influence of architecture, weight tying, and amount of training data. Table 5 shows results when a simple BoW encoding is used for the inputs. Here, the EntNet still performs better than a MemN2N which uses the same encoding scheme, indicating that the architecture has an important effect. Tying the key vectors to entities did not help, and hurt performance on some tasks. Table 6 shows results when using only 1k training samples. In this setting, the EntNet performs worse than the MemN2N.
Table 7 shows results for the EntNet and the DNC when models are trained on all tasks jointly. We report results for the mean performance across different random seeds (20 for the DNC, 5 for the EntNet), as well as the performance for the single best seed (measured by validation error). The DNC results for mean performance were taken from the appendix of Graves et al. (2016). The DNC has better performance in terms of the best seed, but also exhibits high variation across seeds, indicating that many different runs are required to achieve good performance. The EntNet exhibits less variation across runs and is able to solve more tasks consistently. Note that Table 2 reports DNC results with joint training, since results when training on each task separately were not available.
Table 5: Error rates on bAbI tasks with inputs encoded using BoW. "Tied" refers to the case where key vectors are tied with entity embeddings.
Task                          MemN2N   EntNet-tied   EntNet
1: 1 supporting fact             0         0           0
2: 2 supporting facts            0.6       3.0         1.2
3: 3 supporting facts            7         9.6         9.0
4: 2 argument relations         32.6      33.8        31.8
5: 3 argument relations         10.2       1.7         3.5
6: yes/no questions              0.2       0           0
7: counting                     10.6       0.5         0.5
8: lists/sets                    2.6       0.1         0.3
9: simple negation               0.3       0           0
10: indefinite knowledge         0.5       0           0
11: basic coreference            0         0.3         0
12: conjunction                  0         0           0
13: compound coreference         0         0.2         0.4
14: time reasoning               0.1       6.2         0.1
15: basic deduction             11.4      12.5        12.1
16: basic induction             52.9      46.5         0
17: positional reasoning        39.3      40.5        40.5
18: size reasoning              40.5      44.2        45.7
19: path finding                74.4      75.1        74.0
20: agent's motivation           0         0           0

Failed Tasks (> 5%):             9         8           6
Mean Error:                     15.6      13.7        10.9
Table 6: Results on bAbI Tasks with 1k samples.
Task                          MemN2N   EntNet
1: 1 supporting fact             0        0.7
2: 2 supporting facts            8.3     56.4
3: 3 supporting facts           40.3     69.7
4: 2 argument relations          2.8      1.4
5: 3 argument relations         13.1      4.6
6: yes/no questions              7.6     30.0
7: counting                     17.3     22.3
8: lists/sets                   10.0     19.2
9: simple negation              13.2     31.5
10: indefinite knowledge        15.1     15.6
11: basic coreference            0.9      8.0
12: conjunction                  0.2      0.8
13: compound coreference         0.4      9.0
14: time reasoning               1.7     62.9
15: basic deduction              0       57.8
16: basic induction              1.3     53.2
17: positional reasoning        51.0     46.4
18: size reasoning              11.1      8.8
19: path finding                82.8     90.4
20: agent's motivation           0        2.6

Failed Tasks (> 5%):            11       15
Mean Error:                     13.9     29.6
Table 7: Results on bAbI Tasks with 10k samples and joint training on all tasks.
Task                          DNC (all seeds)   EntNet (all seeds)   DNC (best seed)   EntNet (best seed)
1: 1 supporting fact             9.0 ± 12.6         0 ± 0.1              0                 0.1
2: 2 supporting facts           39.2 ± 20.5        15.3 ± 15.7           0.4               2.8
3: 3 supporting facts           39.6 ± 16.4        29.3 ± 26.3           1.8              10.6
4: 2 argument relations          0.4 ± 0.7          0.1 ± 0.1            0                 0
5: 3 argument relations          1.5 ± 1.0          0.4 ± 0.3            0.8               0.4
6: yes/no questions              6.9 ± 7.5          0.6 ± 0.8            0                 0.3
7: counting                      9.8 ± 7.0          1.8 ± 1.1            0.6               0.8
8: lists/sets                    5.5 ± 5.9          1.5 ± 1.2            0.3               0.1
9: simple negation               7.7 ± 8.3          0 ± 0.1              0.2               0
10: indefinite knowledge         9.6 ± 11.4         0.1 ± 0.2            0.2               0
11: basic coreference            3.3 ± 5.7          0.2 ± 0.2            0                 0
12: conjunction                  5.0 ± 6.3          0 ± 0                0                 0
13: compound coreference         3.1 ± 3.6          0 ± 0.1              0                 0
14: time reasoning              11.0 ± 7.5          7.3 ± 4.5            0.4               3.6
15: basic deduction             27.2 ± 20.1         3.6 ± 8.1            0                 0
16: basic induction             53.6 ± 1.9         53.3 ± 1.2           55.1              52.1
17: positional reasoning        32.4 ± 8.0          8.8 ± 3.8           12.0              11.7
18: size reasoning               4.2 ± 1.8          1.3 ± 0.9            0.8               2.1
19: path finding                64.6 ± 37.4        70.4 ± 6.1            3.9              63.0
20: agent's motivation           0.0 ± 0.1          0 ± 0                0                 0

Failed Tasks (> 5%):            11.2 ± 5.4          5 ± 1.2              2                 4
Mean Error:                     16.7 ± 7.6          9.7 ± 2.6            3.8               7.38
1612.03144 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 |

arXiv:1612.03144v2 [cs.CV] 19 Apr 2017
# Feature Pyramid Networks for Object Detection
Tsung-Yi Lin1,2, Piotr Dollár1, Ross Girshick1, Kaiming He1, Bharath Hariharan1, and Serge Belongie2
1Facebook AI Research (FAIR) 2Cornell University and Cornell Tech
# Abstract
Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 6 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.
Figure 1. (a) Using an image pyramid to build a feature pyramid. Features are computed on each of the image scales independently, which is slow. (b) Recent detection systems have opted to use only single-scale features for faster detection. (c) An alternative is to reuse the pyramidal feature hierarchy computed by a ConvNet as if it were a featurized image pyramid. (d) Our proposed Feature Pyramid Network (FPN) is fast like (b) and (c), but more accurate. In this figure, feature maps are indicated by blue outlines and thicker outlines denote semantically stronger features.
# 1. Introduction
Recognizing objects at vastly different scales is a fundamental challenge in computer vision. Feature pyramids built upon image pyramids (for short we call these featurized image pyramids) form the basis of a standard solution [1] (Fig. 1(a)). These pyramids are scale-invariant in the sense that an object's scale change is offset by shifting its level in the pyramid. Intuitively, this property enables a model to detect objects across a large range of scales by scanning the model over both positions and pyramid levels. Featurized image pyramids were heavily used in the era of hand-engineered features [5, 25]. They were so critical that object detectors like DPM [7] required dense scale sampling to achieve good results (e.g., 10 scales per octave). For recognition tasks, engineered features have
largely been replaced with features computed by deep con- volutional networks (ConvNets) [19, 20]. Aside from being capable of representing higher-level semantics, ConvNets are also more robust to variance in scale and thus facilitate recognition from features computed on a single input scale [15, 11, 29] (Fig. 1(b)). But even with this robustness, pyra- mids are still needed to get the most accurate results. All re- cent top entries in the ImageNet [33] and COCO [21] detec- tion challenges use multi-scale testing on featurized image pyramids (e.g., [16, 35]). The principle advantage of fea- turizing each level of an image pyramid is that it produces a multi-scale feature representation in which all levels are semantically strong, including the high-resolution levels.
Nevertheless, featurizing each level of an image pyra- mid has obvious limitations. Inference time increases con- siderably (e.g., by four times [11]), making this approach impractical for real applications. Moreover, training deep
networks end-to-end on an image pyramid is infeasible in terms of memory, and so, if exploited, image pyramids are used only at test time [15, 11, 16, 35], which creates an inconsistency between train/test-time inference. For these reasons, Fast and Faster R-CNN [11, 29] opt to not use fea- turized image pyramids under default settings.
However, image pyramids are not the only way to com- pute a multi-scale feature representation. A deep ConvNet computes a feature hierarchy layer by layer, and with sub- sampling layers the feature hierarchy has an inherent multi- scale, pyramidal shape. This in-network feature hierarchy produces feature maps of different spatial resolutions, but introduces large semantic gaps caused by different depths. The high-resolution maps have low-level features that harm their representational capacity for object recognition.
The Single Shot Detector (SSD) [22] is one of the first attempts at using a ConvNet's pyramidal feature hierarchy as if it were a featurized image pyramid (Fig. 1(c)). Ideally, the SSD-style pyramid would reuse the multi-scale feature maps from different layers computed in the forward pass and thus come free of cost. But to avoid using low-level features SSD foregoes reusing already computed layers and instead builds the pyramid starting from high up in the network (e.g., conv4_3 of VGG nets [36]) and then by adding several new layers. Thus it misses the opportunity to reuse the higher-resolution maps of the feature hierarchy. We show that these are important for detecting small objects.
The goal of this paper is to naturally leverage the pyramidal shape of a ConvNet's feature hierarchy while creating a feature pyramid that has strong semantics at all scales. To achieve this goal, we rely on an architecture that combines low-resolution, semantically strong features with high-resolution, semantically weak features via a top-down pathway and lateral connections (Fig. 1(d)). The result is a feature pyramid that has rich semantics at all levels and is built quickly from a single input image scale. In other words, we show how to create in-network feature pyramids that can be used to replace featurized image pyramids without sacrificing representational power, speed, or memory.
Similar architectures adopting top-down and skip con- nections are popular in recent research [28, 17, 8, 26]. Their goals are to produce a single high-level feature map of a ï¬ne resolution on which the predictions are to be made (Fig. 2 top). On the contrary, our method leverages the architecture as a feature pyramid where predictions (e.g., object detec- tions) are independently made on each level (Fig. 2 bottom). Our model echoes a featurized image pyramid, which has not been explored in these works.
We evaluate our method, called a Feature Pyramid Net- work (FPN), in various systems for detection and segmen- tation [11, 29, 27]. Without bells and whistles, we re- port a state-of-the-art single-model result on the challenging COCO detection benchmark [21] simply based on FPN and
Figure 2. Top: a top-down architecture with skip connections, where predictions are made on the finest level (e.g., [28]). Bottom: our model that has a similar structure but leverages it as a feature pyramid, with predictions made independently at all levels.
a basic Faster R-CNN detector [29], surpassing all exist- ing heavily-engineered single-model entries of competition winners. In ablation experiments, we ï¬nd that for bound- ing box proposals, FPN signiï¬cantly increases the Average Recall (AR) by 8.0 points; for object detection, it improves the COCO-style Average Precision (AP) by 2.3 points and PASCAL-style AP by 3.8 points, over a strong single-scale baseline of Faster R-CNN on ResNets [16]. Our method is also easily extended to mask proposals and improves both instance segmentation AR and speed over state-of-the-art methods that heavily depend on image pyramids.
In addition, our pyramid structure can be trained end-to- end with all scales and is used consistently at train/test time, which would be memory-infeasible using image pyramids. As a result, FPNs are able to achieve higher accuracy than all existing state-of-the-art methods. Moreover, this im- provement is achieved without increasing testing time over the single-scale baseline. We believe these advances will facilitate future research and applications. Our code will be made publicly available.
# 2. Related Work
Hand-engineered features and early neural networks. SIFT features [25] were originally extracted at scale-space extrema and used for feature point matching. HOG features [5], and later SIFT features as well, were computed densely over entire image pyramids. These HOG and SIFT pyramids have been used in numerous works for image classification, object detection, human pose estimation, and more. There has also been significant interest in computing featurized image pyramids quickly. Dollár et al. [6] demonstrated fast pyramid computation by first computing a sparsely sampled (in scale) pyramid and then interpolating missing levels. Before HOG and SIFT, early work on face detection with ConvNets [38, 32] computed shallow networks over image pyramids to detect faces across scales.
Deep ConvNet object detectors. With the development of modern deep ConvNets [19], object detectors like Over- Feat [34] and R-CNN [12] showed dramatic improvements in accuracy. OverFeat adopted a strategy similar to early neural network face detectors by applying a ConvNet as a sliding window detector on an image pyramid. R-CNN adopted a region proposal-based strategy [37] in which each proposal was scale-normalized before classifying with a ConvNet. SPPnet [15] demonstrated that such region-based detectors could be applied much more efï¬ciently on fea- ture maps extracted on a single image scale. Recent and more accurate detection methods like Fast R-CNN [11] and Faster R-CNN [29] advocate using features computed from a single scale, because it offers a good trade-off between accuracy and speed. Multi-scale detection, however, still performs better, especially for small objects.
Methods using multiple layers. A number of recent ap- proaches improve detection and segmentation by using dif- ferent layers in a ConvNet. FCN [24] sums partial scores for each category over multiple scales to compute semantic segmentations. Hypercolumns [13] uses a similar method for object instance segmentation. Several other approaches (HyperNet [18], ParseNet [23], and ION [2]) concatenate features of multiple layers before computing predictions, which is equivalent to summing transformed features. SSD [22] and MS-CNN [3] predict objects at multiple layers of the feature hierarchy without combining features or scores. There are recent methods exploiting lateral/skip connec- tions that associate low-level feature maps across resolu- tions and semantic levels, including U-Net [31] and Sharp- Mask [28] for segmentation, Recombinator networks [17] for face detection, and Stacked Hourglass networks [26] for keypoint estimation. Ghiasi et al. [8] present a Lapla- cian pyramid presentation for FCNs to progressively reï¬ne segmentation. Although these methods adopt architectures with pyramidal shapes, they are unlike featurized image pyramids [5, 7, 34] where predictions are made indepen- dently at all levels, see Fig. 2. In fact, for the pyramidal architecture in Fig. 2 (top), image pyramids are still needed to recognize objects across multiple scales [28].
# 3. Feature Pyramid Networks
Our goal is to leverage a ConvNet's pyramidal feature hierarchy, which has semantics from low to high levels, and build a feature pyramid with high-level semantics throughout. The resulting Feature Pyramid Network is general-purpose and in this paper we focus on sliding window proposers (Region Proposal Network, RPN for short) [29] and region-based detectors (Fast R-CNN) [11]. We also generalize FPNs to instance segmentation proposals in Sec. 6.
Our method takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps
Figure 3. A building block illustrating the lateral connection and the top-down pathway, merged by addition.
at multiple levels, in a fully convolutional fashion. This pro- cess is independent of the backbone convolutional architec- tures (e.g., [19, 36, 16]), and in this paper we present results using ResNets [16]. The construction of our pyramid in- volves a bottom-up pathway, a top-down pathway, and lat- eral connections, as introduced in the following.
Bottom-up pathway. The bottom-up pathway is the feed- forward computation of the backbone ConvNet, which com- putes a feature hierarchy consisting of feature maps at sev- eral scales with a scaling step of 2. There are often many layers producing output maps of the same size and we say these layers are in the same network stage. For our feature pyramid, we deï¬ne one pyramid level for each stage. We choose the output of the last layer of each stage as our ref- erence set of feature maps, which we will enrich to create our pyramid. This choice is natural since the deepest layer of each stage should have the strongest features.
Speciï¬cally, for ResNets [16] we use the feature activa- tions output by each stageâs last residual block. We denote the output of these last residual blocks as {C2, C3, C4, C5} for conv2, conv3, conv4, and conv5 outputs, and note that they have strides of {4, 8, 16, 32} pixels with respect to the input image. We do not include conv1 into the pyramid due to its large memory footprint.
Top-down pathway and lateral connections. The top- down pathway hallucinates higher resolution features by upsampling spatially coarser, but semantically stronger, fea- ture maps from higher pyramid levels. These features are then enhanced with features from the bottom-up pathway via lateral connections. Each lateral connection merges fea- ture maps of the same spatial size from the bottom-up path- way and the top-down pathway. The bottom-up feature map is of lower-level semantics, but its activations are more ac- curately localized as it was subsampled fewer times.
Fig. 3 shows the building block that constructs our top-down feature maps. With a coarser-resolution feature map, we upsample the spatial resolution by a factor of 2 (using nearest neighbor upsampling for simplicity). The upsampled map is then merged with the corresponding bottom-up map (which undergoes a 1×1 convolutional layer to reduce channel dimensions) by element-wise addition. This process is iterated until the finest resolution map is generated. To start the iteration, we simply attach a 1×1 convolutional layer on C5 to produce the coarsest resolution map. Finally, we append a 3×3 convolution on each merged map to generate the final feature map, which is to reduce the aliasing effect of upsampling. This final set of feature maps is called {P2, P3, P4, P5}, corresponding to {C2, C3, C4, C5} that are respectively of the same spatial sizes.
Because all levels of the pyramid use shared classifiers/regressors as in a traditional featurized image pyramid, we fix the feature dimension (numbers of channels, denoted as d) in all the feature maps. We set d = 256 in this paper and thus all extra convolutional layers have 256-channel outputs. There are no non-linearities in these extra layers, which we have empirically found to have minor impacts.
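The following is a minimal numpy sketch of this construction, under stated assumptions: the lateral weights are random stand-ins, upsampling is nearest-neighbor, and the final 3×3 anti-aliasing convolutions are omitted for brevity.

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution as channel mixing: x (C_in, H, W), w (C_out, C_in)."""
    return np.einsum("oc,chw->ohw", w, x)

def upsample2x(x):
    """Nearest-neighbor upsampling of a (C, H, W) map by a factor of 2."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def build_fpn(C, d=256, rng=np.random.default_rng(0)):
    """Top-down pathway with lateral connections. C = [C2, C3, C4, C5],
    each map half the spatial size of the previous one."""
    lateral_w = [rng.normal(0, 0.01, (d, c.shape[0])) for c in C]
    P = [None] * len(C)
    P[-1] = conv1x1(C[-1], lateral_w[-1])   # coarsest map, from C5
    for i in range(len(C) - 2, -1, -1):     # iterate toward finer maps
        P[i] = upsample2x(P[i + 1]) + conv1x1(C[i], lateral_w[i])
    return P                                 # [P2, P3, P4, P5], all d channels

# ResNet-like channel counts {256, 512, 1024, 2048} at strides {4, 8, 16, 32}
C = [np.ones((256 * 2**i, 64 // 2**i, 64 // 2**i)) for i in range(4)]
print([p.shape for p in build_fpn(C)])
```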
Simplicity is central to our design and we have found that our model is robust to many design choices. We have exper- imented with more sophisticated blocks (e.g., using multi- layer residual blocks [16] as the connections) and observed marginally better results. Designing better connection mod- ules is not the focus of this paper, so we opt for the simple design described above.
# 4. Applications
Our method is a generic solution for building feature pyramids inside deep ConvNets. In the following we adopt our method in RPN [29] for bounding box proposal generation and in Fast R-CNN [11] for object detection. To demonstrate the simplicity and effectiveness of our method, we make minimal modifications to the original systems of [29, 11] when adapting them to our feature pyramid.
# 4.1. Feature Pyramid Networks for RPN
RPN [29] is a sliding-window class-agnostic object detector. In the original RPN design, a small subnetwork is evaluated on dense 3×3 sliding windows, on top of a single-scale convolutional feature map, performing object/non-object binary classification and bounding box regression. This is realized by a 3×3 convolutional layer followed by two sibling 1×1 convolutions for classification and regression, which we refer to as a network head. The object/non-object criterion and bounding box regression target are defined with respect to a set of reference boxes called anchors [29]. The anchors are of multiple pre-defined scales and aspect ratios in order to cover objects of different shapes.
We adapt RPN by replacing the single-scale feature map with our FPN. We attach a head of the same design (3×3 conv and two sibling 1×1 convs) to each level on our feature pyramid. Because the head slides densely over all locations in all pyramid levels, it is not necessary to have multi-scale
anchors on a specific level. Instead, we assign anchors of a single scale to each level. Formally, we define the anchors to have areas of {32², 64², 128², 256², 512²} pixels on {P2, P3, P4, P5, P6} respectively.¹ As in [29] we also use anchors of multiple aspect ratios {1:2, 1:1, 2:1} at each level. So in total there are 15 anchors over the pyramid; a short sketch of this per-level layout follows.
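As an illustrative sketch (not the authors' code), the per-level anchor shapes can be enumerated as follows; the height/width ratio convention and the final rounding are assumptions for display:

```python
import numpy as np

def level_anchors():
    """Per-level anchor (width, height) shapes: one scale per pyramid level
    and three aspect ratios, 15 anchors in total."""
    scales = {"P2": 32, "P3": 64, "P4": 128, "P5": 256, "P6": 512}
    ratios = [0.5, 1.0, 2.0]  # height/width ratios for 1:2, 1:1 and 2:1
    anchors = {}
    for level, s in scales.items():
        area = float(s * s)
        # keep the area fixed while varying the aspect ratio
        anchors[level] = [(np.sqrt(area / r), np.sqrt(area / r) * r)
                          for r in ratios]
    return anchors

for level, shapes in level_anchors().items():
    print(level, [(round(w), round(h)) for w, h in shapes])
```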
We assign training labels to the anchors based on their Intersection-over-Union (IoU) ratios with ground-truth bounding boxes as in [29]. Formally, an anchor is assigned a positive label if it has the highest IoU for a given ground- truth box or an IoU over 0.7 with any ground-truth box, and a negative label if it has IoU lower than 0.3 for all ground-truth boxes. Note that scales of ground-truth boxes are not explicitly used to assign them to the levels of the pyramid; instead, ground-truth boxes are associated with anchors, which have been assigned to pyramid levels. As such, we introduce no extra rules in addition to those in [29]. We note that the parameters of the heads are shared across all feature pyramid levels; we have also evaluated the alternative without sharing parameters and observed similar accuracy. The good performance of sharing parameters in- dicates that all levels of our pyramid share similar semantic levels. This advantage is analogous to that of using a fea- turized image pyramid, where a common head classiï¬er can be applied to features computed at any image scale.
With the above adaptations, RPN can be naturally trained and tested with our FPN, in the same fashion as in [29]. We elaborate on the implementation details in the experiments.
# 4.2. Feature Pyramid Networks for Fast R-CNN
Fast R-CNN [11] is a region-based object detector in which Region-of-Interest (RoI) pooling is used to extract features. Fast R-CNN is most commonly performed on a single-scale feature map. To use it with our FPN, we need to assign RoIs of different scales to the pyramid levels.
We view our feature pyramid as if it were produced from an image pyramid. Thus we can adapt the assignment strat- egy of region-based detectors [15, 11] in the case when they are run on image pyramids. Formally, we assign an RoI of width w and height h (on the input image to the network) to the level Pk of our feature pyramid by:
k = ⌊k0 + log2(√(wh)/224)⌋.   (1)
Here 224 is the canonical ImageNet pre-training size, and k0 is the target level on which an RoI with w × h = 224² should be mapped into. Analogous to the ResNet-based Faster R-CNN system [16] that uses C4 as the single-scale feature map, we set k0 to 4. Intuitively, Eqn. (1) means that if the RoI's scale becomes smaller (say, 1/2 of 224), it should be mapped into a finer-resolution level (say, k = 3).
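A small sketch of Eqn. (1); the clamping to the available levels is an implementation assumption, not stated in the equation itself:

```python
import math

def roi_level(w, h, k0=4, k_min=2, k_max=5):
    """Map an RoI of size w x h (in input-image pixels) to a pyramid level
    via Eqn. (1), clamped to the levels used by Fast R-CNN."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224))
    return max(k_min, min(k_max, k))

print(roi_level(224, 224))  # -> 4 (canonical ImageNet size)
print(roi_level(112, 112))  # -> 3 (half the scale maps one level finer)
```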
¹Here we introduce P6 only for covering a larger anchor scale of 512². P6 is simply a stride-two subsampling of P5. P6 is not used by the Fast R-CNN detector in the next section.
We attach predictor heads (in Fast R-CNN the heads are class-specific classifiers and bounding box regressors) to all RoIs of all levels. Again, the heads all share parameters, regardless of their levels. In [16], a ResNet's conv5 layers (a 9-layer deep subnetwork) are adopted as the head on top of the conv4 features, but our method has already harnessed conv5 to construct the feature pyramid. So unlike [16], we simply adopt RoI pooling to extract 7×7 features, and attach two hidden 1,024-d fully-connected (fc) layers (each followed by ReLU) before the final classification and bounding box regression layers. These layers are randomly initialized, as there are no pre-trained fc layers available in ResNets. Note that compared to the standard conv5 head, our 2-fc MLP head is lighter weight and faster.
Based on these adaptations, we can train and test Fast R- CNN on top of the feature pyramid. Implementation details are given in the experimental section.
# 5. Experiments on Object Detection
We perform experiments on the 80 category COCO de- tection dataset [21]. We train using the union of 80k train images and a 35k subset of val images (trainval35k [2]), and report ablations on a 5k subset of val images (minival). We also report ï¬nal results on the standard test set (test-std) [21] which has no disclosed labels.
As is common practice [12], all network backbones are pre-trained on the ImageNet1k classiï¬cation set [33] and then ï¬ne-tuned on the detection dataset. We use the pre-trained ResNet-50 and ResNet-101 models that are publicly available.2 Our code is a reimplementation of py-faster-rcnn3 using Caffe2.4
# 5.1. Region Proposal with RPN
We evaluate the COCO-style Average Recall (AR) and AR on small, medium, and large objects (ARs, ARm, and ARl) following the deï¬nitions in [21]. We report results for 100 and 1000 proposals per images (AR100 and AR1k).
Implementation details. All architectures in Table 1 are trained end-to-end. The input image is resized such that its shorter side has 800 pixels. We adopt synchronized SGD training on 8 GPUs. A mini-batch involves 2 images per GPU and 256 anchors per image. We use a weight decay of 0.0001 and a momentum of 0.9. The learning rate is 0.02 for the ï¬rst 30k mini-batches and 0.002 for the next 10k. For all RPN experiments (including baselines), we include the anchor boxes that are outside the image for training, which is unlike [29] where these anchor boxes are ignored. Other implementation details are as in [29]. Training RPN with FPN on 8 GPUs takes about 8 hours on COCO.
² https://github.com/kaiminghe/deep-residual-networks
³ https://github.com/rbgirshick/py-faster-rcnn
⁴ https://github.com/caffe2/caffe2
# 5.1.1 Ablation Experiments
Comparisons with baselines. For fair comparisons with original RPNs [29], we run two baselines (Table 1(a, b)) using the single-scale map of C4 (the same as [16]) or C5, both using the same hyper-parameters as ours, including using 5 scale anchors of {32², 64², 128², 256², 512²}. Table 1(b) shows no advantage over (a), indicating that a single higher-level feature map is not enough because there is a trade-off between coarser resolutions and stronger semantics.
Placing FPN in RPN improves AR1k to 56.3 (Table 1(c)), an 8.0-point increase over the single-scale RPN baseline (Table 1(a)). In addition, the performance on small objects (AR1k_s) is boosted by a large margin of 12.9 points. Our pyramid representation greatly improves RPN's robustness to object scale variation.
How important is top-down enrichment? Table 1(d) shows the results of our feature pyramid without the top-down pathway. With this modification, the 1×1 lateral connections followed by 3×3 convolutions are attached to the bottom-up pyramid. This architecture simulates the effect of reusing the pyramidal feature hierarchy (Fig. 1(b)).
The results in Table 1(d) are just on par with the RPN baseline and lag far behind ours. We conjecture that this is because there are large semantic gaps between different levels on the bottom-up pyramid (Fig. 1(b)), especially for very deep ResNets. We have also evaluated a variant of Ta- ble 1(d) without sharing the parameters of the heads, but observed similarly degraded performance. This issue can- not be simply remedied by level-speciï¬c heads.
How important are lateral connections? Table 1(e) shows the ablation results of a top-down feature pyramid without the 1×1 lateral connections. This top-down pyramid has strong semantic features and fine resolutions. But we argue that the locations of these features are not precise, because these maps have been downsampled and upsampled several times. More precise locations of features can be directly passed from the finer levels of the bottom-up maps via the lateral connections to the top-down maps. As a result, FPN has an AR1k score 10 points higher than Table 1(e).
How important are pyramid representations? Instead of resorting to pyramid representations, one can attach the head to the highest-resolution, strongly semantic feature maps of P2 (i.e., the finest level in our pyramids). Similar to the single-scale baselines, we assign all anchors to the P2 feature map. This variant (Table 1(f)) is better than the baseline but inferior to our approach. RPN is a sliding window detector with a fixed window size, so scanning over pyramid levels can increase its robustness to scale variance. In addition, we note that using P2 alone leads to more anchors (750k, Table 1(f)) caused by its large spatial resolution. This result suggests that a larger number of anchors is not sufficient in itself to improve accuracy.
RPN                                 feature   # anchors   lateral?   top-down?   AR100   AR1k   AR1k_s   AR1k_m   AR1k_l
(a) baseline on conv4               C4        47k                                36.1    48.3   32.0     58.7     62.2
(b) baseline on conv5               C5        12k                                36.3    44.9   25.3     55.5     64.2
(c) FPN                             {Pk}      200k        ✓          ✓           44.0    56.3   44.9     63.4     66.2
Ablation experiments follow:
(d) bottom-up pyramid               {Pk}      200k        ✓                      37.4    49.5   30.5     59.9     68.0
(e) top-down pyramid, w/o lateral   {Pk}      200k                   ✓           34.5    46.1   26.5     57.4     64.7
(f) only finest level               P2        750k        ✓          ✓           38.4    51.3   35.1     59.7     67.6

Table 1. Bounding box proposal results using RPN [29], evaluated on the COCO minival set. All models are trained on trainval35k. The columns "lateral" and "top-down" denote the presence of lateral and top-down connections, respectively. The column "feature" denotes the feature maps on which the heads are attached. All results are based on ResNet-50 and share the same hyper-parameters.
Fast R-CNN                          proposals    feature   head    lateral?   top-down?   AP@0.5   AP     APs    APm    APl
(a) baseline on conv4               RPN, {Pk}    C4        conv5                          54.7     31.9   15.7   36.5   45.5
(b) baseline on conv5               RPN, {Pk}    C5        2fc                            52.9     28.8   11.9   32.4   43.4
(c) FPN                             RPN, {Pk}    {Pk}      2fc     ✓          ✓           56.9     33.9   17.8   37.7   45.8
Ablation experiments follow:
(d) bottom-up pyramid               RPN, {Pk}    {Pk}      2fc     ✓                      44.9     24.9   10.9   24.4   38.5
(e) top-down pyramid, w/o lateral   RPN, {Pk}    {Pk}      2fc                ✓           54.0     31.3   13.3   35.2   45.3
(f) only finest level               RPN, {Pk}    P2        2fc     ✓          ✓           56.3     33.4   17.3   37.3   45.6

Table 2. Object detection results using Fast R-CNN [11] on a fixed set of proposals (RPN, {Pk}, Table 1(c)), evaluated on the COCO minival set. Models are trained on the trainval35k set. All results are based on ResNet-50 and share the same hyper-parameters.
Faster R-CNN                        proposals   feature   head    lateral?   top-down?   AP@0.5   AP     APs    APm    APl
(*) baseline from He et al. [16]    RPN, C4     C4        conv5                          47.3     26.3   -      -      -
(a) baseline on conv4               RPN, C4     C4        conv5                          53.1     31.6   13.2   35.6   47.1
(b) baseline on conv5               RPN, C5     C5        2fc                            51.7     28.0   9.6    31.9   43.1
(c) FPN                             RPN, {Pk}   {Pk}      2fc     ✓          ✓           56.9     33.9   17.8   37.7   45.8

Table 3. Object detection results using Faster R-CNN [29] evaluated on the COCO minival set. The backbone networks for RPN and Fast R-CNN are consistent. Models are trained on the trainval35k set and use ResNet-50. † Provided by authors of [16].
# 5.2. Object Detection with Fast/Faster R-CNN
Next we investigate FPN for region-based (non-sliding window) detectors. We evaluate object detection by the COCO-style Average Precision (AP) and PASCAL-style AP (at a single IoU threshold of 0.5). We also report COCO AP on objects of small, medium, and large sizes (namely, APs, APm, and APl) following the deï¬nitions in [21].
Implementation details. The input image is resized such that its shorter side has 800 pixels. Synchronized SGD is used to train the model on 8 GPUs. Each mini-batch in- volves 2 image per GPU and 512 RoIs per image. We use a weight decay of 0.0001 and a momentum of 0.9. The learning rate is 0.02 for the ï¬rst 60k mini-batches and 0.002 for the next 20k. We use 2000 RoIs per image for training and 1000 for testing. Training Fast R-CNN with FPN takes about 10 hours on the COCO dataset.
# 5.2.1 Fast R-CNN (on ï¬xed proposals)
To better investigate FPN's effects on the region-based detector alone, we conduct ablations of Fast R-CNN on a fixed set of proposals. We choose to freeze the proposals as computed by RPN on FPN (Table 1(c)), because it has good performance on small objects that are to be recognized by the detector. For simplicity we do not share features between Fast R-CNN and RPN, except when specified.
As a ResNet-based Fast R-CNN baseline, following [16], we adopt RoI pooling with an output size of 14×14 and attach all conv5 layers as the hidden layers of the head. This gives an AP of 31.9 in Table 2(a). Table 2(b) is a baseline exploiting an MLP head with 2 hidden fc layers, similar to the head in our architecture. It gets an AP of 28.8, indicating that the 2-fc head does not give us any orthogonal advantage over the baseline in Table 2(a).
Table 2(c) shows the results of our FPN in Fast R-CNN. Comparing with the baseline in Table 2(a), our method im- proves AP by 2.0 points and small object AP by 2.1 points. Comparing with the baseline that also adopts a 2fc head (Ta- ble 2(b)), our method improves AP by 5.1 points.5 These comparisons indicate that our feature pyramid is superior to single-scale features for a region-based object detector.
Table 2(d) and (e) show that removing top-down con-
5We expect a stronger architecture of the head [30] will improve upon our results, which is beyond the focus of this paper.
method                        backbone               competition   image pyramid   test-dev: AP@.5 / AP / APs / APm / APl   test-std: AP@.5 / AP / APs / APm / APl
ours, Faster R-CNN on FPN     ResNet-101             -                             59.1 / 36.2 / 18.2 / 39.0 / 48.2         58.5 / 35.8 / 17.5 / 38.7 / 47.8
Competition-winning single-model results follow:
G-RMI†                        Inception-ResNet       2016                          - / 34.7 / - / - / -                     - / - / - / - / -
AttractioNet‡ [10]            VGG16 + Wide ResNet§   2016          ✓               53.4 / 35.7 / 15.6 / 38.0 / 52.7         52.9 / 35.3 / 14.7 / 37.6 / 51.9
Faster R-CNN +++ [16]         ResNet-101             2015          ✓               55.7 / 34.9 / 15.6 / 38.7 / 50.9         - / - / - / - / -
Multipath [40] (on minival)   VGG-16                 2015                          49.6 / 31.5 / - / - / -                  - / - / - / - / -
ION‡ [2]                      VGG-16                 2015                          53.4 / 31.2 / 12.8 / 32.9 / 45.2         52.9 / 30.7 / 11.8 / 32.8 / 44.8

Table 4. Comparisons of single-model results on the COCO detection benchmark. Some results were not available on the test-std set, so we also include the test-dev results (and for Multipath [40] on minival). †: http://image-net.org/challenges/talks/2016/GRMI-COCO-slidedeck.pdf. ‡: http://mscoco.org/dataset/#detections-leaderboard. §: This entry of AttractioNet [10] adopts VGG-16 for proposals and Wide ResNet [39] for object detection, so is not strictly a single-model result.
nections or removing lateral connections leads to inferior results, similar to what we have observed in the above subsection for RPN. It is noteworthy that removing top-down connections (Table 2(d)) significantly degrades the accuracy, suggesting that Fast R-CNN suffers from using the low-level features at the high-resolution maps.
In Table 2(f), we adopt Fast R-CNN on the single finest scale feature map of P2. Its result (33.4 AP) is marginally worse than that of using all pyramid levels (33.9 AP, Table 2(c)). We argue that this is because RoI pooling is a warping-like operation, which is less sensitive to the region's scales. Despite the good accuracy of this variant, it is based on the RPN proposals of {Pk} and has thus already benefited from the pyramid representation.
# 5.2.2 Faster R-CNN (on consistent proposals)
In the above we used a fixed set of proposals to investigate the detectors. But in a Faster R-CNN system [29], the RPN and Fast R-CNN must use the same network backbone in order to make feature sharing possible. Table 3 shows the comparisons between our method and two baselines, all using consistent backbone architectures for RPN and Fast R-CNN. Table 3(a) shows our reproduction of the baseline Faster R-CNN system as described in [16]. Under controlled settings, our FPN (Table 3(c)) is better than this strong baseline by 2.3 points AP and 3.8 points AP@0.5.
Note that Table 3(a) and (b) are baselines that are much stronger than the baseline provided by He et al. [16] in Table 3(*). We find the following implementations contribute to the gap: (i) We use an image scale of 800 pixels instead of 600 in [11, 16]; (ii) We train with 512 RoIs per image which accelerate convergence, in contrast to 64 RoIs in [11, 16]; (iii) We use 5 scale anchors instead of 4 in [16] (adding 32²); (iv) At test time we use 1000 proposals per image instead of 300 in [16]. So comparing with He et al.'s ResNet-50 Faster R-CNN baseline in Table 3(*), our method improves AP by 7.6 points and AP@0.5 by 9.6 points.
share features?   ResNet-50: AP@0.5 / AP   ResNet-101: AP@0.5 / AP
no                56.9 / 33.9              58.0 / 35.0
yes               57.2 / 34.3              58.2 / 35.2

Table 5. More object detection results using Faster R-CNN and our FPNs, evaluated on minival. Sharing features increases train time by 1.5× (using 4-step training [29]), but reduces test time.
Sharing features. In the above, for simplicity we do not share the features between RPN and Fast R-CNN. In Table 5, we evaluate sharing features following the 4-step training described in [29]. Similar to [29], we find that sharing features improves accuracy by a small margin. Feature sharing also reduces the testing time.
Running time. With feature sharing, our FPN-based Faster R-CNN system has inference time of 0.148 seconds per image on a single NVIDIA M40 GPU for ResNet-50, and 0.172 seconds for ResNet-101.⁶ As a comparison, the single-scale ResNet-50 baseline in Table 3(a) runs at 0.32 seconds. Our method introduces small extra cost by the extra layers in the FPN, but has a lighter weight head. Overall our system is faster than the ResNet-based Faster R-CNN counterpart. We believe the efficiency and simplicity of our method will benefit future research and applications.
# 5.2.3 Comparing with COCO Competition Winners
We find that our ResNet-101 model in Table 5 is not sufficiently trained with the default learning rate schedule. So we increase the number of mini-batches by 2× at each learning rate when training the Fast R-CNN step. This increases AP on minival to 35.6, without sharing features. This model is the one we submitted to the COCO detection leaderboard, shown in Table 4. We have not evaluated its feature-sharing version due to limited time, which should be slightly better as implied by Table 5.
Table 4 compares our method with the single-model re- sults of the COCO competition winners, including the 2016 winner G-RMI and the 2015 winner Faster R-CNN+++. Without adding bells and whistles, our single-model entry has surpassed these strong, heavily engineered competitors.
6These runtimes are updated from an earlier version of this paper.
Figure 4. FPN for object segment proposals. The feature pyramid is constructed with identical structure as for object detection. We apply a small MLP on 5×5 windows to generate dense object segments with output dimension of 14×14. Shown in orange are the sizes of the image regions the masks correspond to for each pyramid level (levels P3-P5 are shown here). Both the corresponding image region size (light orange) and canonical object size (dark orange) are shown. Half octaves are handled by an MLP on 7×7 windows (7 ≈ 5√2), not shown here. Details are in the appendix.
On the test-dev set, our method increases over the ex- isting best results by 0.5 points of AP (36.2 vs. 35.7) and 3.4 points of AP@0.5 (59.1 vs. 55.7). It is worth noting that our method does not rely on image pyramids and only uses a single input image scale, but still has outstanding AP on small-scale objects. This could only be achieved by high- resolution image inputs with previous methods.
Moreover, our method does not exploit many popular improvements, such as iterative regression [9], hard nega- tive mining [35], context modeling [16], stronger data aug- mentation [22], etc. These improvements are complemen- tary to FPNs and should boost accuracy further.
Recently, FPN has enabled new top results in all tracks of the COCO competition, including detection, instance segmentation, and keypoint estimation. See [14] for details.
# 6. Extensions: Segmentation Proposals
Our method is a generic pyramid representation and can be used in applications other than object detection. In this section we use FPNs to generate segmentation proposals, following the DeepMask/SharpMask framework [27, 28].
DeepMask/SharpMask were trained on image crops for predicting instance segments and object/non-object scores. At inference time, these models are run convolutionally to generate dense proposals in an image. To generate segments at multiple scales, image pyramids are necessary [27, 28].
It is easy to adapt FPN to generate mask proposals. We use a fully convolutional setup for both training and inference. We construct our feature pyramid as in Sec. 5.1 and set d = 128. On top of each level of the feature pyramid, we apply a small 5×5 MLP to predict 14×14 masks and object scores in a fully convolutional fashion, see Fig. 4. Additionally, motivated by the use of 2 scales per octave in the image pyramid of [27, 28], we use a second MLP of input size 7×7 to handle half octaves. The two MLPs play a similar role as anchors in RPN. The architecture is trained end-to-end; full implementation details are given in the appendix.
                        image pyramid   AR     ARs    ARm    ARl    time (s)
DeepMask [27]           ✓               37.1   15.8   50.1   54.9   0.49
SharpMask [28]          ✓               39.8   17.4   53.1   59.1   0.77
InstanceFCN [4]         ✓               39.2   -      -      -      1.50†
FPN Mask Results:
single MLP [5×5]                        43.4   32.5   49.2   53.7   0.15
single MLP [7×7]                        43.5   30.0   49.6   57.8   0.19
dual MLP [5×5, 7×7]                     45.7   31.9   51.5   60.8   0.24
+ 2× mask resolution                    46.7   31.7   53.1   63.2   0.25
+ 2× train schedule                     48.1   32.6   54.2   65.6   0.25

Table 6. Instance segmentation proposals evaluated on the first 5k COCO val images. All models are trained on the train set. DeepMask, SharpMask, and FPN use ResNet-50 while InstanceFCN uses VGG-16. DeepMask and SharpMask performance is computed with models available from https://github.com/facebookresearch/deepmask (both are the "zoom" variants). † Runtimes are measured on an NVIDIA M40 GPU, except the InstanceFCN timing which is based on the slower K40.
# 6.1. Segmentation Proposal Results
Results are shown in Table 6. We report segment AR and segment AR on small, medium, and large objects, always for 1000 proposals. Our baseline FPN model with a single 5×5 MLP achieves an AR of 43.4. Switching to a slightly larger 7×7 MLP leaves accuracy largely unchanged. Using both MLPs together increases accuracy to 45.7 AR. Increasing mask output size from 14×14 to 28×28 increases AR another point (larger sizes begin to degrade accuracy). Finally, doubling the training iterations increases AR to 48.1. We also report comparisons to DeepMask [27], SharpMask [28], and InstanceFCN [4], the previous state-of-the-art methods in mask proposal generation. We outperform the accuracy of these approaches by over 8.3 points AR. In particular, we nearly double the accuracy on small objects. Existing mask proposal methods [27, 28, 4] are based on densely sampled image pyramids (e.g., scaled by 2^{-2:0.5:1} in [27, 28]), making them computationally expensive. Our approach, based on FPNs, is substantially faster (our models run at 6 to 7 FPS). These results demonstrate that our model is a generic feature extractor and can replace image pyramids for other multi-scale detection problems.
# 7. Conclusion
We have presented a clean and simple framework for building feature pyramids inside ConvNets. Our method shows significant improvements over several strong baselines and competition winners. Thus, it provides a practical solution for research and applications of feature pyramids, without the need of computing image pyramids. Finally, our study suggests that despite the strong representational power of deep ConvNets and their implicit robustness to scale variation, it is still critical to explicitly address multi-scale problems using pyramid representations.
# A. Implementation of Segmentation Proposals
We use our feature pyramid networks to efficiently generate object segment proposals, adopting an image-centric training strategy popular for object detection [11, 29]. Our FPN mask generation model inherits many of the ideas and motivations from DeepMask/SharpMask [27, 28]. However, in contrast to these models, which were trained on image crops and used a densely sampled image pyramid for inference, we perform fully-convolutional training for mask prediction on a feature pyramid. While this requires changing many of the specifics, our implementation remains similar in spirit to DeepMask. Specifically, to define the label of a mask instance at each sliding window, we think of this window as being a crop on the input image, allowing us to inherit definitions of positives/negatives from DeepMask. We give more details next, see also Fig. 4 for a visualization. We construct the feature pyramid with P2–6 using the same architecture as described in Sec. 5.1. We set d = 128. Each level of our feature pyramid is used for predicting masks at a different scale. As in DeepMask, we define the scale of a mask as the max of its width and height. Masks with scales of {32, 64, 128, 256, 512} pixels map to {P2, P3, P4, P5, P6}, respectively, and are handled by a 5×5 MLP. As DeepMask uses a pyramid with half octaves, we use a second slightly larger MLP of size 7×7 (7 ≈ 5√2) to handle half-octaves in our model (e.g., a 128√2 scale mask is predicted by the 7×7 MLP on P4). Objects at intermediate scales are mapped to the nearest scale in log space. As the MLP must predict objects at a range of scales for each pyramid level (specifically a half octave range), some padding must be given around the canonical object size. We use 25% padding. This means that the mask output over {P2, P3, P4, P5, P6} maps to {40, 80, 160, 320, 640} sized image regions for the 5×5 MLP (and to √2 larger corresponding sizes for the 7×7 MLP).
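The scale-to-level assignment just described (nearest scale in log space, with half octaves routed to the 7×7 MLP) reduces to a small helper; the sketch below uses our own names and is only one plausible reading:

```python
import math

# Canonical mask scales handled by the 5x5 MLP on {P2, ..., P6}.
LEVEL_SCALES = {2: 32, 3: 64, 4: 128, 5: 256, 6: 512}

def assign_level(mask_w, mask_h):
    """Map a mask to (pyramid level, MLP) by the nearest scale in log space;
    half-octave scales (s * sqrt(2)) go to the 7x7 MLP."""
    scale = max(mask_w, mask_h)  # the paper's definition of mask scale
    grid = []
    for level, s in LEVEL_SCALES.items():
        grid.append((level, "5x5", s))
        grid.append((level, "7x7", s * math.sqrt(2)))
    level, head, _ = min(grid, key=lambda t: abs(math.log2(scale) - math.log2(t[2])))
    return level, head

print(assign_level(181, 90))  # scale ~ 128*sqrt(2) -> (4, '7x7')
```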
Each spatial position in the feature map is used to predict a mask at a different location. Specifically, at scale Pk, each spatial position in the feature map is used to predict the mask whose center falls within 2^k pixels of that location (corresponding to ±1 cell offset in the feature map). If no object center falls within this range, the location is considered a negative, and, as in DeepMask, is used only for training the score branch and not the mask branch.
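In code, this positive/negative definition at level P_k is a distance test on object centers; a hedged NumPy sketch (the max-norm test for the ±1 cell offset and all variable names are our assumptions):

```python
import numpy as np

def label_positions(feat_h, feat_w, level_k, object_centers):
    """Mark a cell of the P_k feature map positive if some object center
    (cy, cx), in image pixels, falls within 2**level_k pixels of the cell's
    center (roughly a +-1 cell offset at stride 2**level_k)."""
    stride = 2 ** level_k
    ys, xs = np.meshgrid(np.arange(feat_h), np.arange(feat_w), indexing="ij")
    cell_centers = np.stack([(ys + 0.5) * stride, (xs + 0.5) * stride], axis=-1)
    positive = np.zeros((feat_h, feat_w), dtype=bool)
    for cy, cx in object_centers:
        offset = np.abs(cell_centers - np.array([cy, cx]))
        positive |= offset.max(axis=-1) < stride
    return positive  # negatives train only the score branch, not the mask branch
```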
The MLP we use for predicting the mask and score is fairly simple. We apply a 5×5 kernel with 512 outputs, followed by sibling fully connected layers to predict a 14×14 mask (14² outputs) and object score (1 output). The model is implemented in a fully convolutional manner (using 1×1 convolutions in place of fully connected layers). The 7×7 MLP for handling objects at half octave scales is identical to the 5×5 MLP except for its larger input region.
During training, we randomly sample 2048 examples per mini-batch (128 examples per image from 16 images) with a positive/negative sampling ratio of 1:3. The mask loss is given 10× higher weight than the score loss. This model is trained end-to-end on 8 GPUs using synchronized SGD (2 images per GPU). We start with a learning rate of 0.03 and train for 80k mini-batches, dividing the learning rate by 10 after 60k mini-batches. The image scale is set to 800 pixels during training and testing (we do not use scale jitter). During inference our fully-convolutional model predicts scores at all positions and scales and masks at the 1000 highest scoring locations. We do not perform any non-maximum suppression or post-processing.
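The 1:3 sampling ratio and the 10× mask-loss weight translate directly into the training loss; a minimal PyTorch-style sketch (function and argument names are ours, and inputs are assumed to be already sampled at 1:3):

```python
import torch
import torch.nn.functional as F

def mask_head_loss(score_logits, mask_logits, score_targets, mask_targets, is_positive):
    """Combined loss for one mini-batch of sampled windows (assumed already
    drawn at a 1:3 positive/negative ratio): score loss on all windows,
    mask loss on positives only, with the 10x mask weighting."""
    score_loss = F.binary_cross_entropy_with_logits(score_logits, score_targets)
    if is_positive.any():
        mask_loss = F.binary_cross_entropy_with_logits(
            mask_logits[is_positive], mask_targets[is_positive])
    else:
        mask_loss = score_logits.new_zeros(())
    return score_loss + 10.0 * mask_loss
```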
# References
[1] E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt, and J. M. Ogden. Pyramid methods in image processing. RCA engineer, 1984.
[2] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In CVPR, 2016.
[3] Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos. A unified multi-scale deep convolutional neural network for fast object detection. In ECCV, 2016.
[4] J. Dai, K. He, Y. Li, S. Ren, and J. Sun. Instance-sensitive fully convolutional networks. In ECCV, 2016.
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[6] P. Dollár, R. Appel, S. Belongie, and P. Perona. Fast feature pyramids for object detection. TPAMI, 2014.
[7] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 2010.
[8] G. Ghiasi and C. C. Fowlkes. Laplacian pyramid reconstruction and refinement for semantic segmentation. In ECCV, 2016.
[9] S. Gidaris and N. Komodakis. Object detection via a multi-region & semantic segmentation-aware CNN model. In ICCV, 2015.
[10] S. Gidaris and N. Komodakis. Attend refine repeat: Active box proposal generation via in-out localization. In BMVC, 2016.
[11] R. Girshick. Fast R-CNN. In ICCV, 2015.
[12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[13] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[14] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. arXiv:1703.06870, 2017.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[17] S. Honari, J. Yosinski, P. Vincent, and C. Pal. Recombinator networks: Learning coarse-to-fine feature aggregation. In CVPR, 2016.
[18] T. Kong, A. Yao, Y. Chen, and F. Sun. Hypernet: Towards accurate region proposal generation and joint object detection. In CVPR, 2016.
[19] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[20] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989.
[21] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed. SSD: Single shot multibox detector. In ECCV, 2016.
[23] W. Liu, A. Rabinovich, and A. C. Berg. ParseNet: Looking wider to see better. In ICLR workshop, 2016.
[24] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[25] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[26] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
[27] P. O. Pinheiro, R. Collobert, and P. Dollár. Learning to segment object candidates. In NIPS, 2015.
[28] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In ECCV, 2016.
[29] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[30] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature maps. PAMI, 2016.
[31] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[32] H. Rowley, S. Baluja, and T. Kanade. Human face detection in visual scenes. Technical Report CMU-CS-95-158R, Carnegie Mellon University, 1995.
[33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[34] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[35] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. In CVPR, 2016.
[36] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[37] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition. IJCV, 2013.
[38] R. Vaillant, C. Monrocq, and Y. LeCun. Original approach for the localisation of objects in images. IEE Proc. on Vision, Image, and Signal Processing, 1994.
[39] S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, 2016.
[40] S. Zagoruyko, A. Lerer, T.-Y. Lin, P. O. Pinheiro, S. Gross, S. Chintala, and P. Dollár. A multipath network for object detection. In BMVC, 2016. | {
"id": "1703.06870"
} |
1612.02136 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 |
Published as a conference paper at ICLR 2017
# MODE REGULARIZED GENERATIVE ADVERSARIAL NETWORKS
†Tong Che*, ‡Yanran Li*, †,§Athul Paul Jacob, †Yoshua Bengio, ‡Wenjie Li †Montreal Institute for Learning Algorithms, Université de Montréal, Montréal, QC H3T 1J4, Canada ‡Department of Computing, The Hong Kong Polytechnic University, Hong Kong §David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada {tong.che,ap.jacob,yoshua.bengio}@umontreal.ca {csyli,cswjli}@comp.polyu.edu.hk
# ABSTRACT
Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem.
# 1 INTRODUCTION
Generative adversarial networks (GAN) (Goodfellow et al., 2014) have demonstrated their potential on various tasks, such as image generation, image super-resolution, 3D object generation, and video prediction (Radford et al., 2015; Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016; Wu et al., 2016; Mathieu et al., 2015). The objective is to train a parametrized function (the generator) which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to that of the data generating distribution. The basic scheme of the GAN training procedure is to train a discriminator which assigns higher probabilities to real data samples and lower probabilities to generated data samples, while simultaneously trying to move the generated samples towards the real data manifold using the gradient information provided by the discriminator. In a typical setting, the generator and the discriminator are represented by deep neural networks.
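For reference, the basic scheme above corresponds to a training loop of the following shape; this is a generic PyTorch sketch with toy architectures and placeholder hyper-parameters, not the setup used in any experiment here:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))  # toy generator
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))   # toy discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)
    fake = G(torch.randn(real.size(0), 64))
    # Discriminator: assign high probability to real, low to generated samples.
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: move generated samples toward higher discriminator values.
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```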
Despite their success, GANs are generally considered as very hard to train due to training instability and sensitivity to hyper-parameters. On the other hand, a common failure pattern observed while training GANs is the collapsing of large volumes of probability mass onto a few modes. Namely, although the generators produce meaningful samples, these samples are often from just a few modes (small regions of high probability under the data distribution). Behind this phenomenon is the missing modes problem, which is widely conceived as a major problem for training GANs: many modes of the data generating distribution are not at all represented in the generated samples, yielding a much lower entropy distribution, with less variety than the data generating distribution.
This issue has been the subject of several recent papers proposing several tricks and new architectures to stabilize GAN's training and encourage its samples' diversity. However, we argue that a general cause behind these problems is the lack of control on the discriminator during GAN training. We would like to encourage the manifold of the samples produced by the generator to move towards that of real data, using the discriminator as a metric. However, even if we train the discriminator to distinguish between these two manifolds, we have no control over the shape of the discriminator function in between these manifolds. In fact, the shape of the discriminator function in the data
*Authors contributed equally.
space can be very non-linear with bad plateaus and wrong maxima and this can therefore hurt the training of GANs (Figure 1).
To remedy this problem, we propose a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm. Differentiating these similarity metrics will provide us with more stable gradients to train our generator. Combining this idea with an approach meant to penalize the missing modes, we propose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust in judging complex generative models, including those which are well-trained and collapsed ones.
Regularizers usually bring a trade-off between model variance and bias. Our results have shown that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and fix the missing mode problem all at once, with positive or at least no negative effects on the generated samples. We also discuss a variant of the regularized GAN algorithm, which can even improve sample quality as compared to the DCGAN baseline.
# 2 RELATED WORK
The GAN approach was initially proposed by Goodfellow et al. (2014) where both the generator and the discriminator are deï¬ned by deep neural networks.
In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarge GAN's representation capacity by introducing an extra vector to allow the generator to produce samples conditioned on other beneficial information. Motivated by this, several conditional variants of GAN have been applied to a wide range of tasks, including image prediction from a normal map Wang & Gupta (2016), image synthesis from text Reed et al. (2016) and edge map Isola et al. (2016), real-time image manipulation Zhu et al. (2016), temporal image generation Zhou & Berg (2016); Saito & Matsumoto (2016); Vondrick et al. (2016), texture synthesis, style transfer, and video stylization Li & Wand (2016).
Researchers also aim at stretching GAN's limit to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework on GAN to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convolutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully design a class of deep convolutional generative adversarial networks which has led to significant improvements on unsupervised image representation learning. Another line of work aimed at improving GANs is through feature learning, including features from the latent space and image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses in the training objectives of generative models. Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the learned features from the discriminator in the GANs for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for the samples' visual fidelity. Recent literature has also shown impressive results on image super-resolution to infer photo-realistic natural images for 4x upscaling factors (Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016).
Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilize GAN's training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) propose a feature matching technique to stabilize GAN's training. The generator is required to match the statistics of intermediate features of the discriminator. A similar idea is adopted by Zhao et al. (2016).
In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in image space further improves GAN's training stability. Furthermore, some researchers make use of information in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016). In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over image and latent spaces produced either by the application of the encoder on the training data or by the application of the generator (decoder) to the latent prior. This is in contrast with the regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. In parallel, Metz et al. (2016) stabilize GANs by unrolling the optimization of the discriminator, which can be considered as work orthogonal to ours.
Our work is related to VAEGAN (Larsen et al., 2015) in terms of training an autoencoder or VAE jointly with the GAN model. However, the variational autoencoder (VAE) in VAEGAN is used to generate samples, whereas our autoencoder-based losses serve as a regularizer to penalize missing modes and thus improve GAN's training stability and sample quality. We demonstrate detailed differences from various aspects in Appendix D.
# 3 MODE REGULARIZERS FOR GANS
The GAN training procedure can be viewed as a non-cooperative two-player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. Then the generator G has to take advantage of the local gradient $\nabla \log D(G)$ provided by the discriminator to improve itself, namely to move towards the data manifold.
We now take a closer look at the root cause of the instabilities while training GANs. The discriminator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014); Denton et al. (2015); Radford et al. (2015), when the data manifold and the generation manifold are disjoint (which is true in almost all practical situations), it is equivalent to training a characteristic function to be very close to 1 on the data manifold, and 0 on the generation manifold. In order to pass good gradient information to the generator, it is important that the trained discriminator produces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example, Denton et al. (2015) noted a common failure pattern for training GANs which is the vanishing gradient problem, in which the discriminator D perfectly classifies real and fake examples, such that around the fake examples, D is nearly zero. In such cases, the generator will receive no gradient to improve itself.1
Another important problem while training GANs is mode missing. In theory, if the generated data and the real data come from the same low dimensional manifold, the discriminator can help the generator distribute its probability mass, because the missing modes will not have near-0 probability under the generator and so the samples in these areas can be appropriately concentrated towards regions where D is closer to 1. However, in practice since the two manifolds are disjoint, D tends to be near 1 on all the real data samples, so large modes usually have a much higher chance of attracting the gradient of the discriminator. For a typical GAN model, since all modes have similar D values, there is no reason why the generator cannot collapse to just a few major modes. In other words, since the discriminator's output is nearly 0 and 1 on fake and real data respectively, the generator is not penalized for missing modes.
# 3.1 GEOMETRIC METRICS REGULARIZER
Compared with the objective for the GAN generator, the optimization targets for supervised learning are more stable from an optimization point of view. The difference is clear: the optimization target for the GAN generator is a learned discriminator. While in supervised models, the optimization targets are distance functions with nice geometric properties. The latter usually provides much easier training gradients than the former, especially at the early stages of training.
1 This problem exists even when we use log D(G(z)) as the target for the generator, as noted by Denton et al. (2015) and in our experiments.
Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z) : Z → X generates samples by sampling first from a fixed prior distribution in space Z followed by a deterministic trainable transformation G into the sample space X. Together with G, we also jointly train an encoder E(x) : X → Z. Assume d is some similarity metric in the data space; we add $\mathbb{E}_{x \sim p_d}[d(x, G \circ E(x))]$ as a regularizer, where $p_d$ is the data generating distribution. The encoder itself is trained by minimizing the same reconstruction error. In practice, there are many options for the distance measure d. For instance, the pixel-wise L2 distance, or the distance of learned features by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classifier (Ledig et al., 2016).
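As a sketch, the regularizer with pixel-wise L2 as d is a one-line expectation (names are ours; other metrics slot in the same way):

```python
import torch

def geometric_regularizer(x, E, G):
    """E_x[ d(x, G(E(x))) ] with d chosen as the pixel-wise squared L2
    distance; the same term trains both the encoder E and the generator G."""
    recon = G(E(x))
    return ((x - recon) ** 2).flatten(1).sum(dim=1).mean()
```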
The geometric intuition for this regularizer is straightforward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds by other geometric distances, say, an $L^s$ metric. The idea of adding an encoder is equivalent to first training a point-to-point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds.
# 3.2 MODE REGULARIZER
In addition to the metric regularizer, we propose a mode regularizer to further penalize missing modes. In traditional GANs, the optimization target for the generator is the empirical sum $\sum_i \nabla_\theta \log D(G_\theta(z_i))$. The missing mode problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect so that the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes.
As an example, consider the situation in Figure 2. For most z, the gradient of the generator $\nabla_\theta \log D(G_\theta(z))$ pushes the generator towards the major mode M1. Only when G(z) is very close to the mode M2 can the generator get gradients to push itself towards the minor mode M2. However, it is possible that such z is of low or zero probability in the prior distribution p0.
Given this observation, consider a regularized GAN model with the metric regularizer. Assume M0 is a minor mode of the data generating distribution. For x ∈ M0, we know that if G ∘ E is a good autoencoder, G(E(x)) will be located very close to mode M0. Since there are sufficient training examples of mode M0 in the training data, we add the mode regularizer $\mathbb{E}_{x \sim p_d}[\log D(G \circ E(x))]$ to our optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve fair probability mass distribution across different modes.
Figure 2: Illustration of missing modes problem.
In short, our regularized optimization target for the generator and the encoder becomes:
$$T_G = -\mathbb{E}_z[\log D(G(z))] + \mathbb{E}_{x \sim p_d}\big[\lambda_1 d(x, G \circ E(x)) - \lambda_2 \log D(G \circ E(x))\big] \qquad (1)$$
$$T_E = \mathbb{E}_{x \sim p_d}\big[\lambda_1 d(x, G \circ E(x)) - \lambda_2 \log D(G \circ E(x))\big] \qquad (2)$$
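Read as minimized losses, Eqs. (1)–(2) share the regularizer term; a hedged PyTorch sketch (D is assumed to output probabilities, eps is our numerical-stability addition, and the λ defaults are the grid-search weights used later):

```python
import torch

def regularized_targets(x, z, G, E, D, lam1=0.2, lam2=0.4, eps=1e-8):
    """Losses T_G and T_E of Eqs. (1)-(2). D is assumed to output
    probabilities in (0, 1); eps is a numerical-stability addition."""
    recon = G(E(x))                                            # G o E(x)
    d_term = ((x - recon) ** 2).flatten(1).sum(dim=1).mean()   # d(x, G o E(x))
    mode_term = torch.log(D(recon) + eps).mean()               # log D(G o E(x))
    reg = lam1 * d_term - lam2 * mode_term
    t_g = -torch.log(D(G(z)) + eps).mean() + reg               # Eq. (1)
    t_e = reg                                                  # Eq. (2)
    return t_g, t_e
```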
# 3.3 MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS
On some large scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of samples may not be as good without carefully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularized GANs, which is very stable and much easier to tune for producing good samples.
The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution.
An example of manifold-diffusion training of GAN (MDGAN for short) is as follows: we train a discriminator D1 which separates between the samples x and G ∘ E(x), for x from the data, and we optimize G with respect to the regularized GAN loss $\mathbb{E}[\log D_1(G \circ E(x)) + \lambda d(x, G \circ E(x))]$ in order to match the two manifolds. In the diffusion step we train a discriminator D2 between distributions G(z) and G ∘ E(x), and we train G to maximize log D2(G(z)). Since these two distributions are now nearly on the same low dimensional manifold, the discriminator D2 provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of generated samples.
# 3.4 EVALUATION METRICS FOR MODE MISSING
In order to estimate both the missing modes and the sample qualities in our experiments, we used several different metrics for different experiments instead of human annotators.
The inception score (Salimans et al., 2016) was considered as a good assessment for sample quality from a labelled dataset:
$$\exp\big(\mathbb{E}_x\, \mathrm{KL}(p(y|x) \,\|\, p^*(y))\big) \qquad (3)$$
where x denotes one sample, p(y|x) is the softmax output of a trained classifier of the labels, and p∗(y) is the overall label distribution of generated samples. The intuition behind this score is that a strong classifier usually has high confidence for good samples. However, the inception score is sometimes not a good metric for our purpose. Assume a generative model that collapses to a very bad image. Although the model is very bad, it can have a perfect inception score, because p(y|x) can have a high entropy and p∗(y) can have a low entropy. So instead, for labelled datasets, we propose another assessment for both visual quality and variety of samples, the MODE score:
$$\exp\big(\mathbb{E}_x\, \mathrm{KL}(p(y|x) \,\|\, p(y)) - \mathrm{KL}(p^*(y) \,\|\, p(y))\big) \qquad (4)$$
where p(y) is the distribution of labels in the training data. According to our human evaluation experience, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric.
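Given softmax outputs p(y|x) on a batch of generated samples and the training label distribution p(y), the MODE score of Eq. (4) is a few lines of NumPy; in this sketch the expectation is estimated by the batch average:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def mode_score(p_y_given_x, p_y_train):
    """p_y_given_x: (N, C) softmax outputs on N generated samples;
    p_y_train: (C,) label distribution of the training data."""
    p_y_gen = p_y_given_x.mean(axis=0)  # p*(y), label distribution of samples
    return float(np.exp(kl(p_y_given_x, p_y_train).mean() - kl(p_y_gen, p_y_train)))
```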
However, in datasets without labels (LSUN) or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator for the quantity (see (Goodfellow et al., 2014) for proof):
$$D^*(s) \approx \frac{p_g(s)}{p_g(s) + p_d(s)} \qquad (5)$$
where pg is the probability density of the generator and pd is the density of the data distribution. To prevent D∗ from learning a perfect 0-1 separation of pg and pd, we inject a zero-mean Gaussian noise into the inputs when training D∗. After training, we test D∗ on the test set T of the real dataset. If for any test sample t ∈ T, the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes.
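Schematically, once a noise-injected D∗ has been trained on generated versus real batches, the missing-mode count is a thresholded pass over the test set; the threshold below is our own placeholder for "close to 1":

```python
import torch

def count_missing_mode_images(d_star, test_loader, threshold=0.99):
    """Count test images t with D*(t) ~ 1, i.e. images whose surrounding
    modes the generator covers with (almost) no mass. d_star is assumed to
    output probabilities and to have been trained beforehand on
    noise-injected real and generated batches."""
    missing = 0
    with torch.no_grad():
        for t, _ in test_loader:  # assumes an (image, label) loader
            missing += int((d_star(t) > threshold).sum())
    return missing
```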
# 4 EXPERIMENTS
# 4.1 MNIST
We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we define the term "mode" here as a connected component of the data manifold.
Table 1: Grid Search for Hyperparameters.
nLayerG    [2, 3, 4]
nLayerD    [2, 3, 4]
sizeG      [400, 800, 1600, 3200]
sizeD      [256, 512, 1024]
dropoutD   [True, False]
optimG     [SGD, Adam]
optimD     [SGD, Adam]
lr         [1e-2, 1e-3, 1e-4]
_
# 4.1.1 GRID SEARCH FOR MNIST GAN MODELS
In order to systematically explore the effect of our proposed regularizers on GAN models in terms of improving stability and sample quality, we use a large scale grid search of different GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ1 = 0.2 and λ2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to those proposed in Zhao et al. (2016). Please refer to it for detailed explanations regarding these hyper-parameters.
For evaluation, we first train a 4-layer CNN classifier on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE scores is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores and thus demonstrates its benefits on stabilizing GANs and improving sample qualities.
Figure 3: The distributions of MODE scores for GAN and regularized GAN.
To illustrate the effect of regularizers with different coefficients, we randomly pick an architecture and train it with different λ1 = λ2. The results are shown in Figure 4.
Figure 4: (Left 1-5) Different hyperparameters for MNIST generation. The values of λ1 and λ2 in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples through grid search for GAN and Regularized GAN.
# 4.1.2 COMPOSITIONAL MNIST DATA WITH 1000 MODES
In order to quantitatively study the effect of our regularizers on the missing modes, we concatenate three MNIST digits to form a number in [0, 999] in a single 64×64 image, and then train DCGAN as a baseline model on this 1000-mode dataset. The digits on the image are sampled with different
probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained classifier for MNIST instead of a human to evaluate the models.
Table 2: Results for Compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) allows to substantially reduce the number of missed modes as well as the KL divergence that measures the plausibility of the generated samples (like in the Inception score).
             Set 1          Set 2          Set 3          Set 4
             #Miss   KL     #Miss   KL     #Miss   KL     #Miss   KL
DCGAN        204.7   77.9   204.3   60.2   103.4   75.9   89.3    77.8
Reg-DCGAN    32.1    62.3   71.5    58.9   42.7    68.4   31.6    67.8
The performances on the compositional experiment are measured by two metrics. #Miss represents the classifier-reported number of missing modes, which is the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and the KL divergence drop dramatically among all the sets of the compositional MNIST dataset, which again proves the effectiveness of our regularizer for preventing the missing modes problem.
# 4.2 CELEBA
To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters together with the DCGAN baseline on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B.
# 4.2.1 MISSING MODES ESTIMATION ON CELEBA
We also employ a third party discriminator trained with injected noise as a metric for missing mode estimation. To implement this, we add noise in the input layer of the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as a mode estimator, with the same architecture and hyper-parameters on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode estimator outputs can be viewed as lying on the missing modes.
Table 3: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to other methods.
σ     DCGAN (100)   DCGAN (200)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
3.5   5463          17089         754             3644            74
4.0   590           15832         42              391             13
The comparison result is shown in Table 3. Both our proposed Regularized-GAN and MDGAN outperform baseline DCGAN models on all settings. In particular, MDGAN surpasses all other models, showing its superiority at preserving modes. We also find that, although sharing the same architecture, the DCGAN with 200-dimensional noise performs considerably worse than that with 100-dimensional noise as input. On the contrary, our regularized GAN performs more consistently.
To get a better understanding of the models' performance, we want to figure out when and where these models miss the modes. Visualizing the test images associated with missed modes is instructive. In Figure 5, the left three images are missed by all models. It is rare to see in the training data the cap in the second image and the type of background in the third, which thus can be viewed as small modes under this situation. These three images should be considered as the hardest test data
for GAN to learn. Nonetheless, our best model, MDGAN, still captures certain small modes. The seven images on the right in Figure 5 are only missed by DCGAN. The sideface, paleface, black, and the berets are special attributes among these images, but our proposed MDGAN performs well on all of them.
Figure 5: Test set images that are on missing modes. Left: Both MDGAN and DCGAN missing. Right: Only DCGAN missing.
# 4.2.2 QUALITATIVE EVALUATION OF GENERATED SAMPLES
After quantitative evaluation, we manually examine the generated samples by our regularized GAN to see whether the proposed regularizer has side-effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6.
Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally sharper in texture.
Both MDGAN and Regularized-GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.
As to sample quality, it is worth noting that the samples from MDGAN enjoy fewer distortions. With all four other models, the majority of generated samples suffer from some sort of distortion. However, for the samples generated by MDGAN, the level of distortion is lower than in the other four models. We attribute this to the help of the autoencoder as the regularizer to alter the generation manifolds. In this way, the generator is able to learn fine-grained details such as face edges. As a result, MDGAN is able to reduce distortions.
2 For fair comparison, we also recommend readers to refer to the original papers Dumoulin et al. (2016); Larsen et al. (2015); Radford et al. (2015) for the reported samples of the compared models. The ALI samples are from https://github.com/IshmaelBelghazi/ALI/blob/master/paper/celeba_samples.png and we reverted them to the original 64x64 size. The DCGAN samples are from https://github.com/Newmu/dcgan_code/
Figure 7: Sideface samples generated by Regularized-GAN and MDGAN.
In terms of missing modes problem, we instructed ï¬ve individuals to conduct human evaluation on the generated samples. They achieve consensus that MDGAN wins in terms of mode diversities. Two people pointed out that MDGAN generates a larger amount of samples with side faces than other models. We select several of these side face samples in Figure 7. Clearly, our samples maintain acceptable visual ï¬delity meanwhile share diverse modes. Combined with the above quantitative results, it is convincing that our regularizers bring beneï¬ts for both training stability and mode variety without the loss of sample quality.
# 5 CONCLUSIONS
Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks, training them is considered highly unstable, very difficult and sensitive to hyper-parameters, all the while missing modes from the data distribution or even collapsing large amounts of probability mass on some modes. Successful GAN training usually requires large amounts of human and computing effort to fine-tune the hyper-parameters, in order to stabilize training and avoid collapsing. Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs.
We provide systematic ways to measure and avoid the missing modes problem and stabilize training with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics can provide more stable gradients than trained discriminators, and when combined with the encoder, they can be used as regularizers for training. These regularizers can also penalize missing modes and encourage a fair distribution of probability mass on the generation manifold.
# ACKNOWLEDGEMENTS
We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, and Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of grid search experiments on the EBGAN model, as well as Anders Boesen Lindbo Larsen for kindly helping us with running the VAEGAN experiments. We appreciate the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, National Natural Science Foundation of China (61672445 and 61272291), Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6).
# REFERENCES
Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494, 2015.
Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2016.
Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. arXiv preprint arXiv:1604.04382, 2016.
Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
Masaki Saito and Eiichi Matsumoto. Temporal generative adversarial nets. arXiv preprint arXiv:1611.06624, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.
Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In Advances in Neural Information Processing Systems, pp. 613–621, 2016.
Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Neural Information Processing Systems (NIPS), 2016.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
Yipin Zhou and Tamara L Berg. Learning temporal transformations from time-lapse videos. In European Conference on Computer Vision, pp. 262–277. Springer, 2016.
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016.
# A APPENDIX: PSEUDO CODE FOR MDGAN
In this Appendix, we give the detailed training procedure of an MDGAN example we discuss in Section 3.3.
Manifold Step: 1. Sample {x1, x2, · · · xm} from data generating distribution pdata(x). 2. Update discriminator D1 using SGD with gradient ascent:
$$\nabla_{\theta_{D_1}} \frac{1}{m} \sum_{i=1}^{m} \big[\log D_1(x_i) + \log(1 - D_1(G(E(x_i))))\big]$$
3. Update generator G using SGD with gradient ascent:
$$\nabla_{\theta_G} \frac{1}{m} \sum_{i=1}^{m} \big[\lambda \log D_1(G(E(x_i))) - \|x_i - G(E(x_i))\|^2\big]$$
Diffusion Step: 4. Sample {x1, x2, · · · , xm} from the data generating distribution pdata(x). 5. Sample {z1, z2, · · · , zm} from the prior distribution $p_\sigma(z)$. 6. Update discriminator D2 using SGD with gradient ascent:
$$\nabla_{\theta_{D_2}} \frac{1}{m} \sum_{i=1}^{m} \big[\log D_2(G(E(x_i))) + \log(1 - D_2(G(z_i)))\big]$$
7. Update generator G using SGD with gradient ascent:
$$\nabla_{\theta_G} \frac{1}{m} \sum_{i=1}^{m} \log D_2(G(z_i))$$
Figure 8: The detailed training procedure of an MDGAN example.
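For concreteness, one full iteration of Figure 8 in PyTorch-style code; optimizers, the value of λ, and whether 'g' also covers the encoder's parameters are placeholders, and D1/D2 are assumed to output probabilities:

```python
import torch

def mdgan_iteration(x, z, G, E, D1, D2, opts, lam=1e-2, eps=1e-8):
    """One manifold + diffusion iteration; opts maps 'd1', 'd2', 'g' to
    optimizers ('g' may also cover E's parameters)."""
    # Manifold step: D1 separates x from G(E(x)); G matches the manifolds.
    recon = G(E(x)).detach()
    d1_loss = -(torch.log(D1(x) + eps) + torch.log(1 - D1(recon) + eps)).mean()
    opts["d1"].zero_grad(); d1_loss.backward(); opts["d1"].step()
    recon = G(E(x))
    g_manifold = -(lam * torch.log(D1(recon) + eps)
                   - ((x - recon) ** 2).flatten(1).sum(dim=1)).mean()
    opts["g"].zero_grad(); g_manifold.backward(); opts["g"].step()
    # Diffusion step: D2 separates G(E(x)) from G(z); G maximizes log D2(G(z)).
    recon = G(E(x)).detach()
    d2_loss = -(torch.log(D2(recon) + eps)
                + torch.log(1 - D2(G(z).detach()) + eps)).mean()
    opts["d2"].zero_grad(); d2_loss.backward(); opts["d2"].step()
    g_diffusion = -torch.log(D2(G(z)) + eps).mean()
    opts["g"].zero_grad(); g_diffusion.backward(); opts["g"].step()
```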
# B APPENDIX: ARCHITECTURE FOR EXPERIMENTS
We use similar architectures for the Compositional MNIST and CelebA experiments. The architecture is based on that found in DCGAN (Radford et al., 2015). Apart from the discriminator and generator, which are the same as in DCGAN, we add an encoder which is the "inverse" of the generator, obtained by reversing the order of layers and replacing the de-convolutional layers with convolutional layers.
One has to pay particular attention to batch normalization layers. In DCGAN, there are batch normalization layers both in the generator and the discriminator. However, two classes of data go through the batch normalization layers in the generator: one comes from sampled noise z, the other comes from the encoder. In our implementation, we separate the batch statistics for these two classes of data in the generator, while keeping the parameters of the BN layers shared. In this way, the batch statistics of these two kinds of batches cannot interfere with each other.
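One way to realize this shared-parameter, separate-statistics batch normalization is to keep two BatchNorm modules and tie their affine parameters; a hedged PyTorch sketch (class and flag names are ours):

```python
import torch
import torch.nn as nn

class DualStatsBN(nn.Module):
    """Batch normalization with a single shared affine transform but
    separate running statistics for 'noise' batches (from z) and
    'encoded' batches (from E(x))."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_noise = nn.BatchNorm2d(num_features)
        self.bn_encoded = nn.BatchNorm2d(num_features)
        # Tie the learnable scale/shift so only the statistics differ.
        self.bn_encoded.weight = self.bn_noise.weight
        self.bn_encoded.bias = self.bn_noise.bias

    def forward(self, h, from_encoder):
        return self.bn_encoded(h) if from_encoder else self.bn_noise(h)
```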
# C APPENDIX: ADDITIONAL SYNTHESIZED EXPERIMENTS
To demonstrate the effectiveness of the mode-regularized GANs proposed in this paper, we train a very simple GAN architecture on a synthesized 2D dataset, following Metz et al. (2016).
The data is sampled from a mixture of 6 Gaussians, with a standard deviation of 0.1. The means of the Gaussians are placed around a circle with radius 5. The generator network has two ReLU hidden layers with 128 neurons. It generates 2D output samples from 3D uniform noise on [0, 1]. The discriminator consists of only one fully connected layer of ReLU neurons, mapping the 2D input to
a real 1D number. Both networks are optimized with the Adam optimizer with a learning rate of 1e-4.
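This setup is easy to reproduce; a sketch under the stated configuration (the single-layer discriminator reading is ambiguous in the text, so the line below is one plausible interpretation):

```python
import math
import torch
import torch.nn as nn

def sample_ring(n, modes=6, radius=5.0, std=0.1):
    """Mixture of `modes` Gaussians (std 0.1) centered on a radius-5 circle."""
    k = torch.randint(modes, (n,))
    angles = 2 * math.pi * k.float() / modes
    centers = torch.stack([radius * torch.cos(angles),
                           radius * torch.sin(angles)], dim=1)
    return centers + std * torch.randn(n, 2)

G = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                  nn.Linear(128, 128), nn.ReLU(),
                  nn.Linear(128, 2))   # 3D uniform noise -> 2D sample
D = nn.Sequential(nn.Linear(2, 1))    # one FC layer -> real 1D output
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
z = torch.rand(256, 3)                # uniform noise on [0, 1]^3
x_real = sample_ring(256)
```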
In the regularized version, we choose λ1 = λ2 = 0.005. The comparison between the generator distribution from the standard GAN and our proposed regularized GAN is shown in Figure 9.
Figure 9: Comparison results on a toy 2D mixture of Gaussians dataset. The columns on the left show heatmaps of the generator distributions as the number of training epochs increases, whereas the rightmost column presents the target, the original data distribution. The top row shows the standard GAN result. The generator has a hard time oscillating among the modes of the data distribution, and is only able to "recover" a single data mode at once. In contrast, the bottom row shows results of our regularized GAN. Its generator quickly captures the underlying multiple modes and fits the target distribution.
# D APPENDIX: COMPARISON WITH VAEGAN
In this appendix section, we demonstrate the effectiveness and uniqueness of the mode-regularized GANs proposed in this paper as compared to Larsen et al. (2015), in terms of theoretical differences, sample quality, and number of missing modes.
With regard to the theoretical difference, the optimization of VAEGAN relies on the probabilistic variational bound, namely $\log p(x) \geq \mathbb{E}_{q(z|x)}[\log p(x|z)] - \mathrm{KL}(q(z|x) \,\|\, p(z))$. This variational bound together with a GAN loss is optimized with several assumptions imposed in VAEGAN:
1. In general, VAE is based on the assumption that the true posterior p(z|x) can be well approximated by a factorized Gaussian distribution q.
2. As to VAEGAN, it is also assumed that the maximum likelihood objective does not conflict with the GAN objective in terms of the probabilistic framework.
The first assumption does not necessarily hold for GANs. We have found that in some trained DCGAN models, the real posterior p(z|x) is not even guaranteed to have only one mode, not to mention being anything close to a factorized Gaussian. We believe that this difference in probabilistic framework is an essential obstacle when one tries to use the objective of VAEGAN as a regularizer. In our algorithm, however, we use a plain auto-encoder instead of a VAE as the objective. Plain auto-encoders work better than VAEs for our purposes because, as long as the model G(z) is able to generate the training samples, there always exists a function E∗(x) such that G(E∗(x)) = x. Our encoder can therefore be viewed as being trained to approximate this real encoder E∗. There are no conflicts between a good GAN generator and our regularization objective. Hence, our objectives can be used as regularizers for encoding the prior knowledge that good models should be able to generate the training samples. This is why our work is essentially different from VAEGAN. In our experiments, we also believe that this is the reason why VAEGAN generates worse samples than a carefully tuned regularized GAN.
In terms of sample quality and missing modes, we run the official code of VAEGAN with their default setting (see note 3). We train VAEGAN for 30 epochs (see note 4) and our models for only 20 epochs. For fairness,
3 https://github.com/andersbll/autoencoding_beyond_pixels
4 Note that we also trained a 20-epoch version of VAEGAN; however, the samples seemed worse.
their model was run 3 times and the trained model with the best sample visual quality was taken for the comparison.
The generated samples are shown in Figure 10. The most obvious difference between our samples and VAEGAN's samples is the face distortion, which is consistent with our experimental results in Section 4.2.2. We conjecture that the distortions of VAEGAN's samples are due to the conflicts between the two objectives, as we presented above. In other words, the way we introduce auto-encoders as regularizers for GAN models is different from VAEGAN's. The difference is that the second assumption mentioned above is not required in our approach. In our framework, the auto-encoder helps alter the generation manifolds, leading to fewer distortions in fine-grained details in our generated samples.
Figure 10: Samples generated by our models and VAEGAN. The third row shows samples generated by our self-trained VAEGAN model with default settings. The last row shows generated samples reported in the original VAEGAN paper. We depict both of them here for a fair comparison.
In terms of the missing modes problem, we use the same method described in Section 4.2.1 for computing the number of images with missing modes. The results are shown below.
Table 4: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to VAEGAN.
σ     VAEGAN (100)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
3.5   9720           754             3644            74
4.0   5862           42              391             13
We see that using our proposed regularizers results in a huge drop in the number of missing modes. We conjecture that the reason why VAEGAN performs very badly in our missing-modes metric is that the samples it generates are of low quality, so the discriminator classifies the samples as "not on mode". Namely, the data generated is too far away from many real data modes. Essentially, if a model generates very bad samples, we can say that the model misses all or most modes.
To conduct a fairer evaluation between VAEGAN and our methods, we also perform a blind human evaluation. Again we instructed five individuals to conduct this evaluation of sample variability. Without telling them which samples were generated by VAEGAN and which by our methods, four people agreed that our method wins in terms of sample diversity. One person thought the samples were equally diverse.
In conclusion, we demonstrate that our proposed mode-regularized GANs, i.e., Reg-GAN and MDGAN, are theoretically different from VAEGAN, as discussed above. These differences empirically result in better sample quality and mode preserving ability, which are our main contributions.
"id": "1511.05440"
} |
1612.01543 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 |
Published as a conference paper at ICLR 2017
# TOWARDS THE LIMIT OF NETWORK QUANTIZATION
Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee Samsung US R&D Center, San Diego, CA 92121, USA {yoojin.c,mostafa.e,jungwon2.lee}@samsung.com
# ABSTRACT
Network quantization is one of the network compression techniques used to reduce the redundancy of deep neural networks. It reduces the number of distinct network parameter values by quantization in order to save the storage for them. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When optimal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information theory and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm. Finally, using the simple uniform quantization followed by Huffman coding, we show from our experiments that compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively.
# INTRODUCTION
Deep neural networks have emerged to be the state-of-the-art in the ï¬eld of machine learning for image classiï¬cation, object detection, speech recognition, natural language processing, and machine translation (LeCun et al., 2015). The substantial progress of neural networks however comes with high cost of computations and hardware resources resulting from a large number of parameters. For example, Krizhevsky et al. (2012) came up with a deep convolutional neural network consisting of 61 million parameters and won the ImageNet competition in 2012. It is followed by deeper neural networks with even larger numbers of parameters, e.g., Simonyan & Zisserman (2014).
The large sizes of deep neural networks make it difï¬cult to deploy them on resource-limited devices, e.g., mobile or portable devices, and network compression is of great interest in recent years to reduce computational cost and memory requirements for deep neural networks. Our interest in this paper is mainly on curtailing the size of the storage (memory) for network parameters (weights and biases). In particular, we focus on the network size compression by reducing the number of distinct network parameters by quantization.
Besides network quantization, network pruning has been studied for network compression to remove redundant parameters permanently from neural networks (Mozer & Smolensky, 1989; LeCun et al., 1989; Hassibi & Stork, 1993; Han et al., 2015b; Lebedev & Lempitsky, 2016; Wen et al., 2016). Matrix/tensor factorization and low-rank approximation have been investigated as well to ï¬nd more efï¬cient representations of neural networks with a smaller number of parameters and consequently to save computations (Sainath et al., 2013; Xue et al., 2013; Jaderberg et al., 2014; Lebedev et al., 2014; Yang et al., 2015; Liu et al., 2015; Kim et al., 2015; Tai et al., 2015; Novikov et al., 2015). Moreover, similar to network quantization, low-precision network implementation has been exam- ined in Vanhoucke et al. (2011); Courbariaux et al. (2014); Anwar et al. (2015); Gupta et al. (2015); Lin et al. (2015a). Some extremes of low-precision neural networks consisting of binary or ternary parameters can be found in Courbariaux et al. (2015); Lin et al. (2015b); Rastegari et al. (2016). We note that these are different types of network compression techniques, which can be employed on top of each other.
The most related work to our investigation in this paper can be found in Gong et al. (2014); Han et al. (2015a), where a conventional quantization method using k-means clustering is employed for net- work quantization. This conventional approach however is proposed with little consideration for the impact of quantization errors on the neural network performance loss and no effort to optimize the quantization procedure for a given compression ratio constraint. In this paper, we reveal the subop- timality of this conventional method and newly design quantization schemes for neural networks. In particular, we formulate an optimization problem to minimize the network performance loss due to quantization given a compression ratio constraint and ï¬nd efï¬cient quantization methods for neural networks.
The main contribution of the paper can be summarized as follows:
• It is derived that the performance loss due to quantization in neural networks can be quantified approximately by the Hessian-weighted distortion measure. Then, Hessian-weighted k-means clustering is proposed for network quantization to minimize the performance loss.

• It is identified that the optimization problem for network quantization given a compression ratio constraint can be reduced to an entropy-constrained scalar quantization (ECSQ) problem when optimal variable-length binary coding is employed after quantization. Two efficient heuristic solutions for ECSQ are proposed for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm.

• As an alternative to the Hessian, it is proposed to utilize some function (e.g., square root) of the second moment estimates of gradients when the Adam (Kingma & Ba, 2014) stochastic gradient descent (SGD) optimizer is used in training. The advantage of using this alternative is that it is computed while training and can be obtained at the end of training at no additional cost.

• It is shown how the proposed network quantization schemes can be applied for quantizing network parameters of all layers together at once, rather than the layer-by-layer network quantization in Gong et al. (2014); Han et al. (2015a). This follows from our investigation that Hessian-weighting can handle the different impact of quantization errors properly not only within layers but also across layers. Moreover, by quantizing network parameters of all layers together, one can even avoid layer-by-layer compression rate optimization.
The rest of the paper is organized as follows. In Section 2, we deï¬ne the network quantization prob- lem and review the conventional quantization method using k-means clustering. Section 3 discusses Hessian-weighted network quantization. Our entropy-constrained network quantization schemes follow in Section 4. Finally, experiment results and conclusion can be found in Section 5 and Sec- tion 6, respectively.
# 2 NETWORK QUANTIZATION
We consider a neural network that is already trained, pruned if employed and ï¬ne-tuned before quan- tization. If no network pruning is employed, all parameters in a network are subject to quantization. For pruned networks, our focus is on quantization of unpruned parameters.
The goal of network quantization is to quantize (unpruned) network parameters in order to reduce the size of the storage for them while minimizing the performance degradation due to quantization. For network quantization, network parameters are grouped into clusters. Parameters in the same cluster share their quantized value, which is the representative value (i.e., cluster center) of the cluster they belong to. After quantization, lossless binary coding follows to encode quantized parameters into binary codewords to store instead of actual parameter values. Either ï¬xed-length binary coding or variable-length binary coding, e.g., Huffman coding, can be employed to this end.
2.1 COMPRESSION RATIO
Suppose that we have a total of N parameters in a neural network. Before quantization, each parameter is assumed to be of b bits. For quantization, we partition the network parameters into k clusters. Let C_i be the set of network parameters in cluster i and let b_i be the number of bits of the codeword assigned to the network parameters in cluster i for 1 ≤ i ≤ k. For a lookup table to decode quantized
values from their binary encoded codewords, we store k binary codewords (b_i bits for 1 ≤ i ≤ k) and corresponding quantized values (b bits for each). The compression ratio is then given by

$$\text{Compression ratio} = \frac{Nb}{\sum_{i=1}^{k} (|C_i| + 1) b_i + kb}. \qquad (1)$$
Observe in (1) that the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the lengths of the binary codewords assigned to them, in particular, when a variable-length code is used for encoding quantized values. For fixed-length codes, however, all codewords are of the same length, i.e., $b_i = \lceil \log_2 k \rceil$ for all $1 \le i \le k$, and thus the compression ratio reduces to a function of the number of clusters only, i.e., $k$, assuming that $N$ and $b$ are given.
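For concreteness, the ratio in (1) is straightforward to compute from cluster sizes and codeword lengths. Below is a minimal sketch in Python; the function name and the worked numbers are our own illustrative choices, not from the paper.

```python
import math

def compression_ratio(cluster_sizes, codeword_bits, b=32):
    """Eq. (1): N*b / (sum_i (|C_i| + 1)*b_i + k*b).

    cluster_sizes: |C_i| for each of the k clusters
    codeword_bits: b_i, codeword length (bits) for each cluster
    b: bits per original parameter (and per stored quantized value)
    """
    N = sum(cluster_sizes)
    k = len(cluster_sizes)
    denom = sum((c + 1) * bi for c, bi in zip(cluster_sizes, codeword_bits)) + k * b
    return N * b / denom

# fixed-length coding: every codeword has ceil(log2 k) bits, so the ratio
# depends only on k (given N and b)
k = 8
sizes = [1000] * k
print(compression_ratio(sizes, [math.ceil(math.log2(k))] * k))
```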
2.2 K-MEANS CLUSTERING
Provided network parameters $\{w_i\}_{i=1}^{N}$ to quantize, k-means clustering partitions them into $k$ disjoint sets (clusters), denoted by $C_1, C_2, \ldots, C_k$, while minimizing the mean square quantization error (MSQE) as follows:

$$\underset{C_1, C_2, \ldots, C_k}{\operatorname{argmin}} \sum_{i=1}^{k} \sum_{w \in C_i} |w - c_i|^2, \quad \text{where } c_i = \frac{1}{|C_i|} \sum_{w \in C_i} w. \qquad (2)$$
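As a reference point for the methods that follow, a minimal Lloyd's-algorithm solver for (2) over scalar weights might look as follows. This is a sketch under simplifying assumptions (random initialization from the data, a fixed iteration count instead of a convergence test).

```python
import numpy as np

def kmeans_quantize(w, k, iters=50, seed=0):
    """Plain k-means over scalar weights, minimizing the MSQE in Eq. (2)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)  # initialize from data
    for _ in range(iters):
        # assignment step: nearest cluster center
        assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        # update step: non-weighted mean of each cluster
        for j in range(k):
            if np.any(assign == j):
                centers[j] = w[assign == j].mean()
    return centers[assign], assign, centers
```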
We observe two issues with employing k-means clustering for network quantization.
• First, although k-means clustering minimizes the MSQE, it does not imply that k-means clustering minimizes the performance loss due to quantization as well in neural networks. K-means clustering treats quantization errors from all network parameters with equal importance. However, quantization errors from some network parameters may degrade the performance more significantly than the others. Thus, for minimizing the loss due to quantization in neural networks, one needs to take this dissimilarity into account.

• Second, k-means clustering does not consider any compression ratio constraint. It simply minimizes its distortion measure for a given number of clusters, i.e., for k clusters. This is however suboptimal when variable-length coding follows, since the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the codeword lengths assigned to them, which are determined by the binary coding scheme employed after clustering. Therefore, for the optimization of network quantization given a compression ratio constraint, one needs to take the impact of binary coding into account, i.e., we need to solve the quantization problem under the actual compression ratio constraint imposed by the specific binary coding scheme employed after clustering.
# 3 HESSIAN-WEIGHTED NETWORK QUANTIZATION
In this section, we analyze the impact of quantization errors on the neural network loss function and derive that the Hessian-weighted distortion measure is a relevant objective function for network quantization in order to minimize the quantization loss locally. Moreover, from this analysis, we pro- pose Hessian-weighted k-means clustering for network quantization to minimize the performance loss due to quantization in neural networks.
3.1 NETWORK MODEL
We consider a general non-linear neural network that yields output $y = f(x; w)$ from input $x$, where $w = [w_1 \cdots w_N]^T$ is the vector consisting of all trainable network parameters in the network; $N$ is the total number of trainable parameters in the network. A loss function $\text{loss}(y, \hat{y})$ is defined as the objective function that we aim to minimize in average, where $\hat{y} = \hat{y}(x)$ is the expected (ground-truth) output for input $x$. Cross entropy or mean square error are typical examples of a loss function. Given a training data set $\mathcal{X}_{\text{train}}$, we optimize network parameters by solving the following problem, e.g., approximately by using a stochastic gradient descent (SGD) method with mini-batches:

$$\hat{w} = \underset{w}{\operatorname{argmin}}\; L(\mathcal{X}_{\text{train}}; w), \quad \text{where } L(\mathcal{X}; w) = \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \text{loss}(f(x; w), \hat{y}(x)).$$
# 3.2 HESSIAN-WEIGHTED QUANTIZATION ERROR
The average loss function L(X ; w) can be expanded by Taylor series with respect to w as follows:
$$\delta L(\mathcal{X}; w) = g(w)^T \delta w + \frac{1}{2} \delta w^T H(w) \delta w + O(\|\delta w\|^3), \qquad (3)$$

where

$$g(w) = \frac{\partial L(\mathcal{X}; w)}{\partial w}, \qquad H(w) = \frac{\partial^2 L(\mathcal{X}; w)}{\partial w^2};$$
the square matrix $H(w)$ consisting of second-order partial derivatives is called the Hessian matrix, or Hessian. Assume that the loss function has reached one of its local minima, at $w = \hat{w}$, after training. At local minima, gradients are all zero, i.e., we have $g(\hat{w}) = 0$, and thus the first term on the right-hand side of (3) can be neglected at $w = \hat{w}$. The third term on the right-hand side of (3) is also ignored under the assumption that the average loss function is approximately quadratic at the local minimum $w = \hat{w}$. Finally, for simplicity, we approximate the Hessian matrix as a diagonal matrix by setting its off-diagonal terms to zero. Then, it follows from (3) that
$$\delta L(\mathcal{X}; \hat{w}) \approx \frac{1}{2} \sum_{i=1}^{N} h_{ii}(\hat{w}) |\delta \hat{w}_i|^2, \qquad (4)$$

where $h_{ii}(\hat{w})$ is the second-order partial derivative of the average loss function with respect to $w_i$ evaluated at $w = \hat{w}$, which is the $i$-th diagonal element of the Hessian matrix $H(\hat{w})$.
Now, we connect (4) with the problem of network quantization by treating $\delta \hat{w}_i$ as the quantization error of network parameter $w_i$ at its local optimum $w_i = \hat{w}_i$, i.e.,

$$\delta \hat{w}_i = \bar{w}_i - \hat{w}_i, \qquad (5)$$
where $\bar{w}_i$ is a quantized value of $\hat{w}_i$. Finally, combining (4) and (5), we derive that the local impact of quantization on the average loss function at $w = \hat{w}$ can be quantified approximately as follows:

$$\delta L(\mathcal{X}; \hat{w}) \approx \frac{1}{2} \sum_{i=1}^{N} h_{ii}(\hat{w}) |\hat{w}_i - \bar{w}_i|^2. \qquad (6)$$

At a local minimum, the diagonal elements of the Hessian, i.e., the $h_{ii}(\hat{w})$'s, are all non-negative and thus the summation in (6) is always additive, implying that the average loss function either increases or stays the same. Therefore, the performance degradation due to quantization of a neural network can be measured approximately by the Hessian-weighted distortion as shown in (6). Further discussion on the Hessian-weighted distortion measure can be found in Appendix A.1.
# 3.3 HESSIAN-WEIGHTED K-MEANS CLUSTERING
For notational simplicity, we use $w_i \equiv \hat{w}_i$ and $h_{ii} \equiv h_{ii}(\hat{w})$ from now on. The optimal clustering that minimizes the Hessian-weighted distortion measure is given by

$$\underset{C_1, C_2, \ldots, C_k}{\operatorname{argmin}} \sum_{j=1}^{k} \sum_{w_i \in C_j} h_{ii} |w_i - c_j|^2, \quad \text{where } c_j = \frac{\sum_{w_i \in C_j} h_{ii} w_i}{\sum_{w_i \in C_j} h_{ii}}. \qquad (7)$$

We call this Hessian-weighted k-means clustering. Observe in (7) that we give a larger penalty to a network parameter in defining the distortion measure for clustering when its second-order partial derivative is larger, in order to avoid a large deviation from its original value, since the impact on the loss function due to quantization is expected to be larger for that parameter.
Hessian-weighted k-means clustering is locally optimal in minimizing the quantization loss when fixed-length binary coding follows, where the compression ratio solely depends on the number of clusters as shown in Section 2.1. Similar to the conventional k-means clustering, solving this optimization is not easy, but Lloyd's algorithm is still applicable as an efficient heuristic solution for this problem if Hessian-weighted means are used as cluster centers instead of non-weighted regular means.
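A sketch of this Lloyd-style variant follows. Note that since $h_{ii} \ge 0$, the assignment step coincides with the plain nearest-center rule; only the center update changes. As before, initialization and stopping are simplified.

```python
import numpy as np

def hessian_weighted_kmeans(w, h, k, iters=50, seed=0):
    """Lloyd-style heuristic for Eq. (7); w and h are 1-D arrays of the
    network parameters and the diagonal Hessian entries h_ii."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        # assignment: minimize h_ii * |w_i - c_j|^2 over j (same argmin as
        # plain k-means because h_ii >= 0)
        assign = np.argmin(h[:, None] * (w[:, None] - centers[None, :]) ** 2, axis=1)
        # update: Hessian-weighted means as cluster centers
        for j in range(k):
            m = assign == j
            if h[m].sum() > 0:
                centers[j] = (h[m] * w[m]).sum() / h[m].sum()
    return centers[assign], assign, centers
```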
3.4 HESSIAN COMPUTATION
For obtaining Hessian, one needs to evaluate the second-order partial derivative of the average loss function with respect to each of network parameters, i.e., we need to calculate
$$h_{ii}(\hat{w}) = \frac{\partial^2 L(\mathcal{X}; w)}{\partial w_i^2} \bigg|_{w=\hat{w}} = \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \frac{\partial^2}{\partial w_i^2} \text{loss}(f(x; w), \hat{y}(x)) \bigg|_{w=\hat{w}}. \qquad (8)$$

Recall that we are interested in only the diagonal elements of the Hessian. An efficient way of computing the diagonal of the Hessian is presented in Le Cun (1987); Becker & Le Cun (1988), and it is based on a back propagation method similar to the back propagation algorithm used for computing first-order partial derivatives (gradients). That is, computing the diagonal of the Hessian is of the same order of complexity as computing gradients.
Hessian computation and our network quantization are performed after completing network training. For the data set $\mathcal{X}$ used to compute the Hessian in (8), we can either reuse a training data set or use some other data set, e.g., a validation data set. We observed from our experiments that even using a small subset of the training or validation data set is sufficient to yield a good approximation of the Hessian for network quantization.
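The efficient back-propagation computation of Le Cun (1987) is framework-specific; as a framework-free illustration only, the diagonal entries can also be estimated by central finite differences. This substitute is ours, not the paper's method, and it costs O(N) loss evaluations, so it is practical only for tiny models.

```python
import numpy as np

def diag_hessian_fd(loss_fn, w, eps=1e-3):
    """h_ii ~ (L(w + eps*e_i) - 2*L(w) + L(w - eps*e_i)) / eps^2."""
    w = np.asarray(w, dtype=float)
    L0 = loss_fn(w)
    h = np.empty_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        h[i] = (loss_fn(w + e) - 2.0 * L0 + loss_fn(w - e)) / eps ** 2
    return h
```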
3.5 ALTERNATIVE OF HESSIAN
Although there is an efï¬cient way to obtain the diagonal of Hessian as discussed in the previous sub- section, Hessian computation is not free. In order to avoid this additional Hessian computation, we propose to use an alternative metric instead of Hessian. In particular, we consider neural networks trained with the Adam SGD optimizer (Kingma & Ba, 2014) and propose to use some function (e.g., square root) of the second moment estimates of gradients as an alternative of Hessian.
The Adam algorithm computes adaptive learning rates for individual network parameters from the ï¬rst and second moment estimates of gradients. We compare the Adam method to Newtonâs op- timization method using Hessian and notice that the second moment estimates of gradients in the Adam method act like the Hessian in Newtonâs method. This observation leads us to use some func- tion (e.g., square root) of the second moment estimates of gradients as an alternative of Hessian.
The advantage of using the second moment estimates from the Adam method is that they are com- puted while training and we can obtain them at the end of training at no additional cost. It makes Hessian-weighting more feasible for deep neural networks, which have millions of parameters. We note that similar quantities can be found and used for other SGD optimization methods using adaptive learning rates, e.g., AdaGrad (Duchi et al., 2011), Adadelta (Zeiler, 2012) and RMSProp (Tieleman & Hinton, 2012).
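As a sketch of the bookkeeping involved: Adam's second-moment update is $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$; the bias correction below follows Kingma & Ba (2014), and taking the square root at the end mirrors the alternative proposed above. The function below is our own illustration of accumulating this quantity from a stream of gradient arrays.

```python
import numpy as np

def adam_second_moment_proxy(grads, beta2=0.999):
    """Accumulate Adam's second moment estimate over a stream of gradient
    arrays and return sqrt(v_hat) as the Hessian alternative."""
    v, t = None, 0
    for g in grads:
        t += 1
        v = (1 - beta2) * g ** 2 if v is None else beta2 * v + (1 - beta2) * g ** 2
    return np.sqrt(v / (1 - beta2 ** t))  # bias-corrected, then square root
```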
3.6 QUANTIZATION OF ALL LAYERS
We propose quantizing the network parameters of all layers in a neural network together at once by taking Hessian-weight into account. Layer-by-layer quantization was examined in the previous work (Gong et al., 2014; Han et al., 2015a). However, e.g., in Han et al. (2015a), a larger number of bits (a larger number of clusters) are assigned to convolutional layers than fully-connected layers, which implies that they heuristically treat convolutional layers more importantly. This follows from the fact that the impact of quantization errors on the performance varies signiï¬cantly across layers; some layers, e.g., convolutional layers, may be more important than the others. This concern is exactly what we can address by Hessian-weighting.
Hessian-weighting properly handles the different impact of quantization errors not only within layers but also across layers and thus it can be employed for quantizing all layers of a network together. The impact of quantization errors may vary more substantially across layers than within layers. Thus, Hessian-weighting may show more beneï¬t in deeper neural networks. We note that Hessian- weighting can still provide gain even for layer-by-layer quantization since it can address the different impact of the quantization errors of network parameters within each layer as well.
Recent neural networks are getting deeper, e.g., see Szegedy et al. (2015a;b); He et al. (2015). For such deep neural networks, quantizing network parameters of all layers together is even more advan- tageous since we can avoid layer-by-layer compression rate optimization. Optimizing compression
ratios jointly across all individual layers (to maximize the overall compression ratio for a network) requires exponential time complexity with respect to the number of layers. This is because the total number of possible combinations of compression ratios for individual layers increases exponentially as the number of layers increases.
# 4 ENTROPY-CONSTRAINED NETWORK QUANTIZATION
In this section, we investigate how to solve the network quantization problem under a constraint on the compression ratio. In designing network quantization schemes, we not only want to minimize the performance loss but also want to maximize the compression ratio. In Section 3, we explored how to quantify and minimize the loss due to quantization. In this section, we investigate how to take the compression ratio into account properly in the optimization of network quantization.
4.1 ENTROPY CODING
After quantizing network parameters by clustering, lossless data compression by variable-length binary coding can follow for compressing the quantized values. There is a set of optimal codes that achieve the minimum average codeword length for a given source. Entropy is the theoretical limit of the average codeword length per symbol that we can achieve by lossless data compression, as proved by Shannon (see, e.g., Cover & Thomas (2012, Section 5.3)). It is known that optimal codes achieve this limit with an overhead of less than 1 bit when only integer-length codewords are allowed, so optimal coding is also called entropy coding. Huffman coding is one of the entropy coding schemes commonly used when the source distribution is provided (see, e.g., Cover & Thomas (2012, Section 5.6)), or can be estimated.
4.2 ENTROPY-CONSTRAINED SCALAR QUANTIZATION (ECSQ)
Considering a compression ratio constraint in network quantization, we need to solve the clustering problem in (2) or (7) under the compression ratio constraint given by
$$\text{Compression ratio} = \frac{b}{\bar{b} + \left( \sum_{i=1}^{k} b_i + kb \right)/N} > C, \quad \text{where } \bar{b} = \frac{1}{N} \sum_{i=1}^{k} |C_i| b_i, \qquad (9)$$
which follows from (1). This optimization problem is too complex to solve for any arbitrary variable-length binary code since the average codeword length $\bar{b}$ can be arbitrary. However, we identify that it can be simplified if optimal codes, e.g., Huffman codes, are assumed to be used. In particular, optimal coding closely achieves the lower limit of the average source code length, i.e., entropy, and then we approximately have
$$\bar{b} \approx H = -\sum_{i=1}^{k} p_i \log_2 p_i, \qquad (10)$$

where $H$ is the entropy of the quantized network parameters after clustering (i.e., the source), given that $p_i = |C_i|/N$ is the ratio of the number of network parameters in cluster $C_i$ to the number of all network parameters (i.e., the source distribution). Moreover, assuming that $N \gg k$, we have
$$\frac{1}{N} \left( \sum_{i=1}^{k} b_i + kb \right) \approx 0 \qquad (11)$$
in (9). From (10) and (11), the constraint in (9) can be altered to an entropy constraint given by

$$H = -\sum_{i=1}^{k} p_i \log_2 p_i < R,$$
where $R \approx b/C$. In summary, assuming that optimal coding is employed after clustering, one can approximately replace a compression ratio constraint with an entropy constraint for the clustering output. The network quantization problem is then translated into a quantization problem with an entropy constraint, which is called entropy-constrained scalar quantization (ECSQ) in information theory. Two efficient heuristic solutions for ECSQ are proposed for network quantization in the following subsections, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm for k-means clustering.
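Checking the resulting entropy constraint against a target compression ratio is then a one-liner over the cluster assignment. The following sketch treats $H$ as the average codeword length under optimal coding, as derived above.

```python
import numpy as np

def assignment_entropy(assign, k):
    """H = -sum_i p_i log2 p_i for p_i = |C_i| / N."""
    p = np.bincount(assign, minlength=k) / len(assign)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# constraint check for a target compression ratio C with b-bit parameters:
# satisfied = assignment_entropy(assign, k) < b / C
```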
4.3 UNIFORM QUANTIZATION
It is shown in Gish & Pierce (1968) that the uniform quantizer is asymptotically optimal in minimizing the mean square quantization error for any random source with a reasonably smooth density function as the resolution becomes infinite, i.e., as the number of clusters $k \to \infty$. This asymptotic result leads us to a very simple but efficient network quantization scheme as follows:

1. We first set uniformly spaced thresholds and divide the network parameters into clusters.
2. After determining the clusters, their quantized values (cluster centers) are obtained by taking the mean of the network parameters in each cluster.
Note that one can use the Hessian-weighted mean instead of the non-weighted mean in computing cluster centers in the second step above in order to take the benefit of Hessian-weighting; a sketch appears below. A performance comparison of uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean can be found in Appendix A.2.
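The sketch below implements this two-step scheme with an optional Hessian-weighted mean in the second step. The bin placement over [min w, max w] is one simple choice of ours; the paper does not prescribe how the uniform thresholds are placed.

```python
import numpy as np

def uniform_quantize(w, k, h=None):
    """Uniformly spaced thresholds; cluster centers are (optionally
    Hessian-weighted) means of the parameters falling in each bin."""
    edges = np.linspace(w.min(), w.max(), k + 1)
    assign = np.digitize(w, edges[1:-1])  # bin index in 0..k-1
    weight = np.ones_like(w) if h is None else h
    centers = np.zeros(k)
    for j in range(k):
        m = assign == j
        if weight[m].sum() > 0:
            centers[j] = (weight[m] * w[m]).sum() / weight[m].sum()
    return centers[assign], assign
```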
Although uniform quantization is a straightforward method, it has never been shown before in the literature that it is actually one of the most efficient quantization schemes for neural networks when optimal variable-length coding, e.g., Huffman coding, follows. We note that uniform quantization is not always good; it is inefficient for fixed-length coding, which is also first shown in this paper.
4.4 ITERATIVE ALGORITHM TO SOLVE ECSQ
Another scheme proposed to solve the ECSQ problem for network quantization is an iterative algorithm, similar to Lloyd's algorithm for k-means clustering. Although this iterative solution is more complicated than the uniform quantization in Section 4.3, it finds a local optimum for a given discrete source. An iterative algorithm to solve the general ECSQ problem is provided in Chou et al. (1989). We derive a similar iterative algorithm to solve the ECSQ problem for network quantization. The main difference from the method in Chou et al. (1989) is that we minimize the Hessian-weighted distortion measure instead of the non-weighted regular distortion measure for optimal quantization. The detailed algorithm and further discussion can be found in Appendix A.3.
# 5 EXPERIMENTS
This section presents our experiment results for the proposed network quantization schemes in three exemplary convolutional neural networks: (a) LeNet (LeCun et al., 1998) for the MNIST data set, (b) ResNet (He et al., 2015) for the CIFAR-10 data set, and (c) AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set. Our experiments can be summarized as follows:
• We employ the proposed network quantization methods to quantize all of the network parameters in a network together at once, as discussed in Section 3.6.

• We evaluate the performance of the proposed network quantization methods with and without network pruning. For a pruned model, we need to store not only the values of unpruned parameters but also their respective indexes (locations) in the original model. For the index information, we compute index differences between unpruned network parameters in the original model and further compress them by Huffman coding as in Han et al. (2015a).

• For Hessian computation, 50,000 samples of the training set are reused. We also evaluate the performance when Hessian is computed with 1,000 samples only.

• Finally, we evaluate the performance of our network quantization schemes using Hessian when its alternative is used instead, as discussed in Section 3.5. To this end, we retrain the considered neural networks with the Adam SGD optimizer and obtain the second moment estimates of gradients at the end of training. Then, we use the square roots of the second moment estimates instead of Hessian and evaluate the performance.
# 5.1 EXPERIMENT MODELS
First, we evaluate our network quantization schemes for the MNIST data set with a simplified version of LeNet5 (LeCun et al., 1998), consisting of two convolutional layers and two fully-connected
[Figure 1: four panels plotting accuracy (%) against (average) codeword length per network parameter (bits) for k-means, Hessian-weighted k-means, uniform quantization, and iterative ECSQ: (a) fixed-length coding, (b) fixed-length coding + fine-tuning, (c) Huffman coding, (d) Huffman coding + fine-tuning.]

Figure 1: Accuracy versus average codeword length per network parameter after network quantization for 32-layer ResNet.
layers followed by a soft-max layer. It has a total of 431,080 parameters and achieves 99.25% accuracy. For a pruned model, we prune 91% of the original network parameters and fine-tune the rest.
Second, we test our network quantization schemes on the CIFAR-10 data set (Krizhevsky, 2009) with a pre-trained 32-layer ResNet (He et al., 2015). The 32-layer ResNet consists of 464,154 parameters in total and achieves 92.58% accuracy. For a pruned model, we prune 80% of the original network parameters and fine-tune the rest.
Third, we evaluate our network quantization schemes with AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set (Russakovsky et al., 2015). We obtain a pre-trained AlexNet Caffe model, which achieves 57.16% top-1 accuracy. For a pruned model, we prune 89% of the parameters and fine-tune the rest. In fine-tuning, the Adam SGD optimizer is used in order to avoid the computation of Hessian by utilizing its alternative (see Section 3.5). However, the pruned model does not recover the original accuracy after fine-tuning with the Adam method; the top-1 accuracy recovered after pruning and fine-tuning is 56.00%. We are able to find a better pruned model achieving the original accuracy by pruning and retraining iteratively (Han et al., 2015b), which is however not used here.
5.2 EXPERIMENT RESULTS
We first present the quantization results without pruning for 32-layer ResNet in Figure 1, where the accuracy of 32-layer ResNet is plotted against the average codeword length per network parameter after quantization. When fixed-length coding is employed, the proposed Hessian-weighted k-means clustering method performs the best, as expected. Observe that Hessian-weighted k-means clustering yields better accuracy than the others even after fine-tuning. On the other hand, when Huffman coding is employed, uniform quantization and the iterative algorithm for ECSQ outperform Hessian-weighted k-means clustering and k-means clustering. However, these two ECSQ solutions underperform Hessian-weighted k-means clustering and even k-means clustering when fixed-length coding is employed, since they are optimized for optimal variable-length coding.
[Figure 2: two panels plotting accuracy (%) against average codeword length per network parameter (bits) for k-means, Hessian-weighted k-means (computed with 50,000 and with 1,000 samples), and Alt-Hessian-weighted k-means: (a) LeNet, (b) ResNet.]

Figure 2: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for LeNet and 32-layer ResNet when Hessian is computed with 50,000 or 1,000 samples and when the square roots of the second moment estimates of gradients are used instead of Hessian as an alternative.
Figure 2 shows the performance of Hessian-weighted k-means clustering when Hessian is computed with a small number of samples (1,000 samples). Observe that even using the Hessian computed with a small number of samples yields almost the same performance. We also show the performance of Hessian-weighted k-means clustering when an alternative of Hessian is used instead of Hessian as explained in Section 3.5. In particular, the square roots of the second moment estimates of gradients are used instead of Hessian, and using this alternative provides similar performance to using Hessian.
In Table 1, we summarize the compression ratios that we can achieve with different network quantization methods for pruned models. The original network parameters are 32-bit float numbers. Using the simple uniform quantization followed by Huffman coding, we achieve compression ratios of 51.25, 22.17 and 40.65 (i.e., the compressed model sizes are 1.95%, 4.51% and 2.46% of the original model sizes) for LeNet, 32-layer ResNet and AlexNet, respectively, at no or marginal performance loss. Observe that the loss in the compressed AlexNet is mainly due to pruning. Here, we also compare our network quantization results to the ones in Han et al. (2015a). Note that layer-by-layer quantization with k-means clustering is evaluated in Han et al. (2015a) while our quantization schemes including k-means clustering are employed to quantize network parameters of all layers together at once (see Section 3.6).
# 6 CONCLUSION
This paper investigates the quantization problem of network parameters in deep neural networks. We identify the suboptimality of the conventional quantization method using k-means clustering and newly design network quantization schemes so that they can minimize the performance loss due to quantization given a compression ratio constraint. In particular, we analytically show that the Hessian can be used as a measure of the importance of network parameters and propose to minimize Hessian-weighted quantization errors on average for clustering network parameters to quantize. Hessian-weighting is beneficial in quantizing all of the network parameters together at once since it can handle the different impact of quantization errors properly not only within layers but also across layers. Furthermore, we make a connection from the network quantization problem to the entropy-constrained data compression problem in information theory and push the compression ratio to the limit that information theory provides. Two efficient heuristic solutions are presented to this end, i.e., uniform quantization and an iterative solution for ECSQ. Our experiment results show that the proposed network quantization schemes provide considerable gain over the conventional method using k-means clustering, in particular for large and deep neural networks.
# REFERENCES
Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional neural networks for object recognition. In IEEE International Conference on Acoustics, Speech
Table 1: Summary of network quantization results with Huffman coding for pruned models.
Model    Method                                                    Accuracy %   Compression ratio
LeNet    Original model                                            99.25        -
         Pruned model                                              99.27        10.13
         Pruning + quantization of all layers + Huffman coding:
           k-means                                                 99.27        44.58
           Hessian-weighted k-means                                99.27        47.16
           Uniform quantization                                    99.28        51.25
           Iterative ECSQ                                          99.27        49.01
         Deep compression (Han et al., 2015a)                      99.26        39.00
ResNet   Original model                                            92.58        -
         Pruned model                                              92.58        4.52
         Pruning + quantization of all layers + Huffman coding:
           k-means                                                 92.64        18.25
           Hessian-weighted k-means                                92.67        20.51
           Uniform quantization                                    92.68        22.17
           Iterative ECSQ                                          92.73        21.01
         Deep compression (Han et al., 2015a)                      N/A          N/A
AlexNet  Original model                                            57.16        -
         Pruned model                                              56.00        7.91
         Pruning + quantization of all layers + Huffman coding:
           k-means                                                 56.12        30.53
           Alt-Hessian-weighted k-means                            56.04        33.71
           Uniform quantization                                    56.20        40.65
         Deep compression (Han et al., 2015a)                      57.22        35.00
and Signal Processing, pp. 1131–1135, 2015.

Sue Becker and Yann Le Cun. Improving the convergence of back-propagation learning with second order methods. In Proceedings of the Connectionist Models Summer School, pp. 29–37. San Mateo, CA: Morgan Kaufmann, 1988.

Philip A Chou, Tom Lookabaugh, and Robert M Gray. Entropy-constrained vector quantization. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(1):31–42, 1989.

Matthieu Courbariaux, Jean-Pierre David, and Yoshua Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123–3131, 2015.

Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.

Herbert Gish and John Pierce. Asymptotically efficient quantizing. IEEE Transactions on Information Theory, 14(5):676–683, 1968.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1737–1746, 2015.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015b.

Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164–171, 1993.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference, 2014.

Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. arXiv preprint arXiv:1511.06530, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Yann Le Cun. Modèles connexionnistes de l'apprentissage. PhD thesis, Paris 6, 1987.

Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554–2564, 2016.

Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553, 2014.

Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598–605, 1989.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

Darryl D Lin, Sachin S Talathi, and V Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. arXiv preprint arXiv:1511.06393, 2015a.

Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015b.

Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 806–814, 2015.

Michael C Mozer and Paul Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Advances in Neural Information Processing Systems, pp. 107–115, 1989.

Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pp. 442–450, 2015.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6655–6659, 2013.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015a.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015b.

Cheng Tai, Tong Xiao, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.

Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on CPUs. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.

Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074–2082, 2016.

Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365–2369, 2013.

Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1476–1483, 2015.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
# A APPENDIX
A.1 FURTHER DISCUSSION ON THE HESSIAN-WEIGHTED QUANTIZATION ERROR
The diagonal approximation for Hessian simpliï¬es the optimization problem as well as its solution for network quantization. This simpliï¬cation comes with some performance loss. We conjecture that the loss due to this approximation is small. The reason is that the contributions from off-diagonal terms are not always additive and their summation may end up with a small value. However, diagonal terms are all non-negative and therefore their contributions are always additive. We do not verify this conjecture in this paper since solving the problem without diagonal approximation is too complex; we even need to compute the whole Hessian matrix, which is also too costly.
Observe that the relation of the Hessian-weighted distortion measure to the quantization loss holds for any model for which the objective function can be approximated as a quadratic function with respect to the parameters to quantize in the model. Hence, the quantization methods proposed in this paper to minimize the Hessian-weighted distortion measure are not speciï¬c to neural networks but are generally applicable to quantization of parameters of any model whose objective function is locally quadratic with respect to its parameters approximately.
Finally, we do not consider the interactions between quantization and retraining in our formulation in Section 3.2. We analyze the expected loss due to quantization assuming no further retraining and focus on ï¬nding optimal network quantization schemes that minimize the performance loss. In our experiments, however, we further ï¬ne-tune the quantized values (cluster centers) so that we can recover the loss due to quantization and improve the performance.
A.2 EXPERIMENT RESULTS FOR UNIFORM QUANTIZATION
We compare uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean in Figure 3, which shows that uniform quantization with Hessian-weighted mean slightly outperforms uniform quantization with non-weighted mean.
[Figure 3: two panels plotting accuracy (%) against average codeword length (bits) for uniform quantization with non-weighted mean versus uniform quantization with Hessian-weighted mean: (a) Huffman coding, (b) Huffman coding + fine-tuning.]

Figure 3: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for 32-layer ResNet when uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean are used.
# A.3 FURTHER DISCUSSION ON THE ITERATIVE ALGORITHM FOR ECSQ
In order to solve the ECSQ problem for network quantization, we define a Lagrangian cost function:

$$J_\lambda(C_1, C_2, \ldots, C_k) = D + \lambda H = \frac{1}{N} \sum_{j=1}^{k} \sum_{w_i \in C_j} \underbrace{\left( h_{ii} |w_i - c_j|^2 - \lambda \log_2 p_j \right)}_{=d_\lambda(i,j)}, \qquad (12)$$

where

$$D = \frac{1}{N} \sum_{j=1}^{k} \sum_{w_i \in C_j} h_{ii} |w_i - c_j|^2, \qquad H = -\sum_{j=1}^{k} p_j \log_2 p_j.$$
Algorithm 1 Iterative solution for entropy-constrained network quantization

Initialization: n ← 0
    Initialize the centers of the k clusters: c(0)_1, . . . , c(0)_k
    Initialize the proportions of the k clusters (set all of them to be the same initially): p(0)_1, . . . , p(0)_k
repeat
    Assignment:
    for all network parameters i = 1 → N do
        Assign w_i to the cluster l that minimizes the individual Lagrangian cost, i.e.,
            C(n+1)_l ← C(n+1)_l ∪ {w_i} for l = argmin_j { h_ii |w_i − c(n)_j|² − λ log₂ p(n)_j }
    end for
    Update:
    for all clusters j = 1 → k do
        Update the cluster center and the proportion of cluster j:
            c(n+1)_j ← ( Σ_{w_i ∈ C(n+1)_j} h_ii w_i ) / ( Σ_{w_i ∈ C(n+1)_j} h_ii ) and p(n+1)_j ← |C(n+1)_j| / N
    end for
    n ← n + 1
until the Lagrangian cost function J_λ decreases by less than some threshold
The entropy-constrained network quantization problem is then reduced to finding the k partitions (clusters) $C_1, C_2, \ldots, C_k$ that minimize the Lagrangian cost function as follows:

$$\underset{C_1, C_2, \ldots, C_k}{\operatorname{argmin}}\; J_\lambda(C_1, C_2, \ldots, C_k).$$
A heuristic iterative algorithm that solves this Lagrangian formulation for network quantization is presented in Algorithm 1. It is similar to Lloyd's algorithm for k-means clustering. The key difference is how network parameters are partitioned at the assignment step. In Lloyd's algorithm, the Euclidean distance (quantization error) is minimized. For ECSQ, the individual Lagrangian cost function, i.e., $d_\lambda(i, j)$ in (12), is minimized instead, which includes both the quantization error and the expected codeword length after entropy coding.
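A compact sketch of Algorithm 1, vectorized over parameters, follows. The clamp on p_j is our own guard against empty clusters, and initialization and stopping are simplified as in the earlier sketches.

```python
import numpy as np

def ecsq_quantize(w, h, k, lam, iters=50, seed=0):
    """Iterative ECSQ: assign by d_lambda(i, j) = h_ii*|w_i - c_j|^2
    - lam*log2(p_j); update Hessian-weighted centers and proportions."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)
    p = np.full(k, 1.0 / k)
    for _ in range(iters):
        cost = (h[:, None] * (w[:, None] - centers[None, :]) ** 2
                - lam * np.log2(np.maximum(p, 1e-12))[None, :])
        assign = np.argmin(cost, axis=1)
        for j in range(k):
            m = assign == j
            p[j] = m.mean()
            if h[m].sum() > 0:
                centers[j] = (h[m] * w[m]).sum() / h[m].sum()
    return centers[assign], assign, p
```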
| {
"id": "1510.03009"
} |
1612.01064 | Trained Ternary Quantization | Deep neural networks are widely used in machine learning applications.
However, large neural network models can be difficult to
deploy on mobile devices with limited power budgets.
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. And our AlexNet
model is trained from scratch, which means it's as easy as to train normal full
precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuit. Experiments on CIFAR-10 show that the ternary models
obtained by trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%. | http://arxiv.org/pdf/1612.01064 | Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally | cs.LG | Accepted for Poster Presentation on ICLR 2017 | null | cs.LG | 20161204 | 20170223 |
Published as a conference paper at ICLR 2017
# TRAINED TERNARY QUANTIZATION
Chenzhuo Zhu∗ Tsinghua University zhucz13@mails.tsinghua.edu.cn
Song Han Stanford University songhan@stanford.edu
Huizi Mao Stanford University huizi@stanford.edu
William J. Dally Stanford University NVIDIA dally@stanford.edu
# ABSTRACT
Deep neural networks are widely used in machine learning applications. However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values. This method has very little accuracy degradation and can even improve the accuracy of some models (32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Moreover, our AlexNet model is trained from scratch, which means it is as easy to train as a normal full-precision model. We highlight our trained quantization method that can learn both ternary values and ternary assignments. During inference, only ternary values (2-bit weights) and scaling factors are needed, therefore our models are nearly 16× smaller than full-precision models. Our ternary models can also be viewed as sparse binary weight networks, which can potentially be accelerated with custom circuits. Experiments on CIFAR-10 show that the ternary models obtained by the trained quantization method outperform full-precision models of ResNet-32, 44, 56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model outperforms the full-precision AlexNet model by 0.3% of Top-1 accuracy and outperforms previous ternary models by 3%.
# INTRODUCTION
Deep neural networks are becoming the preferred approach for many machine learning applications. However, as networks get deeper, deploying a network with a large number of parameters on a small device becomes increasingly difficult. Much work has been done to reduce the size of networks. Half-precision networks (Amodei et al., 2015) cut the sizes of neural networks in half. XNOR-Net (Rastegari et al., 2016), DoReFa-Net (Zhou et al., 2016) and network binarization (Courbariaux et al.; 2015; Lin et al., 2015) use aggressively quantized weights, activations and gradients to further reduce computation during training. While weight binarization benefits from a 32× smaller model size, the extreme compression rate comes with a loss of accuracy. Hubara et al. (2016) and Li & Liu (2016) propose ternary weight networks to trade off between model size and accuracy.
In this paper, we propose Trained Ternary Quantization, which uses two full-precision scaling coefficients $W^p_l$ and $W^n_l$ for each layer $l$, and quantizes the weights to $\{-W^n_l, 0, +W^p_l\}$ instead of the traditional $\{-1, 0, +1\}$ or $\{-E, 0, +E\}$, where $E$ is the mean of the absolute weight values, which is not learned. Our positive and negative weights have different absolute values $W^p_l$ and $W^n_l$ that are trainable parameters. We also maintain latent full-precision weights at training time, and discard them at test time. We back propagate the gradient to both $W^p_l$ and $W^n_l$ and to the latent full-precision weights. This makes it possible to adjust the ternary assignment (i.e., which of the three values a weight is assigned).
Our quantization method achieves higher accuracy on the CIFAR-10 and ImageNet datasets. For AlexNet on the ImageNet dataset, our method outperforms the previous state-of-the-art ternary network (Li &
∗Work done while at Stanford CVA lab.
Liu, 2016) by 3.0% of Top-1 accuracy and the full-precision model by 1.6%. By converting most of the parameters to 2-bit values, we also compress the network by about 16×. Moreover, the advantage of few multiplications still remains, because $W^p_l$ and $W^n_l$ are fixed for each layer during inference. On custom hardware, multiplications can be pre-computed on activations, so only two multiplications per activation are required.
# 2 MOTIVATIONS
Deep neural networks, once deployed to mobile devices, have the advantage of lower latency, no reliance on the network, and better user privacy. However, energy efficiency becomes the bottleneck for deploying deep neural networks on mobile devices because mobile devices are battery constrained. Current deep neural network models consist of hundreds of millions of parameters. Reducing the size of a DNN model makes the deployment on edge devices easier.
First, a smaller model means less overhead when exporting models to clients. Take autonomous driving for example; Tesla periodically copies new models from their servers to customers' cars. Smaller models require less communication in such over-the-air updates, making frequent updates more feasible. Another example is the Apple App Store; apps above 100 MB will not download until you connect to Wi-Fi. It is infeasible to put a large DNN model in an app. The second issue is energy consumption. Deep learning is energy consuming, which is problematic for battery-constrained mobile devices. As a result, iOS 10 requires the iPhone to be plugged into a charger while performing photo analysis. Fetching DNN models from memory takes more than two orders of magnitude more energy than arithmetic operations. Smaller neural networks require less memory bandwidth to fetch the model, saving energy and extending battery life. The third issue is area cost. When deploying DNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model can be stored directly on-chip, and smaller models enable a smaller ASIC die.
Several previous works aimed to improve energy and spatial efï¬ciency of deep networks. One common strategy proven useful is to quantize 32-bit weights to one or two bits, which greatly reduces model size and saves memory reference. However, experimental results show that compressed weights usually come with degraded performance, which is a great loss for some performance- sensitive applications. The contradiction between compression and performance motivates us to work on trained ternary quantization, minimizing performance degradation of deep neural networks while saving as much energy and space as possible.
# 3 RELATED WORK
3.1 BINARY NEURAL NETWORK (BNN)
Lin et al. (2015) proposed binary and ternary connections to compress neural networks and speed up computation during inference. They used similar probabilistic methods to convert 32-bit weights into binary values or ternary values, deï¬ned as:
$$w^b \sim \text{Bernoulli}\left(\frac{\tilde{w} + 1}{2}\right) \times 2 - 1, \qquad w^t \sim \text{Bernoulli}(|\tilde{w}|) \times \text{sign}(\tilde{w}) \qquad (1)$$

Here $w^b$ and $w^t$ denote binary and ternary weights after quantization. $\tilde{w}$ denotes the latent full-precision weight.
During back-propagation, as the above quantization equations are not differentiable, derivatives of expectations of the Bernoulli distribution are computed instead, yielding the identity function:
$$\frac{\partial L}{\partial \tilde{w}} = \frac{\partial L}{\partial w^b} = \frac{\partial L}{\partial w^t} \qquad (2)$$
Here L is the loss to optimize.
For BNNs with binary connections, only quantized binary values are needed for inference. Therefore a 32× smaller model can be deployed into applications.
3.2 DOREFA-NET
Zhou et al. (2016) proposed DoReFa-Net, which quantizes the weights, activations and gradients of neural networks using different bit widths. Therefore, with specifically designed low-bit multiplication algorithms or hardware, both the training and inference stages can be accelerated.

They also introduced a much simpler method to quantize 32-bit weights to binary values, defined as:
$w^b = E(|\tilde{w}|) \times \text{sign}(\tilde{w}) \quad (3)$
Here $E(|\tilde{w}|)$, the mean of the absolute values of the full-precision weights $\tilde{w}$, serves as a layer-wise scaling factor. During back-propagation, Equation 2 still applies.
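A minimal sketch of this deterministic binarization (our illustration, not the DoReFa-Net code):

```python
import numpy as np

def dorefa_binarize(w_latent):
    """Equation 3: w_b = E(|w~|) * sign(w~).

    The layer-wise scale is the mean absolute value of the latent
    full-precision weights; the straight-through estimator of
    Equation 2 is used for the backward pass.
    """
    scale = np.abs(w_latent).mean()
    return scale * np.sign(w_latent)
```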
3.3 TERNARY WEIGHT NETWORKS
Li & Liu (2016) proposed TWN (Ternary Weight Networks), which reduce the accuracy loss of binary networks by introducing zero as a third quantized value. They use two symmetric thresholds $\pm\Delta_l$ and a scaling factor $W_l$ for each layer $l$ to quantize weights into $\{-W_l, 0, +W_l\}$:
$w^t_l = \begin{cases} W_l & : \tilde{w}_l > \Delta_l \\ 0 & : |\tilde{w}_l| \le \Delta_l \\ -W_l & : \tilde{w}_l < -\Delta_l \end{cases} \quad (4)$
They then solve an optimization problem, minimizing the L2 distance between the full-precision and ternary weights, to obtain layer-wise values of $W_l$ and $\Delta_l$:
$\Delta_l = 0.7 \times E(|\tilde{w}_l|), \qquad W_l = \underset{i \in \{i \,:\, |\tilde{w}_l(i)| > \Delta_l\}}{E}\left(|\tilde{w}_l(i)|\right) \quad (5)$
And again Equation 2 is used to calculate gradients. While an additional bit is required for ternary weights, TWN achieves a validation accuracy that is very close to full precision networks according to their paper.
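For concreteness, a NumPy sketch of the TWN quantizer of Equations 4-5 (our illustration, not the authors' code):

```python
import numpy as np

def twn_quantize(w_latent):
    """Equations 4-5: threshold Delta_l = 0.7 * E(|w~|); the scale W_l is
    the mean |w~| over the weights whose magnitude exceeds the threshold.
    """
    delta = 0.7 * np.abs(w_latent).mean()
    pos = w_latent > delta
    neg = w_latent < -delta
    selected = np.abs(w_latent)[pos | neg]
    w_scale = selected.mean() if selected.size > 0 else 0.0
    return w_scale * (pos.astype(w_latent.dtype) - neg.astype(w_latent.dtype))
```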
3.4 DEEP COMPRESSION
Han et al. (2015) proposed Deep Compression, which prunes away trivial connections and reduces the precision of weights. Unlike the above models, which use zero or symmetric thresholds to quantize high-precision weights, Deep Compression uses clusters to categorize weights into groups. In Deep Compression, low-precision weights are fine-tuned from a pre-trained full-precision network; the cluster assignment of each weight is established at the beginning and stays unchanged, while the representative value of each cluster is updated throughout fine-tuning.
# 4 METHOD
Our method is illustrated in Figure 1. First, we normalize the full-precision weights to the range [-1, +1] by dividing each weight by the maximum weight. Next, we quantize the intermediate full-resolution weights to {-1, 0, +1} by thresholding. The threshold factor t is a hyper-parameter that is the same across all the layers in order to reduce the search space. Finally, we perform trained quantization by back-propagating two gradients, as shown in the dashed lines in Figure 1. We back-propagate gradient1 to the full-resolution weights and gradient2 to the scaling coefficients. The former enables learning the ternary assignments, and the latter enables learning the ternary values.
At inference time, we throw away the full-resolution weights and only use ternary weights.
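The forward pass of this procedure can be summarized in a short sketch (illustrative NumPy of our own; the per-layer scalars w_p and w_n stand in for $W^p_l$ and $W^n_l$, defined in Equation 6 below):

```python
import numpy as np

def ttq_quantize_forward(w_latent, w_p, w_n, t=0.05):
    """Forward quantization of Figure 1 / Equation 6 (a sketch).

    1. Normalize the latent weights to [-1, +1].
    2. Threshold against Delta = t * max(|w|) (after normalization, Delta = t).
    3. Assign the trained scales: +w_p, 0, or -w_n.
    Returns the ternary weights plus the masks reused in the backward pass.
    """
    w_norm = w_latent / np.abs(w_latent).max()
    delta = t * np.abs(w_norm).max()  # equals t after normalization
    pos = w_norm > delta
    neg = w_norm < -delta
    w_ternary = w_p * pos - w_n * neg
    return w_ternary, pos, neg
```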
4.1 LEARNING BOTH TERNARY VALUES AND TERNARY ASSIGNMENTS
During gradient descent we learn both the quantized ternary weights (the codebook), and choose which of these values is assigned to each weight (choosing the codebook index).
Figure 1: Overview of the trained ternary quantization procedure.
To learn the ternary values (codebook), we introduce two quantization factors $W^p_l$ and $W^n_l$ for positive and negative weights in each layer $l$. During feed-forward, the quantized ternary weights $w^t_l$ are calculated as:
$w^t_l = \begin{cases} W^p_l & : \tilde{w}_l > \Delta_l \\ 0 & : |\tilde{w}_l| \le \Delta_l \\ -W^n_l & : \tilde{w}_l < -\Delta_l \end{cases} \quad (6)$
Unlike previous work where quantized weights are calculated from 32-bit weights, the scaling coefficients $W^p_l$ and $W^n_l$ are two independent parameters and are trained together with the other parameters. Following the rule of gradient descent, the derivatives of $W^p_l$ and $W^n_l$ are calculated as:
$\frac{\partial L}{\partial W^p_l} = \sum_{i \in I^p_l} \frac{\partial L}{\partial w^t_l(i)}, \qquad \frac{\partial L}{\partial W^n_l} = \sum_{i \in I^n_l} \frac{\partial L}{\partial w^t_l(i)} \quad (7)$
Here $I^p_l = \{i \mid \tilde{w}_l(i) > \Delta_l\}$ and $I^n_l = \{i \mid \tilde{w}_l(i) < -\Delta_l\}$. Furthermore, because of the existence of the two scaling factors, gradients of the latent full-precision weights can no longer be calculated by Equation 2. We use scaled gradients for the 32-bit weights:
$\frac{\partial L}{\partial \tilde{w}_l} = \begin{cases} W^p_l \times \frac{\partial L}{\partial w^t_l} & : \tilde{w}_l > \Delta_l \\ 1 \times \frac{\partial L}{\partial w^t_l} & : |\tilde{w}_l| \le \Delta_l \\ W^n_l \times \frac{\partial L}{\partial w^t_l} & : \tilde{w}_l < -\Delta_l \end{cases} \quad (8)$
Note that we use the scalar 1 as the gradient factor for zero weights. The overall quantization process is illustrated in Figure 1. The evolution of the ternary weights from different layers during training is shown in Figure 2. We observe that as training proceeds, different layers behave differently: for the first quantized conv layer, the absolute values of $W^p_l$ and $W^n_l$ get smaller and sparsity gets lower, while for the last conv layer and the fully connected layer, the absolute values of $W^p_l$ and $W^n_l$ get larger and sparsity gets higher.
We learn the ternary assignments (index to the codebook) by updating the latent full-resolution weights during training. This may cause the assignments to change between iterations. Note that the thresholds are not constant, as the maximal absolute values change over time. Once an updated weight crosses the threshold, its ternary assignment changes. The benefits of using trained quantization factors are: i) the asymmetry of $W^p_l \neq W^n_l$ enables neural networks to have more model capacity; ii) quantized weights play the role of "learning rate multipliers" during back propagation.
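Putting Equations 7 and 8 together, the backward pass can be sketched as follows (our illustration, using the masks from the forward sketch above; note that since $w^t_l = -W^n_l$ on the negative index set, the chain rule contributes a minus sign to the $W^n_l$ gradient):

```python
import numpy as np

def ttq_backward(grad_wt, pos, neg, w_p, w_n):
    """Backward pass of trained ternary quantization (a sketch).

    grad_wt: dL/dw_t, the gradient w.r.t. the ternary weights.
    pos/neg: boolean masks from the forward pass.
    """
    zero = ~(pos | neg)
    # Equation 8: scaled straight-through gradient for the latent weights;
    # zero weights receive a plain factor of 1.
    grad_latent = grad_wt * (w_p * pos + 1.0 * zero + w_n * neg)
    # Equation 7: scale gradients accumulate over the index sets I_p, I_n.
    grad_w_p = grad_wt[pos].sum()
    grad_w_n = -grad_wt[neg].sum()  # minus sign from w_t = -W_n
    return grad_latent, grad_w_p, grad_w_n
```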
4.2 QUANTIZATION HEURISTIC
In previous work on ternary weight networks, Li & Liu (2016) proposed Ternary Weight Networks (TWN) using $\pm\Delta_l$ as thresholds to reduce 32-bit weights to ternary values, where $\pm\Delta_l$ is defined as in Equation 5. They optimized the value of $\pm\Delta_l$ by minimizing the expected L2 distance between the full-precision weights and the ternary weights. Instead of using a strictly optimized threshold, we adopt
[Figure 2 panels: res1.0/conv1, res3.2/conv2, and the linear layer; the top row plots the values of $W^n_l$ and $W^p_l$ and the bottom row the percentages of negative, zero, and positive weights over training epochs.]
Figure 2: Ternary weights value (above) and distribution (below) with iterations for different layers of ResNet-20 on CIFAR-10.
different heuristics: 1) use the maximum absolute value of the weights as a reference for the layer's threshold and maintain a constant factor t for all layers:
âl = t à max(| Ëw|) (9)
and 2) maintain a constant sparsity r for all layers throughout training. By adjusting the hyper-parameter r we are able to obtain ternary weight networks with various sparsities. We use the first method with t set to 0.05 in the experiments on the CIFAR-10 and ImageNet datasets, and use the second one to explore a wider range of sparsities in Section 6.1.1.
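The second heuristic amounts to choosing the threshold as a quantile of the weight magnitudes. A one-line sketch (our illustration):

```python
import numpy as np

def threshold_for_sparsity(w_latent, r):
    """Pick Delta_l so that a fraction r of this layer's weights fall inside
    [-Delta_l, +Delta_l] and are quantized to zero (target sparsity r)."""
    return np.quantile(np.abs(w_latent), r)
```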
# 5 EXPERIMENTS

We perform our experiments on CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2015). Our network is implemented in both the TensorFlow (Abadi et al., 2015) and Caffe (Jia et al., 2014) frameworks.

5.1 CIFAR-10
CIFAR-10 is an image classification benchmark containing 32×32 RGB images, with a training set of 50,000 and a test set of 10,000 images. The ResNet (He et al., 2015) architecture is used for our experiments.
We use parameters pre-trained from a full-precision ResNet to initialize our model. The learning rate is set to 0.1 at the beginning and scaled by 0.1 at epochs 80, 120 and 300. An L2 weight decay
[Figure 3 curves: Full precision, Binary weight (DoReFa-Net), and Ternary weight (Ours); validation error (%) over training epochs.]
Figure 3: ResNet-20 on CIFAR-10 with different weight precision.
of 0.0002 is used as a regularizer. Most of our models converge after 160 epochs. We take a moving average over the errors of all epochs to filter out fluctuations when reporting error rates.
We compare our model with a full-precision model and a binary-weight model. We train a full-precision ResNet (He et al., 2016) on CIFAR-10 as the baseline (blue line in Figure 3). We fine-tune the trained baseline network as a 1-32-32 DoReFa-Net, where weights are 1 bit and both activations and gradients are 32 bits, giving a significant loss of accuracy (green line). Finally, we fine-tune the baseline with trained ternary weights (red line). Our model has a substantial accuracy improvement over the binary-weight model, and our loss of accuracy relative to the full-precision model is small. We also compare our model to Ternary Weight Networks (TWN) on ResNet-20. The result shows our model improves the accuracy by ~0.25% on CIFAR-10.
We expand our experiments to ternarize ResNets with 32, 44 and 56 layers. All ternary models are fine-tuned from full-precision models. Our results show that we improve the accuracy of ResNet-32, ResNet-44 and ResNet-56 by 0.04%, 0.16% and 0.36%, respectively. The deeper the model, the larger the improvement. We conjecture that this is because ternary weights provide the right model capacity and prevent overfitting for deeper networks.
| Model | ResNet-20 | ResNet-32 | ResNet-44 | ResNet-56 |
| --- | --- | --- | --- | --- |
| Full precision | 8.23 | 7.67 | 7.18 | 6.80 |
| Ternary (Ours) | 8.87 | 7.63 | 7.02 | 6.44 |
| Improvement | -0.64 | 0.04 | 0.16 | 0.36 |

Table 1: Error rates (%) of full-precision and ternary ResNets on CIFAR-10
5.2 IMAGENET
We further train and evaluate our model on ILSVRC12 (Russakovsky et al., 2015). ILSVRC12 is a 1000-category dataset with over 1.2 million images in the training set and 50 thousand images in the validation set; images come in various resolutions. We use a variant of the AlexNet (Krizhevsky et al., 2012) architecture, removing the dropout layers and adding batch normalization (Ioffe & Szegedy, 2015), for all models in our experiments. The same variant is also used in the experiments described in the DoReFa-Net paper.
Our ternary model of AlexNet uses full-precision weights for the first convolution layer and the last fully-connected layer; the parameters of all other layers are quantized to ternary values. We train our model on ImageNet from scratch using the Adam optimizer (Kingma & Ba, 2014). The minibatch size is set to 128. The learning rate starts at $10^{-4}$ and is scaled by 0.2 at epochs 56 and 64. An L2 weight decay of $5 \times 10^{-6}$ is used as a regularizer. Images are first resized to 256×256 and then randomly cropped to 224×224 before input. We report both the Top-1 and Top-5 error rates on the validation set.
We compare our model to a full-precision baseline, a 1-32-32 DoReFa-Net, and TWN. After around 64 epochs, the validation error of our model dropped significantly compared to the other low-bit networks as well as the full-precision baseline. Finally, our model reaches a Top-1 error rate of 42.5%, while DoReFa-Net gets 46.1% and TWN gets 45.5%. Furthermore, our model still outperforms the full-precision AlexNet (the batch normalization version, 44.1% according to the DoReFa-Net paper) by 1.6%, and is even better than the best reported AlexNet results (42.8%1). The complete results are listed in Table 2.
| Error | Full precision | 1-bit (DoReFa) | 2-bit (TWN) | 2-bit (Ours) |
| --- | --- | --- | --- | --- |
| Top1 | 42.8% | 46.1% | 45.5% | 42.5% |
| Top5 | 19.7% | 23.7% | 23.2% | 20.3% |

Table 2: Top1 and Top5 error rates of AlexNet on ImageNet
1 https://github.com/BVLC/caffe/wiki/Models-accuracy-on-ImageNet-2012-val
[Figure 4 curves: DoReFa-Net, TWN, Ours, and the full-precision baseline (with Dropout, dashed); training and validation Top-1/Top-5 accuracy on ImageNet.]
Figure 4: Training and validation accuracy of AlexNet on ImageNet
We plot the training process in Figure 4; the baseline results of AlexNet are marked with dashed lines. Our ternary model effectively reduces the gap between training and validation performance, a gap which is quite large for DoReFa-Net and TWN. This indicates that adopting trainable $W^p_l$ and $W^n_l$ helps the network generalize better.
We also report the results of our method on ResNet-18B in Table 3. The full-precision error rates are obtained from Facebook's implementation. Here we cite Binarized Weight Network (BWN) (Rastegari et al., 2016) results with all layers quantized, and TWN fine-tuned from a full-precision network, while we train our TTQ model from scratch. Compared with BWN and TWN, our method obtains a substantial improvement.
| Error | Full precision | 1-bit (BWN) | 2-bit (TWN) | 2-bit (Ours) |
| --- | --- | --- | --- | --- |
| Top1 | 30.4% | 39.2% | 34.7% | 33.4% |
| Top5 | 10.8% | 17.0% | 13.8% | 12.8% |

Table 3: Top1 and Top5 error rates of ResNet-18 on ImageNet
# 6 DISCUSSION
In this section we analyze the performance of our model with regard to weight compression and inference speedup. These two goals are achieved through reducing bit precision and introducing sparsity. We also visualize the convolution kernels in quantized convolution layers and find that the basic patterns of edge/corner detectors are well learned from scratch even when precision is low.
6.1 SPATIAL AND ENERGY EFFICIENCY
We reduce model storage by 16× by using ternary weights. Although switching from a binary-weight network to a ternary-weight network increases the bits per weight, it brings sparsity to the weights, which gives the potential to skip the computation on zero weights and achieve higher energy efficiency.
6.1.1 TRADE-OFF BETWEEN SPARSITY AND ACCURACY
Figure 5 shows the relationship between sparsity and accuracy. As the sparsity of weights grows from 0 (a pure binary-weight network) to 0.5 (a ternary network with 50% zeros), both the training and validation error decrease. Increasing sparsity beyond 50% reduces the model capacity too far, increasing error. Minimum error occurs with sparsity between 30% and 50%.
We introduce only one hyper-parameter to reduce the search space. This hyper-parameter can be either the sparsity, or the threshold t w.r.t. the max value in Equation 9. We find that using the threshold produces better results, because fixing the threshold allows the sparsity of each layer to vary (Figure 2).
[Figure 5 panels: train error and validation error vs. sparsity (percentage of zero weights) for ResNet-20, with the full-precision baseline marked.]
Figure 5: Accuracy vs. sparsity on ResNet-20
# 6.1.2 SPARSITY AND EFFICIENCY OF ALEXNET
We further analyze the parameters of our AlexNet model. We calculate the layer-wise density (the complement of sparsity), as shown in Table 4. Although we use different $W^p_l$ and $W^n_l$ for each layer, ternary weights can be pre-computed when fetched from memory, so multiplications during the convolution and inner-product operations are still saved. Compared to Deep Compression, we accelerate inference using ternary values and, more importantly, we reduce the energy consumption of inference by saving memory references and multiplications, while achieving higher accuracy.
We notice that even with all quantized layers sharing the same t in Equation 9, our model achieves considerable sparsity in the convolution layers, where the majority of the computation takes place. We are therefore able to squeeze the forward time to less than 30% of that of full-precision networks.
As for spatial compression, by substituting 32-bit weights with 2-bit ternary weights, our model is approximately 16× smaller than the original 32-bit AlexNet.
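The 16x figure follows directly from the bit widths: 16 ternary weights fit in one 32-bit word. A hypothetical packing sketch is below; the encoding 0 -> 00, +1 -> 01, -1 -> 10 is our assumption for illustration, not a prescribed storage format.

```python
import numpy as np

def pack_ternary(w_ternary):
    """Pack ternary weights at 2 bits each, 16 per 32-bit word (a sketch)."""
    # Map {-, 0, +} to the 2-bit codes {2, 0, 1} (assumed encoding).
    codes = np.where(w_ternary > 0, 1, np.where(w_ternary < 0, 2, 0))
    codes = codes.astype(np.uint32)
    # Zero-pad to a multiple of 16 codes, then shift each into place.
    pad = (-codes.size) % 16
    codes = np.concatenate([codes.ravel(), np.zeros(pad, np.uint32)])
    codes = codes.reshape(-1, 16)
    shifts = (2 * np.arange(16)).astype(np.uint32)
    return np.bitwise_or.reduce(codes << shifts, axis=1)
```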
6.2 KERNEL VISUALIZATION
We visualize the quantized convolution kernels in Figure 6. The left matrix shows kernels from the second convolution layer (5×5) and the right one from the third (3×3). We pick the first 10 input channels and the first 10 output channels to display for each layer. Grey, black and white represent zero, negative and positive weights, respectively.
We observe filter patterns similar to those of full-precision AlexNet: edge and corner detectors of various directions can be found among the listed kernels. While these patterns are important for convolutional neural networks, the precision of each weight is not. Ternary-valued filters are capable of extracting the key features after a full-precision first convolution layer, while avoiding unnecessary storage.
Furthermore, we find a number of empty filters (all zeros), or filters with a single non-zero value, in the convolution layers. More aggressive pruning can be applied to remove these redundant kernels and further compress and speed up our model.
| Layer | Full precision Density | Full precision Width | Pruning (NIPS'15) Density | Pruning (NIPS'15) Width | Ours Density | Ours Width |
| --- | --- | --- | --- | --- | --- | --- |
| conv1 | 100% | 32 bit | 84% | 8 bit | 100% | 32 bit |
| conv2 | 100% | 32 bit | 38% | 8 bit | 23% | 2 bit |
| conv3 | 100% | 32 bit | 35% | 8 bit | 24% | 2 bit |
| conv4 | 100% | 32 bit | 37% | 8 bit | 40% | 2 bit |
| conv5 | 100% | 32 bit | 37% | 8 bit | 43% | 2 bit |
| conv total | 100% | - | 37% | - | 33% | - |
| fc1 | 100% | 32 bit | 9% | 5 bit | 30% | 2 bit |
| fc2 | 100% | 32 bit | 9% | 5 bit | 36% | 2 bit |
| fc3 | 100% | 32 bit | 25% | 5 bit | 100% | 32 bit |
| fc total | 100% | - | 10% | - | 37% | - |
| All total | 100% | - | 11% | - | 37% | - |

Table 4: AlexNet layer-wise density (the complement of sparsity)
Figure 6: Visualization of kernels from the ternary AlexNet trained on ImageNet.
# 7 CONCLUSION
We introduce a novel neural network quantization method that compresses network weights to ternary values. We introduce two trained scaling coefficients $W^p_l$ and $W^n_l$ for each layer $l$ and train these coefficients using back-propagation. During training, the gradients are back-propagated both to the latent full-resolution weights and to the scaling coefficients. We use layer-wise thresholds that are proportional to the maximum absolute values to quantize the weights. When deploying the ternary network, only the ternary weights and scaling coefficients are needed, reducing parameter size by at least 16×. Experiments show that our model reaches or even surpasses the accuracy of full-precision models on both the CIFAR-10 and ImageNet datasets. On ImageNet we exceed the accuracy of prior ternary networks (TWN) by 3%.
# REFERENCES
Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015.
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural net- works: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadar- rama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
Fengfu Li and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
# Speed/accuracy trade-offs for modern convolutional object detectors
Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy
Google Research
# Abstract
The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed/memory/accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [31], R-FCN [6] and SSD [26] systems, which we view as "meta-architectures", and trace out the speed/accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.
# 1. Introduction
A lot of progress has been made in recent years on object detection due to the use of convolutional neural networks (CNNs). Modern object detectors based on these networks, such as Faster R-CNN [31], R-FCN [6], Multibox [40], SSD [26] and YOLO [29], are now good enough to be deployed in consumer products (e.g., Google Photos, Pinterest Visual Search) and some have been shown to be fast enough to be run on mobile devices.
However, it can be difficult for practitioners to decide what architecture is best suited to their application. Standard accuracy metrics, such as mean average precision (mAP), do not tell the entire story, since for real deployments of computer vision systems, running time and memory usage are also critical. For example, mobile devices often require a small memory footprint, and self driving
cars require real time performance. Server-side production systems, like those used in Google, Facebook or Snapchat, have more leeway to optimize for accuracy, but are still subject to throughput constraints. While the methods that win competitions, such as the COCO challenge [25], are optimized for accuracy, they often rely on model ensembling and multicrop methods which are too slow for practical usage.
Unfortunately, only a small subset of papers (e.g., R-FCN [6], SSD [26], YOLO [29]) discuss running time in any detail. Furthermore, these papers typically only state that they achieve some frame-rate, but do not give a full picture of the speed/accuracy trade-off, which depends on many other factors, such as which feature extractor is used, input image sizes, etc.
In this paper, we seek to explore the speed/accuracy trade-off of modern detection systems in an exhaustive and fair way. While this has been studied for full image classification (e.g., [3]), detection models tend to be significantly more complex. We primarily investigate single-model/single-pass detectors, by which we mean models that do not use ensembling, multi-crop methods, or other "tricks" such as horizontal flipping. In other words, we only pass a single image through a single network. For simplicity (and because it is more important for users of this technology), we focus only on test-time performance and not on how long these models take to train.
Though it is impractical to compare every recently proposed detection system, we are fortunate that many of the leading state of the art approaches have converged on a common methodology (at least at a high level). This has allowed us to implement and compare a large number of detection systems in a unified manner. In particular, we have created implementations of the Faster R-CNN, R-FCN and SSD meta-architectures, which at a high level consist of a single convolutional network, trained with a mixed regression and classification objective, and use sliding window style predictions.
To summarize, our main contributions are as follows:
• We provide a concise survey of modern convolutional detection systems, and describe how the leading ones follow very similar designs.

• We describe our flexible and unified implementation of three meta-architectures (Faster R-CNN, R-FCN and SSD) in Tensorflow, which we use to do extensive experiments that trace the accuracy/speed trade-off curve for different detection systems, varying meta-architecture, feature extractor, image resolution, etc.

• Our findings show that using fewer proposals for Faster R-CNN can speed it up significantly without a big loss in accuracy, making it competitive with its faster cousins, SSD and R-FCN. We show that SSD's performance is less sensitive to the quality of the feature extractor than Faster R-CNN and R-FCN. And we identify sweet spots on the accuracy/speed trade-off curve where gains in accuracy are only possible by sacrificing speed (within the family of detectors presented here).
• Several of the meta-architecture and feature-extractor combinations that we report have never appeared before in literature. We discuss how we used some of these novel combinations to train the winning entry of the 2016 COCO object detection challenge.
# 2. Meta-architectures
Neural nets have become the leading method for high quality object detection in recent years. In this section we survey some of the highlights of this literature. The R-CNN paper by Girshick et al. [11] was among the first modern incarnations of convolutional network based detection. Inspired by recent successes on image classification [20], the R-CNN method took the straightforward approach of cropping externally computed box proposals out of an input image and running a neural net classifier on these crops. This approach can be expensive however because many crops are necessary, leading to significant duplicated computation from overlapping crops. Fast R-CNN [10] alleviated this problem by pushing the entire image once through a feature extractor then cropping from an intermediate layer so that crops share the computation load of feature extraction.
While both R-CNN and Fast R-CNN relied on an external proposal generator, recent works have shown that it is possible to generate box proposals using neural networks as well [41, 40, 8, 31]. In these works, it is typical to have a collection of boxes overlaid on the image at different spatial locations, scales and aspect ratios that act as "anchors" (sometimes called "priors" or "default boxes"). A model is then trained to make two predictions for each anchor: (1) a discrete class prediction for each anchor, and (2) a continuous prediction of an offset by which the anchor needs to be shifted to fit the groundtruth bounding box.
Papers that follow this anchors methodology then minimize a combined classification and regression loss that we now describe. For each anchor a, we first find the best matching groundtruth box b (if one exists). If such a match can be found, we call a a "positive anchor", and assign it (1) a class label $y_a \in \{1 \dots K\}$ and (2) a vector encoding of box b with respect to anchor a (called the box encoding $\phi(b_a; a)$). If no match is found, we call a a "negative anchor" and we set the class label to be $y_a = 0$. If for the anchor a we predict box encoding $f_{loc}(\mathcal{I}; a, \theta)$ and corresponding class $f_{cls}(\mathcal{I}; a, \theta)$, where $\mathcal{I}$ is the image and $\theta$ the model parameters, then the loss for a is measured as a weighted sum of a location-based loss and a classification loss:
$\mathcal{L}(a, \mathcal{I}; \theta) = \alpha \cdot \mathbb{1}[a \text{ is positive}] \cdot \ell_{loc}\big(\phi(b_a; a) - f_{loc}(\mathcal{I}; a, \theta)\big) + \beta \cdot \ell_{cls}\big(y_a, f_{cls}(\mathcal{I}; a, \theta)\big) \quad (1)$
where $\alpha, \beta$ are weights balancing localization and classification losses. To train the model, Equation 1 is averaged over anchors and minimized with respect to parameters $\theta$.
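A minimal sketch of this per-anchor loss (our illustration, using Smooth L1 for the localization term and cross-entropy for classification; the scalar weights alpha and beta are placeholders):

```python
import numpy as np

def smooth_l1(x):
    """Elementwise Smooth L1 (Huber) loss with unit transition point."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def anchor_loss(is_positive, box_target, box_pred, class_log_probs, label,
                alpha=1.0, beta=1.0):
    """Per-anchor loss of Equation 1 (a sketch).

    box_target: the encoding phi(b_a; a) of the matched groundtruth box
      (ignored when the anchor is negative).
    class_log_probs: log-probabilities over K+1 classes (0 = background).
    """
    loc = alpha * float(is_positive) * smooth_l1(box_pred - box_target).sum()
    cls = -beta * class_log_probs[label]
    return loc + cls
```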
The choice of anchors has significant implications both for accuracy and computation. In the (first) Multibox paper [8], these anchors (called "box priors" by the authors) were generated by clustering groundtruth boxes in the dataset. In more recent works, anchors are generated by tiling a collection of boxes at different scales and aspect ratios regularly across the image. The advantage of having a regular grid of anchors is that predictions for these boxes can be written as tiled predictors on the image with shared parameters (i.e., convolutions) and are reminiscent of traditional sliding window methods, e.g. [44]. The Faster R-CNN [31] paper and the (second) Multibox paper [40] (which called these tiled anchors "convolutional priors") were the first papers to take this new approach.
# 2.1. Meta-architectures
In our paper we focus primarily on three recent (meta-)architectures: SSD (Single Shot Multibox Detector [26]), Faster R-CNN [31] and R-FCN (Region-based Fully Convolutional Networks [6]). While these papers were originally presented with a particular feature extractor (e.g., VGG, Resnet, etc.), we now review these three methods, decoupling the choice of meta-architecture from feature extractor so that conceptually, any feature extractor can be used with SSD, Faster R-CNN or R-FCN.
# 2.1.1 Single Shot Detector (SSD).
Though the SSD paper was published only recently (Liu et al., [26]), we use the term SSD to refer broadly to architectures that use a single feed-forward convolutional network to directly predict classes and anchor offsets without requiring a second stage per-proposal classification operation (Figure 1a). Under this definition, the SSD meta-architecture has been explored in a number of precursors to [26]. Both Multibox and the Region Proposal Network
| Paper | Meta-architecture | Feature Extractor | Matching | Box Encoding $\phi(b_a, a)$ | Location Loss |
| --- | --- | --- | --- | --- | --- |
| Szegedy et al. [40] | SSD | InceptionV3 | Bipartite | $[x_0, y_0, x_1, y_1]$ | L2 |
| Redmon et al. [29] | SSD | Custom (GoogLeNet inspired) | Box Center | $[x_c, y_c, \sqrt{w}, \sqrt{h}]$ | L2 |
| Ren et al. [31] | Faster R-CNN | VGG | Argmax | $[\frac{x_c}{w_a}, \frac{y_c}{h_a}, \log w, \log h]$ | SmoothL1 |
| He et al. [13] | Faster R-CNN | ResNet-101 | Argmax | $[\frac{x_c}{w_a}, \frac{y_c}{h_a}, \log w, \log h]$ | SmoothL1 |
| Liu et al. [26] (v1) | SSD | InceptionV3 | Argmax | $[x_0, y_0, x_1, y_1]$ | L2 |
| Liu et al. [26] (v2, v3) | SSD | VGG | Argmax | $[\frac{x_c}{w_a}, \frac{y_c}{h_a}, \log w, \log h]$ | SmoothL1 |
| Dai et al. [6] | R-FCN | ResNet-101 | Argmax | $[\frac{x_c}{w_a}, \frac{y_c}{h_a}, \log w, \log h]$ | SmoothL1 |

Table 1: Convolutional detection models that use one of the meta-architectures described in Section 2. Boxes are encoded with respect to a matching anchor a via a function $\phi$ (Equation 1), where $[x_0, y_0, x_1, y_1]$ are min/max coordinates of a box, $x_c, y_c$ are its center coordinates, and $w, h$ its width and height. In some cases, $w_a, h_a$, the width and height of the matching anchor, are also used. Notes: (1) We include an early arXiv version of [26], which used a different configuration from that published at ECCV 2016; (2) [29] uses a fast feature extractor described as being inspired by GoogLeNet [39], which we do not compare to; (3) YOLO matches a groundtruth box to an anchor if its center falls inside the anchor (we refer to this as BoxCenter).
(a) SSD. (b) Faster RCNN. (c) R-FCN.
Figure 1: High level diagrams of the detection meta-architectures compared in this paper.
(RPN) stage of Faster R-CNN [40, 31] use this approach to predict class-agnostic box proposals. [33, 29, 30, 9] use SSD-like architectures to predict final (1 of K) class labels. And Poirson et al. [28] extended this idea to predict boxes, classes and pose.
# 2.1.2 Faster R-CNN.
In the Faster R-CNN setting, detection happens in two stages (Figure 1b). In the first stage, called the region proposal network (RPN), images are processed by a feature extractor (e.g., VGG-16), and features at some selected intermediate level (e.g., "conv5") are used to predict class-agnostic box proposals. The loss function for this first stage takes the form of Equation 1 using a grid of anchors tiled in space, scale and aspect ratio.
In the second stage, these (typically 300) box proposals are used to crop features from the same intermediate feature map, which are subsequently fed to the remainder of the feature extractor (e.g., "fc6" followed by "fc7") in order to predict a class and class-specific box refinement for each proposal. The loss function for this second stage box classifier also takes the form of Equation 1 using the proposals generated from the RPN as anchors. Notably, one does not crop proposals directly from the image and re-run crops through the feature extractor, which would be duplicated computation. However there is part of the computation that must be run once per region, and thus the running time depends on the number of regions proposed by the RPN.
# 2.2. R-FCN
While Faster R-CNN is an order of magnitude faster than Fast R-CNN, the fact that the region-specific component must be applied several hundred times per image led Dai et al. [6] to propose the R-FCN (Region-based Fully Convolutional Networks) method, which is like Faster R-CNN, but instead of cropping features from the same layer where region proposals are predicted, crops are taken from the last layer of features prior to prediction (Figure 1c). This approach of pushing cropping to the last layer minimizes the amount of per-region computation that must be done. Dai et al. argue that the object detection task needs localization representations that respect translation variance and thus propose a position-sensitive cropping mechanism that is used instead of the more standard ROI pooling operations used in [10, 31] and the differentiable crop mechanism of [5]. They show that the R-FCN model (using Resnet 101) can achieve comparable accuracy to Faster R-CNN, often at faster running times. Recently, the R-FCN model was also adapted to do instance segmentation in the recent TA-FCN model [22], which won the 2016 COCO instance segmentation challenge.
Since appearing in 2015, Faster R-CNN has been particularly influential, and has led to a number of follow-up works [2, 35, 34, 46, 13, 5, 19, 45, 24, 47] (including SSD and R-FCN). Notably, half of the submissions to the COCO object detection server as of November 2016 are reported to be based on the Faster R-CNN system in some way.
# 3. Experimental setup
The introduction of standard benchmarks such as Imagenet [32] and COCO [25] has made it easier in recent years to compare detection methods with respect to accuracy. However, when it comes to speed and memory, apples-to-apples comparisons have been harder to come by. Prior works have relied on different deep learning frameworks (e.g., DistBelief [7], Caffe [18], Torch [4]) and different hardware. Some papers have optimized for accuracy; others for speed. And finally, in some cases, metrics are reported using slightly different training sets (e.g., COCO training set vs. combined training+validation sets).
In order to better perform apples-to-apples comparisons, we have created a detection platform in Tensorflow [1] and have recreated training pipelines for the SSD, Faster R-CNN and R-FCN meta-architectures on this platform. Having a unified framework has allowed us to easily swap feature extractor architectures and loss functions, and having it in Tensorflow allows for easy portability to diverse platforms for deployment. In the following we discuss ways to configure model architecture, loss function and input on our platform: knobs that can be used to trade speed and accuracy.
# 3.1. Architectural configuration
# 3.1.1 Feature extractors.
In all of the meta-architectures, we first apply a convolutional feature extractor to the input image to obtain high-level features. The choice of feature extractor is crucial as the number of parameters and types of layers directly affect memory, speed, and performance of the detector. We have selected six representative feature extractors to compare in this paper and, with the exception of MobileNet [14], all have open source Tensorflow implementations and have had sizeable influence on the vision community.
In more detail, we consider the following six feature extractors. We use VGG-16 [37] and Resnet-101 [13], both of which have won many competitions such as ILSVRC and COCO 2015 (classification, detection and segmentation). We also use Inception v2 [16], which set the state of the art in the ILSVRC 2014 classification and detection challenges, as well as its successor Inception v3 [42]. Both of the Inception networks employed "Inception units" which made it possible to increase the depth and width of a network without increasing its computational budget. Recently, Szegedy et al. [38] proposed Inception Resnet (v2), which combines the optimization benefits conferred by residual connections with the computation efficiency of Inception units. Finally, we compare against the new MobileNet network [14], which has been shown to achieve VGG-16 level accuracy on Imagenet with only 1/30 of the computational cost and model size. MobileNet is designed for efficient inference in various mobile vision applications. Its building blocks are
depthwise separable convolutions, which factorize a standard convolution into a depthwise convolution and a 1×1 convolution, effectively reducing both computational cost and number of parameters.
For each feature extractor, there are choices to be made in order to use it within a meta-architecture. For both Faster R-CNN and R-FCN, one must choose which layer to use for predicting region proposals. In our experiments, we use the choices laid out in the original papers when possible. For example, we use the "conv5" layer from VGG-16 [31] and the last layer of the conv4_x layers in Resnet-101 [13]. For other feature extractors, we have made analogous choices. See supplementary materials for more details.
Liu et al. [26] showed that in the SSD setting, using multiple feature maps to make location and confidence predictions at multiple scales is critical for good performance. For VGG feature extractors, they used conv4_3, fc7 (converted to a convolution layer), as well as a sequence of added layers. In our experiments, we follow their methodology closely, always selecting the topmost convolutional feature map and a higher resolution feature map at a lower level, then adding a sequence of convolutional layers with spatial resolution decaying by a factor of 2 with each additional layer used for prediction. However, unlike [26], we use batch normalization in all additional layers.
For comparison, feature extractors used in previous works are shown in Table 1. In this work, we evaluate all combinations of meta-architectures and feature extractors, most of which are novel. Notably, Inception networks have never been used in Faster R-CNN frameworks and until recently were not open sourced [36]. Inception Resnet (v2) and MobileNet have not appeared in the detection literature to date.
# 3.1.2 Number of proposals.
For Faster R-CNN and R-FCN, we can also choose the number of region proposals to be sent to the box classifier at test time. Typically, this number is 300 in both settings, but an easy way to save computation is to send fewer boxes, potentially at the risk of reducing recall. In our experiments, we vary this number of proposals between 10 and 300 in order to explore this trade-off.
# 3.1.3 Output stride settings for Resnet and Inception Resnet.
Our implementation of Resnet-101 is slightly modified from the original to have an effective output stride of 16 instead of 32; we achieve this by modifying the conv5_1 layer to have stride 1 instead of 2 (and compensating for the reduced stride by using atrous convolutions in further layers) as in [6]. For Faster R-CNN and R-FCN, in addition to the
default stride of 16, we also experiment with a (more expensive) stride 8 Resnet-101 in which the conv4_1 block is additionally modified to have stride 1. Likewise, we experiment with stride 16 and stride 8 versions of the Inception Resnet network. We find that using stride 8 instead of 16 improves mAP by 5% relative1, but increases running time by a factor of 63%.
# 3.2. Loss function configuration
Beyond selecting a feature extractor, there are choices in configuring the loss function (Equation 1) which can impact training stability and final performance. Here we describe the choices that we have made in our experiments, and Table 1 again compares how similar loss functions are configured in other works.
# 3.2.1 Matching.
Determining classification and regression targets for each anchor requires matching anchors to groundtruth instances. Common approaches include greedy bipartite matching (e.g., based on Jaccard overlap) or many-to-one matching strategies in which bipartite-ness is not required, but matchings are discarded if the Jaccard overlap between an anchor and groundtruth is too low. We refer to these strategies as Bipartite or Argmax, respectively. In our experiments we use Argmax matching throughout, with thresholds set as suggested in the original paper for each meta-architecture. After matching, there is typically a sampling procedure designed to bring the number of positive anchors and negative anchors to some desired ratio. In our experiments, we also fix these ratios to be those recommended by the paper for each meta-architecture.
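As a sketch of the Argmax strategy (our illustration; the threshold here is a placeholder, not the per-meta-architecture values actually used):

```python
import numpy as np

def argmax_match(iou, positive_threshold=0.5):
    """Many-to-one (Argmax) matching over a [num_anchors, num_gt] IoU matrix.

    Each anchor is matched to its highest-overlap groundtruth box; the match
    is discarded (anchor marked negative, -1) when the overlap is too low.
    """
    best_gt = iou.argmax(axis=1)
    best_iou = iou.max(axis=1)
    return np.where(best_iou >= positive_threshold, best_gt, -1)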
# 3.2.2 Box encoding.
To encode a groundtruth box with respect to its matching anchor, we use the box encoding function $\phi(b_a; a) = [10 \cdot \frac{x_c}{w_a}, 10 \cdot \frac{y_c}{h_a}, 5 \cdot \log w, 5 \cdot \log h]$ (also used by [11, 10, 31, 26]). Note that the scalar multipliers 10 and 5 are typically used in all of these prior works, even if not explicitly mentioned.
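Written out with the usual Faster R-CNN convention (center offsets relative to the anchor, sizes as log-ratios to the anchor size), this encoding looks as follows. This is our sketch under that assumed convention; boxes are given as (x_c, y_c, w, h).

```python
import numpy as np

def encode_box(box, anchor):
    """phi(b_a; a) = [10*xc/wa, 10*yc/ha, 5*log(w), 5*log(h)], where the
    center terms are offsets from the anchor center and the size terms are
    ratios to the anchor size (assumed Faster R-CNN convention)."""
    xc, yc, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([
        10.0 * (xc - xa) / wa,
        10.0 * (yc - ya) / ha,
        5.0 * np.log(w / wa),
        5.0 * np.log(h / ha),
    ])
```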
# 3.2.3 Location loss ($\ell_{loc}$).
Following [10, 31, 26], we use the Smooth L1 (or Huber [15]) loss function in all experiments.
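For reference, the Smooth L1 (Huber) loss with unit transition point, which we take here to be the standard form, is

$\ell_{loc}(x) = \begin{cases} 0.5\,x^2 & \text{if } |x| < 1, \\ |x| - 0.5 & \text{otherwise.} \end{cases}$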
# 3.3. Input size configuration.
In Faster R-CNN and R-FCN, models are trained on images scaled to M pixels on the shorter edge, whereas in SSD, images are always resized to a fixed shape M × M. We explore evaluating each model on downscaled images as
1 i.e., (map8 - map16) / map16 = 0.05.
a way to trade accuracy for speed. In particular, we have trained high and low-resolution versions of each model. In the "high-resolution" setting, we set M = 600, and in the "low-resolution" setting, we set M = 300. In both cases, this means that the SSD method processes fewer pixels on average than a Faster R-CNN or R-FCN model with all other variables held constant.
# 3.4. Training and hyperparameter tuning
We jointly train all models end-to-end using asynchronous gradient updates on a distributed cluster [7]. For Faster R-CNN and R-FCN, we use SGD with momentum with batch sizes of 1 (due to these models being trained using different image sizes), and for SSD, we use RMSProp [43] with batch sizes of 32 (in a few exceptions we reduced the batch size for memory reasons). Finally, we manually tune learning rate schedules individually for each feature extractor. For the model configurations that match works in the literature ([31, 6, 13, 26]), we have reproduced or surpassed the reported mAP results.2
Note that for Faster R-CNN and R-FCN, this end-to-end approach is slightly different from the 4-stage training procedure that is typically used. Additionally, instead of using the ROI Pooling layer and Position-sensitive ROI Pooling layers used by [31, 6], we use Tensorflow's "crop and resize" operation, which uses bilinear interpolation to resample part of an image onto a fixed sized grid. This is similar to the differentiable cropping mechanism of [5], the attention model of [12] as well as the Spatial Transformer Network [17]. However, we disable backpropagation with respect to bounding box coordinates as we have found this to be unstable during training.
Our networks are trained on the COCO dataset, using all training images as well as a subset of validation images, holding out 8000 examples for validation.3 Finally, at test time, we post-process detections with non-max suppression using an IOU threshold of 0.6 and clip all boxes to the image window. To evaluate our final detections, we use the official COCO API [23], which measures mAP averaged over IOU thresholds in [0.5 : 0.05 : 0.95], amongst other metrics.
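Greedy non-max suppression at this threshold can be sketched as follows (our illustration; boxes given as [x0, y0, x1, y1]):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.6):
    """Greedy non-max suppression: repeatedly keep the highest-scoring box
    and suppress all remaining boxes overlapping it above iou_thresh."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with the remaining boxes.
        xx0 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy0 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx1 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy1 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx1 - xx0) * np.maximum(0.0, yy1 - yy0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]
    return np.array(keep, dtype=np.int64)
```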
# 3.5. Benchmarking procedure
To time our models, we use a machine with 32GB RAM, Intel Xeon E5-1650 v2 processor and an Nvidia GeForce GTX Titan X GPU card. Timings are reported on GPU for a batch size of one. The images used for timing are resized so that the smallest size is at least k and then cropped to
2In the case of SSD with VGG, we have reproduced the number reported in the ECCV version of the paper, but the most recent version on ArXiv uses an improved data augmentation scheme to obtain somewhat higher numbers, which we have not yet experimented with.
3We remark that this dataset is similar but slightly smaller than the trainval35k set that has been used in several papers, e.g., [2, 26].
k à k where k is either 300 or 600 based on the model. We average the timings over 500 images.
We include postprocessing in our timing (which includes non-max suppression and currently runs only on the CPU). Postprocessing can take up the bulk of the running time for the fastest models, at ~40 ms, and currently caps our maximum framerate to 25 frames per second. Among other things, this means that while our timing results are comparable amongst each other, they may not be directly comparable to other reported speeds in the literature. Other potential differences include hardware, software drivers, framework (Tensorflow in our case), and batch size (e.g., Liu et al. [26] report timings using batch sizes of 8). Finally, we use tfprof [27] to measure the total memory demand of the models during inference; this gives a more platform independent measure of memory demand. We also average the memory measurements over three images.
# 3.6. Model Details
Table 2 summarizes the feature extractors that we use. All models are pretrained on ImageNet-CLS. We give details on how we train the object detectors using these feature extractors below.
# 3.6.1 Faster R-CNN
We follow the implementation of Faster R-CNN [31] closely, but use Tensorflow's "crop and resize" operation instead of standard ROI pooling. Except for VGG, all the feature extractors use batch normalization after convolutional layers. We freeze the batch normalization parameters to be those estimated during ImageNet pretraining. We train Faster R-CNN with asynchronous SGD with momentum of 0.9. The initial learning rates depend on which feature extractor we used, as explained below. We reduce the learning rate by 10x after 900K iterations and another 10x after 1.2M iterations. 9 GPU workers are used during asynchronous training. Each GPU worker takes a single image per iteration; the minibatch size for RPN training is 256, while the minibatch size for box classifier training is 64.
• VGG [37]: We extract features from the "conv5" layer, whose stride size is 16 pixels. Similar to [5], we crop and resize feature maps to 14x14 then maxpool to 7x7. The initial learning rate is 5e-4.
• Resnet 101 [13]: We extract features from the last layer of the "conv4" block. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Feature maps are cropped and resized to 14x14 then maxpooled to 7x7. The initial learning rate is 3e-4.
• Inception V2 [16]: We extract features from the "Mixed_4e" layer, whose stride size is 16 pixels.
| Model | Num. Parameters |
| --- | --- |
| VGG-16 | 14,714,688 |
| MobileNet | 3,191,072 |
| Inception V2 | 10,173,112 |
| ResNet-101 | 42,605,504 |
| Inception V3 | 21,802,784 |
| Inception Resnet V2 | 54,336,736 |

Table 2: Properties of the 6 feature extractors that we use. Top-1 accuracy is the classification accuracy on ImageNet.
Feature maps are cropped and resized to 14x14. The initial learning rate is 2e-4.
• Inception V3 [42]: We extract features from the "Mixed_6e" layer, whose stride size is 16 pixels. Feature maps are cropped and resized to 17x17. The initial learning rate is 3e-4.
• Inception Resnet [38]: We extract features from the "Mixed_6a" layer including its associated residual layers. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Feature maps are cropped and resized to 17x17. The initial learning rate is 1e-3.

• MobileNet [14]: We extract features from the "Conv2d_11" layer, whose stride size is 16 pixels. Feature maps are cropped and resized to 14x14. The initial learning rate is 3e-3.
# 3.6.2 R-FCN
We follow the implementation of R-FCN [6] closely, but use Tensorflow's "crop and resize" operation instead of ROI pooling to crop regions from the position-sensitive score maps. All feature extractors use batch normalization after convolutional layers. We freeze the batch normalization parameters to be those estimated during ImageNet pretraining. We train R-FCN with asynchronous SGD with momentum of 0.9. 9 GPU workers are used during asynchronous training. Each GPU worker takes a single image per iteration; the minibatch size for RPN training is 256. As of the time of this submission, we do not have R-FCN results for the VGG or Inception V3 feature extractors.
• Resnet 101 [13]: We extract features from the "block3" layer. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 7x7 and resized to 21x21. We use online hard example mining to sample a minibatch of size 128 for training the box classifier. The initial learning rate is 3e-4. It is reduced by 10x after 1M steps and another 10x after 1.2M steps.
• Inception V2 [16]: We extract features from the "Mixed_4e" layer, whose stride size is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 3x3 and resized to 12x12. We use online hard example mining to sample a minibatch of size 128 for training the box classifier. The initial learning rate is 2e-4. It is reduced by 10x after 1.8M steps and another 10x after 2M steps.
• Inception Resnet [38]: We extract features from the "Mixed_6a" layer including its associated residual layers. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 7x7 and resized to 21x21. We use all proposals from the RPN for box classifier training. The initial learning rate is 7e-4. It is reduced by 10x after 1M steps and another 10x after 1.2M steps.
• MobileNet [14]: We extract features from the "Conv2d_11" layer, whose stride size is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 3x3 and resized to 12x12. We use online hard example mining to sample a minibatch of size 128 for training the box classifier. The initial learning rate is 2e-3. The learning rate is reduced by 10x after 1.6M steps and another 10x after 1.8M steps.
# 3.6.3 SSD
As described in the main paper, we follow the methodology of [26] closely, generating anchors in the same way and selecting the topmost convolutional feature map and a higher resolution feature map at a lower level, then adding a sequence of convolutional layers with spatial resolution decaying by a factor of 2 with each additional layer used for prediction. The feature map selection for Resnet101 is slightly different, as described below.
Unlike [26], we use batch normalization in all additional layers, and initialize weights with a truncated normal distribution with a standard deviation of σ = 0.03. With the exception of VGG, we also do not perform "layer normalization" (as suggested in [26]), as we found it not to be necessary for the other feature extractors. Finally, we employ distributed training with asynchronous SGD using 11 worker machines. Below we discuss the specifics for each feature extractor that we have considered. As of the time of this submission, we do not have SSD results for the Inception V3 feature extractor, and we only have results for high resolution SSD models using the Resnet 101 and Inception V2 feature extractors.
• VGG [37]: Following the paper, we use the conv4_3 and fc7 layers, appending five additional convolutional layers with decaying spatial resolution with depths 512, 256, 256, 256, 256, respectively. We apply L2 normalization to the conv4_3 layer, scaling the feature norm at each location in the feature map to a learnable scale, s, which is initialized to 20.0.
During training, we use a base learning rate of $lr_{base} = 0.0003$, but use a warm-up learning rate scheme in which we first train with a learning rate of $0.8^2 \cdot lr_{base}$ for 10K iterations, followed by $0.8 \cdot lr_{base}$ for another 10K iterations.
• Resnet 101 [13]: We use the feature map from the last layer of the "conv4" block. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Five additional convolutional layers with decaying spatial resolution are appended, with depths 512, 512, 256, 256, 128, respectively. We have experimented with including the feature map from the last layer of the "conv5" block. With "conv5" features, the mAP numbers are very similar, but the computational costs are higher. Therefore we choose to use the last layer of the "conv4" block. During training, a base learning rate of 3e-4 is used. We use a learning rate warm-up strategy similar to the VGG one.
⢠Inception V2 [16]: We use Mixed 4c and Mixed 5c, appending four additional convolutional layers with decaying resolution with depths 512, 256, 256, 128 re- spectively. We use ReLU6 as the non-linear activation function for each conv layer. During training, we use a base learning rate of 0.002, followed by learning rate decay of 0.95 every 800k steps.
• Inception Resnet [38]: We use Mixed_6a and Conv2d_7b, appending three additional convolutional layers with decaying resolution with depths 512, 256, 128, respectively. We use ReLU as the non-linear activation function for each conv layer. During training, we use a base learning rate of 0.0005, followed by learning rate decay of 0.95 every 800k steps.
⢠MobileNet [14]: We use conv_11 and conv_13, appending four additional convolutional layers with decaying resolution with depths 512, 256, 256, 128, respectively. The non-linear activation function we use is ReLU6 and both batch norm parameters β and γ are trained. During training, we use a base learning rate of 0.004, followed by learning rate decay of 0.95 every 800k steps.
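The construction shared by these extractors, an L2-normalized feature map with a learnable scale (used for VGG) plus a stack of stride-2 convolutions that halve the spatial resolution, can be sketched as follows. This is a minimal illustrative sketch, not the code used in our experiments; the depths, the σ = 0.03 truncated-normal initializer, batch norm and ReLU6 follow the descriptions above, while all other details (names, layer placement) are assumptions of the sketch.

```python
# Minimal sketch (not the experimental code) of SSD-style feature augmentation.
import tensorflow as tf

class L2Normalization(tf.keras.layers.Layer):
    """L2-normalize a feature map channel-wise, with a learnable per-channel
    scale initialized to 20.0 (as described for the VGG conv4_3 layer)."""
    def __init__(self, init_scale=20.0, **kwargs):
        super().__init__(**kwargs)
        self.init_scale = init_scale

    def build(self, input_shape):
        channels = int(input_shape[-1])
        self.scale = self.add_weight(
            name="scale", shape=(channels,),
            initializer=tf.keras.initializers.Constant(self.init_scale))

    def call(self, x):
        return tf.math.l2_normalize(x, axis=-1) * self.scale

def extra_feature_layers(base_map, depths=(512, 256, 256, 128)):
    """Append conv layers whose spatial resolution decays by 2x at each step.
    Weights use a truncated normal with sigma = 0.03; batch norm and ReLU6
    follow each conv, mirroring the Inception V2 / MobileNet descriptions."""
    init = tf.keras.initializers.TruncatedNormal(stddev=0.03)
    maps = []
    x = base_map
    for depth in depths:
        x = tf.keras.layers.Conv2D(depth, 3, strides=2, padding="same",
                                   kernel_initializer=init)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU(max_value=6.0)(x)  # ReLU6
        maps.append(x)
    return maps
```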
# 4. Results
In this section we analyze the data that we have collected by training and benchmarking detectors, sweeping over model configurations as described in Section 3. Each such model configuration includes a choice of meta-architecture, feature extractor, stride (for Resnet and Inception Resnet) as
Figure 2: Accuracy vs time, with marker shapes indicating meta-architecture and colors indicating feature extractor. Each (meta-architecture, feature extractor) pair can correspond to multiple points on this plot due to changing input sizes, stride, etc.
Model                                         minival mAP  test-dev mAP
SSD w/MobileNet (Low Res)                     19.3         18.8
SSD w/Inception V2 (Low Res)                  22           21.6
Faster R-CNN w/Resnet 101, 100 proposals      32           31.9
R-FCN w/Resnet 101, 300 proposals             30.4         30.3
Faster R-CNN w/Inception Resnet, 300 props    35.7         35.6

Table 3: Test-dev performance of the 'critical' points along our optimality frontier.
well as input resolution and number of proposals (for Faster R-CNN and R-FCN).
For each such model configuration, we measure timings on GPU, memory demand, number of parameters and floating point operations as described below. We make the entire table of results available in the supplementary material, noting that as of the time of this submission, we have included 147 model configurations; models for a small subset of experimental configurations (namely some of the high resolution SSD models) have yet to converge, so we have for now omitted them from analysis.

# 4.1. Analyses

# 4.1.1 Accuracy vs time

Figure 2 is a scatterplot visualizing the mAP of each of our model configurations, with colors representing feature extractors, and marker shapes representing meta-architecture. Running time per image ranges from tens of milliseconds to almost 1 second. Generally we observe that R-FCN and SSD models are faster on average while Faster R-CNN tends to lead to slower but more accurate models, requiring at least 100 ms per image. However, as we discuss below, Faster R-CNN models can be just as fast if we limit the number of regions proposed. We have also overlaid an imaginary "optimality frontier" representing points at which better accuracy can only be attained within this family of detectors by sacrificing speed. In the following, we highlight some of the key points along the optimality frontier as the best detectors to use and discuss the effect of the various model configuration options in isolation.

# 4.1.2 Critical points on the optimality frontier.

(Fastest: SSD w/MobileNet): On the fastest end of this optimality frontier, we see that SSD models with Inception v2 and Mobilenet feature extractors are most accurate of the fastest models. Note that if we ignore postprocessing
Figure 3: Accuracy of detector (mAP on COCO) vs accuracy of feature extractor (as measured by top-1 accuracy on ImageNet-CLS). To avoid crowding the plot, we show only the low resolution models.
Figure 4: Accuracy stratified by object size, meta-architecture and feature extractor. We fix the image resolution to 300.
costs, Mobilenet seems to be roughly twice as fast as Inception v2 while being slightly worse in accuracy. (Sweet Spot: R-FCN w/Resnet or Faster R-CNN w/Resnet and only 50 proposals): There is an "elbow" in the middle of the optimality frontier occupied by R-FCN models using Residual Network feature extractors, which seem to strike the best balance between speed and accuracy among our model configurations. As we discuss below, Faster R-CNN w/Resnet models can attain similar speeds if we limit the number of proposals to 50. (Most Accurate: Faster R-CNN w/Inception Resnet at stride 8): Finally, Faster R-CNN with dense output Inception Resnet models attain the best possible accuracy on our optimality frontier, achieving, to our knowledge, the state-of-the-art single model performance. However these models are slow, requiring nearly a second of processing time. The overall mAP numbers for these 5 models are shown in Table 3.
# 4.1.3 The effect of the feature extractor.
Intuitively, stronger performance on classification should be positively correlated with stronger performance on COCO detection. To verify this, we investigate the relationship between overall mAP of different models and the Top-1 Imagenet classification accuracy attained by the pretrained
Figure 5: Effect of image resolution.
feature extractor used to initialize each model. Figure 3 indicates that there is indeed an overall correlation between classification and detection performance. However this correlation appears to only be significant for Faster R-CNN and R-FCN while the performance of SSD appears to be less reliant on its feature extractor's classification accuracy.
# 4.1.4 The effect of object size.

Figure 4 shows performance for different models on different sizes of objects. Not surprisingly, all methods do much better on large objects. We also see that even though SSD models typically have (very) poor performance on small objects, they are competitive with Faster RCNN and R-FCN on large objects, even outperforming these meta-architectures for the faster and more lightweight feature extractors.

# 4.1.5 The effect of image size.

It has been observed by other authors that input resolution can significantly impact detection accuracy. From our experiments, we observe that decreasing resolution by a factor of two in both dimensions consistently lowers accuracy (by 15.88% on average) but also reduces inference time by a relative factor of 27.4% on average.

One reason for this effect is that high resolution inputs allow for small objects to be resolved. Figure 5 compares detector performance on large objects against that on small objects, confirming that high resolution models lead to significantly better mAP results on small objects (by a factor of 2 in many cases) and somewhat better mAP results on large objects as well. We also see that strong performance on small objects implies strong performance on large objects in our models (but not vice-versa, as SSD models do well on large objects but not small).
# 4.1.6 The effect of the number of proposals.
For Faster R-CNN and R-FCN, we can adjust the number of proposals computed by the region proposal network. The authors in both papers use 300 boxes, however, our experiments suggest that this number can be significantly reduced without harming mAP (by much). In some feature extractors where the "box classifier" portion of Faster R-CNN is expensive, this can lead to significant computational savings. Figure 6a visualizes this trade-off curve for Faster R-CNN models with high resolution inputs for different feature extractors. We see that Inception Resnet, which has 35.4% mAP with 300 proposals, can still have surprisingly high accuracy (29% mAP) with only 10 proposals. The sweet spot is probably at 50 proposals, where we are able to obtain 96% of the accuracy of using 300 proposals while reducing running time by a factor of 3. While the computational savings are most pronounced for Inception Resnet, we see that similar tradeoffs hold for all feature extractors. Figure 6b visualizes the same trade-off curves for
(a) FRCNN (b) RFCN
Figure 6: Effect of proposing increasing number of regions on mAP accuracy (solid lines) and GPU inference time (dotted). Surprisingly, for Faster R-CNN with Inception Resnet, we obtain 96% of the accuracy of using 300 proposals by using only 50 proposals, which reduces running time by a factor of 3.
Figure 7: GPU time (milliseconds) for each model, for image resolution of 300.
R-FCN models and shows that the computational savings from using fewer proposals in the R-FCN setting are minimal; this is not surprising, as the box classifier (the expensive part) is only run once per image. We see in fact that at 100 proposals, the speed and accuracy for Faster R-CNN models with ResNet becomes roughly comparable to that of equivalent R-FCN models which use 300 proposals in both mAP and GPU speed.
# 4.1.7 FLOPs analysis.

Figure 7 plots the GPU time for each model combination. However, this is very platform dependent. Counting FLOPs (multiply-adds) gives us a platform independent measure of computation, which may or may not be linear with respect to actual running times due to a number of issues such as caching, I/O, hardware optimization, etc.
Figures 8a and 8b plot the FLOP count against observed wallclock times on the GPU and CPU respectively. Interestingly, we observe in the GPU plot (Figure 8a) that each
(a) GPU. (b) CPU.
Figure 8: FLOPS vs time.
Figure 9: Memory (Mb) usage for each model. Note that we measure total memory usage rather than peak memory usage. Moreover, we include all data points corresponding to the low-resolution models here. The error bars reflect variance in memory usage by using different numbers of proposals for the Faster R-CNN and R-FCN models (which leads to the seemingly considerable variance in the Faster-RCNN with Inception Resnet bar).
model has a different average ratio of FLOPs to observed running time in milliseconds. For denser block models such as Resnet 101, FLOPs/GPU time is typically greater than 1, perhaps due to efficiency in caching. For Inception and Mobilenet models, this ratio is typically less than 1; we conjecture that this could be because factorization reduces FLOPs but adds more overhead in memory I/O, or potentially that current GPU instructions (cuDNN) are more optimized for dense convolution.
# 4.1.8 Memory analysis.

For memory benchmarking, we measure total usage rather than peak usage. Figures 10a, 10b plot memory usage against GPU and CPU wallclock times. Overall, we observe high correlation with running time, with larger and more powerful feature extractors requiring much more memory. Figure 9 plots some of the same information in more detail, drilling down by meta-architecture and feature extractor selection. As with speed, Mobilenet is again the cheapest, requiring less than 1Gb (total) memory in almost all settings.

# 4.1.9 Good localization at .75 IOU means good localization at all IOU thresholds.

While slicing the data by object size leads to interesting insights, it is also worth noting that slicing data by IOU threshold does not give much additional information. Figure 11 shows in fact that both mAP@.5 and mAP@.75 performances are almost perfectly linearly correlated with mAP@[.5:.95]. Thus detectors that have poor performance at the higher IOU thresholds always also show poor performance at the lower IOU thresholds. This being said, we also observe that mAP@.75 is slightly more tightly
(a) GPU. (b) CPU.

Figure 10: Memory (Mb) vs time.
correlated with mAP@[.5:.95] (with R² > .99), so if we were to replace the standard COCO metric with mAP at a single IOU threshold, we would likely choose IOU=.75.
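The near-perfect linearity can be checked directly from the per-model results; below is a minimal NumPy sketch of the linear fit and R² computation (the mAP values are made-up placeholders, not numbers from our experiments).

```python
# Sketch: linear fit and R^2 between mAP@.75 and overall mAP@[.5:.95].
import numpy as np

map_coco = np.array([0.19, 0.22, 0.30, 0.32, 0.36])  # placeholder mAP@[.5:.95]
map_75   = np.array([0.18, 0.22, 0.31, 0.34, 0.40])  # placeholder mAP@.75

slope, intercept = np.polyfit(map_coco, map_75, deg=1)
pred = slope * map_coco + intercept
ss_res = np.sum((map_75 - pred) ** 2)
ss_tot = np.sum((map_75 - map_75.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
# Our models give R^2 > .99; the placeholders above are only illustrative.
print(f"R^2 = {r_squared:.4f}")
```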
# 4.2. State-of-the-art detection on COCO
Finally, we briefly describe how we ensembled some of our models to achieve the current state of the art performance on the 2016 COCO object detection challenge. Our model attains 41.3% mAP@[.5, .95] on the COCO test set and is an ensemble of five Faster R-CNN models based on Resnet and Inception Resnet feature extractors. This outperforms the previous best result (37.1% mAP@[.5, .95]) by MSRA, which used an ensemble of three Resnet-101 models [13]. Table 4 summarizes the performance of our model and highlights how our model has improved on the state-of-the-art across all COCO metrics. Most notably, our model achieves a relative improvement of nearly 60% on small object recall over the previous best result. Even though this ensemble with state-of-the-art numbers could be viewed as an extreme point on the speed/accuracy tradeoff curves (it requires ~50 end-to-end network evaluations per image), we have chosen to present this model in isolation since it is not comparable to the "single model" results that we focused on in the rest of the paper.
To construct our ensemble, we selected a set of five models from our collection of Faster R-CNN models. Each of the models was based on Resnet and Inception Resnet feature extractors with varying output stride configurations, retrained using variations on the loss functions, and different random orderings of the training data. Models were selected greedily using their performance on a held-out validation set. However, in order to take advantage of models with complementary strengths, we also explicitly encourage diversity by pruning away models that are too similar to previously selected models (c.f., [21]). To do this, we computed the vector of average precision results across each COCO category for each model and declared two models to be too similar if their category-wise AP vectors had cosine distance greater than some threshold.
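A minimal sketch of this greedy selection-with-pruning procedure is given below. The function names and the exact pruning rule are illustrative assumptions; in particular, the sketch prunes a candidate when the cosine similarity of its per-category AP vector to an already selected model is high, i.e. when the two models are close in AP space.

```python
# Sketch (illustrative assumptions): greedy ensemble selection with
# diversity pruning based on per-COCO-category AP vectors.
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_diverse_ensemble(val_maps, ap_vectors, k=5, sim_threshold=0.98):
    """val_maps: held-out validation mAP per candidate model.
    ap_vectors: per-category AP vector per candidate (e.g. length 80)."""
    order = np.argsort(val_maps)[::-1]  # consider the best model first
    selected = []
    for i in order:
        too_similar = any(
            cosine_similarity(ap_vectors[i], ap_vectors[j]) > sim_threshold
            for j in selected)
        if not too_similar:
            selected.append(int(i))
        if len(selected) == k:
            break
    return selected
```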
Table 5 summarizes the final selected model specifications as well as their individual performance on COCO as single models.4 Ensembling these five models using the procedure described in [13] (Appendix A) and using multicrop inference then yielded our final model. Note that we do not use multiscale training, horizontal flipping, box refinement, box voting, or global context, which are sometimes used in the literature. Table 6 compares a single model's performance against two ways of ensembling, and shows that (1) encouraging diversity did help over a hand-selected ensemble, and (2) ensembling and multicrop were responsible for almost 7 points of improvement over a single model.
# 4.3. Example detections
In Figures 12 to 17 we visualize detections on images from the COCO dataset, showing side-by-side comparisons of five of the detectors that lie on the "optimality frontier" of the speed-accuracy trade-off plot. To visualize, we select detections with score greater than a threshold and plot the top 20 detections in each image. We use a threshold of .5 for Faster R-CNN and R-FCN and .3 for SSD. These thresholds were hand-tuned for (subjective) visual attractiveness and not using rigorous criteria, so we caution viewers from reading too much into the tea leaves from these visualizations. This being said, we see that across our examples, all of the detectors perform reasonably well on large objects; SSD only shows its weakness on small objects, missing some of the smaller kites and people in the first image as well as the smaller cups and bottles on the dining table in
4Note that these numbers were computed on a held-out validation set and are not strictly comparable to the official COCO test-dev data results (though they are expected to be very close).
Figure 11: Overall COCO mAP (@[.5:.95]) for all experiments plotted against corresponding mAP@.50IOU and mAP@.75IOU. It is unsurprising that these numbers are correlated, but it is interesting that they are almost perfectly correlated, so for these models it is never the case that a model has strong performance at 50% IOU but weak performance at 75% IOU.
                 AP     AP@.50IOU  AP@.75IOU  APsmall  APmed  APlarge  AR@100  ARsmall  ARmed  ARlarge
Ours             0.413  0.62       0.45       0.231    0.436  0.547    0.604   0.424    0.641  0.748
MSRA2015         0.371  0.588      0.398      0.173    0.415  0.525    0.489   0.267    0.552  0.679
Trimps-Soushen   0.359  0.58       0.383      0.158    0.407  0.509    0.497   0.269    0.557  0.683
Table 4: Performance on the 2016 COCO test-challenge dataset. AP and AR refer to (mean) average precision and average recall respectively. Our model achieves a relative improvement of nearly 60% on small objects recall over the previous state-of-the-art COCO detector.
AP     Feature Extractor      Output stride  Loss ratio
32.93  Resnet 101             8              3:1
33.3   Resnet 101             8              1:1
34.75  Inception Resnet (v2)  16             1:1
35.0   Inception Resnet (v2)  16             2:1
35.64  Inception Resnet (v2)  8              1:1
Table 5: Summary of single models that were automatically selected to be part of the diverse ensemble. Loss ratio refers to the multipliers α, β for location and classification losses, respectively.
                                                 AP     AP@.50IOU  AP@.75IOU  APsmall  APmed  APlarge
Faster RCNN with Inception Resnet (v2)           0.347  0.555      0.367      0.135    0.381  0.52
Hand selected Faster RCNN ensemble w/multicrop   0.41   0.617      0.449      0.236    0.43   0.542
Diverse Faster RCNN ensemble w/multicrop         0.416  0.619      0.454      0.239    0.435  0.549
Table 6: Effects of ensembling and multicrop inference. Numbers reported on COCO test-dev dataset. Second row (hand selected ensemble) consists of 6 Faster RCNN models with 3 Resnet 101 (v1) and 3 Inception Resnet (v2) and the third row (diverse ensemble) is described in detail in Table 5.
the last image.
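The filtering behind these visualizations is just a score threshold followed by a top-k cut; a minimal sketch, with assumed array layouts, is:

```python
# Sketch: keep detections above a score threshold, then plot the top 20.
def filter_detections(boxes, scores, classes, threshold=0.5, top_k=20):
    """boxes: [N, 4] list; scores, classes: length-N lists (assumed layout).
    The figures use threshold .5 for Faster R-CNN / R-FCN and .3 for SSD."""
    keep = [i for i in range(len(scores)) if scores[i] > threshold]
    keep.sort(key=lambda i: scores[i], reverse=True)  # highest score first
    keep = keep[:top_k]
    return ([boxes[i] for i in keep],
            [scores[i] for i in keep],
            [classes[i] for i in keep])
```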
# 5. Conclusion

We have performed an experimental comparison of some of the main aspects that influence the speed and accuracy of modern object detectors. We hope this will help practitioners choose an appropriate method when deploying object detection in the real world. We have also identified some new techniques for improving speed without sacrificing much accuracy, such as using many fewer proposals than is usual for Faster R-CNN.

# Acknowledgements

We would like to thank the following people for their advice and support throughout this project: Tom Duerig, Dumitru Erhan, Jitendra Malik, George Papandreou, Dominik Roblek, Chuck Rosenberg, Nathan Silberman, Abhinav Srivastava, Rahul Sukthankar, Christian Szegedy, Jasper Uijlings, Jay Yagnik, Xiangxin Zhu.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. 4
(a) SSD+Mobilenet, lowres
(b) SSD+InceptionV2, lowres
(c) FRCNN+Resnet101, 100 proposals
(d) RFCN+Resnet101, 300 proposals
(e) FRCNN+IncResnetV2, 300 proposals
Figure 12: Example detections from 5 different models.
[2] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. arXiv preprint arXiv:1512.04143, 2015. 3, 5
[3] A. Canziani, A. Paszke, and E. Culurciello. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678, 2016. 1

[4] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011. 4

[5] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. arXiv preprint arXiv:1512.04412, 2015. 3, 5, 6

[6] J. Dai, Y. Li, K. He, and J. Sun. R-fcn: Object detection via region-based fully convolutional networks. arXiv preprint arXiv:1605.06409, 2016. 1, 2, 3, 4, 5, 6

[7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In Advances in neural information processing systems, pages 1223–1231, 2012. 4, 5

[8] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2147–2154, 2014. 2

[9] C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. Dssd: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659, 2017. 3
(a) SSD+Mobilenet, lowres
(b) SSD+InceptionV2, lowres
(c) FRCNN+Resnet101, 100 proposals
(d) RFCN+Resnet101, 300 proposals
(e) FRCNN+IncResnetV2, 300 proposals
Figure 13: Example detections from 5 different models.
[10] R. Girshick. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440–1448, 2015. 2, 3, 5
[11] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014. 2, 5
[12] K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra. Draw: A recurrent neural network for image generation. In Proceedings of The 32nd International Conference on Machine Learning, pages 1462–1471, 2015. 5
[13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. 3, 4, 5, 6, 7, 13
[14] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 4, 6, 7
(a) SSD+Mobilenet, lowres
(b) SSD+InceptionV2, lowres
(c) FRCNN+Resnet101, 100 proposals
(d) RFCN+Resnet101, 300 proposals
(e) FRCNN+IncResnetV2, 300 proposals
Figure 14: Example detections from 5 different models.
[15] P. J. Huber et al. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73â101, 1964. 5
[16] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 4, 6, 7

[17] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015. 5

[18] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, pages 675–678. ACM, 2014. 4

[19] K.-H. Kim, S. Hong, B. Roh, Y. Cheon, and M. Park. Pvanet: Deep but lightweight neural networks for real-time object detection. arXiv preprint arXiv:1608.08021, 2016. 3
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. 2
[21] S. Lee, S. Purushwalkam, M. Cogswell, D. Crandall, and D. Batra. Why M heads are better than one: Training a diverse ensemble of deep networks. 19 Nov. 2015. 13
[22] Y. Li, H. Qi, J. Dai, X. Ji, and W. Yichen. Translation- aware fully convolutional instance segmentation. https: //github.com/daijifeng001/TA-FCN, 2016. 3
(a) SSD+Mobilenet, lowres
(b) SSD+InceptionV2, lowres
(c) FRCNN+Resnet101, 100 proposals
(d) RFCN+Resnet101, 300 proposals
(e) FRCNN+IncResnetV2, 300 proposals
Figure 15: Example detections from 5 different models.
[23] T. Y. Lin and P. Dollar. Ms coco api. https://github.com/pdollar/coco, 2016. 5
[24] T.-Y. Lin, P. Doll´ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. arXiv preprint arXiv:1612.03144, 2016. 3
[25] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 1 May 2014. 1, 4
[26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.- Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European Conference on Computer Vision, pages 21â37. Springer, 2016. 1, 2, 3, 4, 5, 6, 7
[27] X. Pan. tfprof: A profiling tool for TensorFlow models. https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/tfprof, 2016. 6
[28] P. Poirson, P. Ammirato, C.-Y. Fu, W. Liu, J. Kosecka, and A. C. Berg. Fast single shot detection and pose estimation. arXiv preprint arXiv:1609.05590, 2016. 3
(a) SSD+Mobilenet, lowres
(b) SSD+InceptionV2, lowres
(c) FRCNN+Resnet101, 100 proposals
(d) RFCN+Resnet101, 300 proposals
(e) FRCNN+IncResnetV2, 300 proposals
Figure 16: Example detections from 5 different models.
[29] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Uniï¬ed, real-time object detection. arXiv preprint arXiv:1506.02640, 2015. 1, 3
[30] J. Redmon and A. Farhadi. Yolo9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242, 2016. 3
[31] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91â99, 2015. 1, 2, 3, 4, 5, 6
[32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. 4
[33] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013. 3
[34] A. Shrivastava and A. Gupta. Contextual priming and feedback for faster r-cnn. In European Conference on Computer Vision, pages 330–348. Springer, 2016. 3
[35] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. arXiv preprint arXiv:1604.03540, 2016. 3
[36] N. Silberman and S. Guadarrama. TF-Slim: A high level library to define complex models in TensorFlow. https://research.googleblog.com/2016/08/tf-slim-high-level-library-to-define.html, 2016. [Online; accessed 6-November-2016]. 4
(a) SSD+Mobilenet, lowres
(b) SSD+InceptionV2, lowres
(c) FRCNN+Resnet101, 100 proposals
(d) RFCN+Resnet101, 300 proposals
(e) FRCNN+IncResnetV2, 300 proposals
Figure 17: Example detections from 5 different models.
[37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 4, 6, 7

[38] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. 4, 6, 7

[39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. 3

[40] C. Szegedy, S. Reed, D. Erhan, and D. Anguelov. Scalable, high-quality object detection. arXiv preprint arXiv:1412.1441, 2014. 1, 2, 3

[41] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In Advances in Neural Information Processing Systems, pages 2553–2561, 2013. 2

[42] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. 4, 6
[43] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012. 5
[44] P. Viola and M. J. Jones. Robust real-time face detection. International journal of computer vision, 57(2):137â154, 2004. 2
[45] B. Yang, J. Yan, Z. Lei, and S. Z. Li. Craft objects from im- ages. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6043â6051, 2016. 3
[46] S. Zagoruyko, A. Lerer, T.-Y. Lin, P. O. Pinheiro, S. Gross, S. Chintala, and P. Doll´ar. A multipath network for object detection. arXiv preprint arXiv:1604.02135, 2016. 3
[47] A. Zhai, D. Kislyuk, Y. Jing, M. Feng, E. Tzeng, J. Donahue, Y. L. Du, and T. Darrell. Visual discovery at pinterest. arXiv preprint arXiv:1702.04680, 2017. 3
arXiv:1611.09823v3 [cs.AI] 13 Jan 2017 (http://arxiv.org/pdf/1611.09823)
# Under review as a conference paper at ICLR 2017
# DIALOGUE LEARNING WITH HUMAN-IN-THE-LOOP
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
Facebook AI Research, New York, USA
{jiwel,ahm,spchopra,ranzato,jase}@fb.com
# ABSTRACT
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
# 1 INTRODUCTION
A good conversational agent (which we sometimes refer to as a learner or bot1) should have the ability to learn from the online feedback from a teacher: adapting its model when making mistakes and reinforcing the model when the teacher's feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a fixed synthetic, domain-specific or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language utterance usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from fixed training sets of labeled data but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication (Bassiri, 2011; Werts et al., 1995), and not from labeled datasets, hence making this an important subject to study.
In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacher's (dialogue partner's) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following (Weston, 2016). We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk.
We explore important issues involved in online learning such as how a bot can be most efficiently trained using a minimal amount of teacher's feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learning with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching. Our findings indicate that it is feasible to build a pipeline that starts from a model trained with fixed data and then learns from interactions with humans to improve itself.
1In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher.
# 2 RELATED WORK
Reinforcement learning has been widely applied to dialogue, especially in slot filling to solve domain-specific tasks (Walker, 2000; Schatzmann et al., 2006; Singh et al., 2000; 2002). Efforts include Markov Decision Processes (MDPs) (Levin et al., 1997; 2000; Walker et al., 2003; Pieraccini et al., 2009), POMDP models (Young et al., 2010; 2013; Gašić et al., 2013; 2014) and policy learning (Su et al., 2016). Such a line of research focuses mainly on frames with slots to fill, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue utterances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward based setups via textual feedback.
Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues (Dodge et al., 2015; Weston, 2016), either given a database of knowledge (Bordes et al., 2015; Miller et al., 2016) or short texts (Weston et al., 2015; Hermann et al., 2015; Rajpurkar et al., 2016). In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just answering questions. Further, QA works only consider fixed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting.
Our work is closely related to a recent work from Weston (2016) that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacher's feedback, in addition to using reward-based learning of correct answers. However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with fixed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further.
The experiments in (Weston, 2016) involve constructing pre-built fixed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by fixing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets π_acc examples always correct (the paper looked at values 50%, 10% and 1%). Again, this was not learned, and was fixed to generate the datasets. Note that the paper refers to these answers as coming from "the learner" (which should be the model), but since the policy is fixed it actually does not depend on the model. In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, so their setting was not practically viable. In our work, when policy training is viewed as batch learning over iterations of the dataset, updating the policy on each iteration, (Weston, 2016) can be viewed as training only one iteration, whereas we perform multiple iterations. This is explained further in Sections 4.2 and 5.1. We show in our experiments that performance improves over the iterations, i.e. it is better than the first iteration. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (under certain conditions which are detailed). This is a key contribution of our work.
Finally, (Weston, 2016) only conducted experiments on synthetic or templated language, and not real language; in particular, the feedback from the teacher was scripted. While we believe that synthetic datasets are very important for developing understanding (hence we develop a simulator and conduct experiments also with synthetic data), for a new method to gain traction it must be shown to work on real data. We hence employ Mechanical Turk to collect real language data for the questions and importantly for the teacher feedback, and construct experiments in this real setting.
# 3 DATASET AND TASKS
We begin by describing the data setup we use. In our first set of experiments we build a simulator as a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk to provide real human teachers giving feedback.
# 3.1 SIMULATOR
The simulator adapts two existing fixed datasets to our online setting. Following Weston (2016), we use (i) the single supporting fact problem from the bAbI datasets (Weston et al., 2015) which consists of 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMovies dataset (Weston et al., 2015) which consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). Each dialogue takes place between a teacher, scripted by the simulation, and a bot. The communication protocol is as follows: (1) the teacher first asks a question from the fixed set of questions existing in the dataset, (2) the bot answers the question, and finally (3) the teacher gives feedback on the bot's answer.
We follow the paradigm defined in (Weston, 2016) where the teacher's feedback takes the form of either textual feedback, a numerical reward, or both, depending on the task. For each dataset, there are ten tasks, which are further described in Sec. A and illustrated in Figure 5 of the appendix. We also refer the readers to (Weston, 2016) for more detailed descriptions and the motivation behind these tasks. In the main text of this paper we only consider Task 6 ("partial feedback"): the teacher replies with positive textual feedback (6 possible templates) when the bot answers correctly, and positive reward is given only 50% of the time. When the bot is wrong, the teacher gives textual feedback containing the answer. Descriptions and experiments on the other tasks are detailed in the appendix. Example dialogues are given in Figure 1.
The difference between our simulation and the original fixed tasks of Weston (2016) is that models are trained on-the-fly. After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect teacher's feedback in the next episode or batch. This means the model's policy affects the data which is used to train it, which was not the case in the previous work.
Figure 1: Simulator sample dialogues for the bAbI task (left) and WikiMovies (right). We consider 10 different tasks following Weston (2016) but here describe only Task 6; other tasks are detailed in the appendix. The teacher's dialogue is in black and the bot is in red. (+) indicates receiving positive reward, given only 50% of the time even when correct.
bAbI Task 6: Partial Rewards
Mary went to the hallway.
John moved to the bathroom.
Mary travelled to the kitchen.
Where is Mary? kitchen
Yes, that's right!
Where is John? bathroom
Yes, that's correct! (+)

WikiMovies Task 6: Partial Rewards
What films are about Hawaii? 50 First Dates
Correct!
Who acted in Licence to Kill? Billy Madison
No, the answer is Timothy Dalton.
What genre is Saratoga Trunk in? Drama
Yes! (+)
...
Figure 2: Human Dialogue from Mechanical Turk (based on WikiMovies). The human teacher's dialogue is in black and the bot is in red. We show examples where the bot answers correctly (left) and incorrectly (right). Real humans provide more variability of language in both questions and textual feedback than in the simulator setup (cf. Figure 1).
Sample dialogues with correct answers from the bot:
Who wrote the Linguini Incident ?
Richard Shepard is one of the right answers here.
What year did The World Before Her premiere?
Yep! That's when it came out.
Which are the movie genres of Mystery of the 13th Guest?
Right, it can also be categorized as a mystery.

Sample dialogues with incorrect answers from the bot:
What are some movies about a supermarket ?
There were many options and this one was not among them.
Which are the genres of the film Juwanna Mann ?
That is incorrect. Remember the question asked for a genre not name.
Who wrote the story of movie Coraline ? fantasy
That's a movie genre and not the name of the writer. A better answer would of been Henry Selick or Neil Gaiman.
# 3.2 MECHANICAL TURK EXPERIMENTS
Finally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see fit. The exact instructions given to the Turkers are given in Appendix B. In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer or (iii) a hint, which are similar to setups defined in the simulator. However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure 2.
# 4 METHODS
# 4.1 MODEL ARCHITECTURE
In our experiments, we used variants of the End-to-End Memory Network (MemN2N) model (Sukhbaatar et al., 2015) as our underlying architecture for learning from dialogue.
The input to MemN2N is the last utterance of the dialogue history x as well as a set of memories (context) C = c_1, c_2, ..., c_N. The memory C encodes both short-term memory, e.g., dialogue histories between the bot and the teacher, and long-term memories, e.g., the knowledge base facts that the bot has access to. Given the input x and C, the goal is to produce an output/label a.
In the first step, the query x is transformed to a vector representation u_0 by summing up its constituent word embeddings: u_0 = Ax. The input x is a bag-of-words vector and A is the d × V word embedding matrix where d denotes the embedding dimension and V denotes the vocabulary size. Each memory c_i is similarly transformed to a vector m_i. The model will read information from the memory by comparing input representation u_0 with memory vectors m_i using softmax weights:
o_1 = Σ_i p^1_i m_i,   p^1_i = softmax(u_0^T m_i)   (1)
This process selects memories relevant to the last utterance x, i.e., the memories with large values of p^1_i. The returned memory vector o_1 is the weighted sum of memory vectors. This process can be repeated to query the memory N times (so called "hops") by adding on to the original input, u_1 = o_1 + u_0, or to the previous state, u_n = o_n + u_{n-1}, and then using u_n to query the memories again.
In the end, u_N is input to a softmax function for the final prediction:
â = softmax(u_N^T y_1, u_N^T y_2, ..., u_N^T y_L)   (2)

where y_1, ..., y_L denote the set of candidate answers. If the answer is a word, y_i is the corresponding word embedding. If the answer is a sentence, y_i is the embedding for the sentence achieved in the same way that we obtain embeddings for query x and memory C.
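The two equations above can be mirrored compactly in code. The following NumPy sketch is illustrative only: it uses bag-of-words inputs and, for simplicity, shares a single embedding matrix A across hops and memories, which is an assumption rather than the exact parameterization used in our experiments.

```python
# Sketch of a MemN2N forward pass (bag-of-words inputs, N hops).
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def memn2n_forward(A, x_bow, memory_bows, answer_embs, num_hops=3):
    """A: [d, V] word embedding matrix; x_bow: [V] bag-of-words query;
    memory_bows: list of [V] bag-of-words memories; answer_embs: [L, d]
    embeddings y_1..y_L of the candidate answers."""
    u = A @ x_bow                                # u_0 = A x
    m = np.stack([A @ c for c in memory_bows])   # memory vectors m_i
    for _ in range(num_hops):
        p = softmax(m @ u)                       # p_i = softmax(u^T m_i)
        o = p @ m                                # o = sum_i p_i m_i
        u = o + u                                # u_n = o_n + u_{n-1}
    return softmax(answer_embs @ u)              # softmax over u_N^T y_1..y_L
```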
The standard way MemN2N is trained is via a cross entropy criterion on known input-output pairs, which we refer to as supervised or imitation learning. As our work is in a reinforcement learning setup where our model must make predictions to learn, this procedure will not work, so we instead consider reinforcement learning algorithms which we describe next.
# 4.2 REINFORCEMENT LEARNING
In this section, we present the algorithms we used to train MemN2N in an online fashion. Our learn- ing setup can be cast as a particular form of Reinforcement Learning. The policy is implemented by the MemN2N model. The state is the dialogue history. The action space corresponds to the set of answers the MemN2N has to choose from to answer the teacherâs question. In our setting, the policy chooses only one action for each episode. The reward is either 1 (a reward from the teacher when the bot answers correctly) or 0 otherwise. Note that in our experiments, a reward equal to 0 might mean that the answer is incorrect or that the positive reward is simply missing. The overall setup is closest to standard contextual bandits, except that the reward is binary.
When working with real human dialogues, e.g. collecting data via Mechanical Turk, it is easier to set up a task whereby a bot is deployed to respond to a large batch of utterances, as opposed to a single one. The latter would be more difficult to manage and scale up since it would require some form of synchronization between the model replicas interacting with each human.
This is comparable to the real world situation where a teacher can either ask a student a single question and give feedback right away, or set up a test that contains many questions and grade all of them at once. Only after the learner completes all questions, it can hear feedback from the teacher.
We use batch size to refer to how many dialogue episodes the current model is used to collect feedback before updating its parameters. In the Reinforcement Learning literature, batch size is related to off-policy learning since the MemN2N policy is trained using episodes collected with a stale version of the model. Our experiments show that our model and base algorithms are very robust to the choice of batch size, alleviating the need for correction terms in the learning algorithm (Bottou et al., 2013).
We consider two strategies: (i) online batch size, whereby the target policy is updated after doing a single pass over each batch (a batch size of 1 reverts to the usual on-policy online learning); and (ii) dataset-sized batch, whereby training is continued to convergence on the batch which is the size of the dataset, and then the target policy is updated with the new model, and a new batch is drawn and the procedure iterates. These strategies can be applied to all the methods we use, described below.
Next, we discuss the learning algorithms we considered in this work.
# 4.2.1 REWARD-BASED IMITATION (RBI)
The simplest algorithm we first consider is the one employed in Weston (2016). RBI relies on positive rewards provided by the teacher. It is trained to imitate the correct behavior of the learner, i.e., learning to predict the correct answers (with reward 1) at training time and disregarding the other ones. This is implemented by using a MemN2N that maps a dialogue input to a prediction, i.e. using the cross entropy criterion on the positively rewarded subset of the data.
In order to make this work in the online setting, which requires exploration to find the correct answer, we employ an ε-greedy strategy: the learner makes a prediction using its own model (the answer assigned the highest probability) with probability 1 − ε, otherwise it picks a random answer with probability ε. The teacher will then give a reward of +1 if the answer is correct, otherwise 0. The bot will then learn to imitate the correct answers: predicting the correct answers while ignoring the incorrect ones.
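A minimal sketch of the ε-greedy choice (RBI then trains, via cross entropy, only on the answers that received reward +1):

```python
# Sketch: epsilon-greedy action selection over the model's answer distribution.
import numpy as np

def epsilon_greedy_answer(answer_probs, epsilon):
    """answer_probs: model probabilities over the candidate answers."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(answer_probs))  # explore: random answer
    return int(np.argmax(answer_probs))              # exploit: most likely answer
```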
# 4.2.2 REINFORCE
The second algorithm we use is the REINFORCE algorithm (Williams, 1992), which maximizes the expected cumulative reward of the episode, in our case the expected reward provided by the teacher. The expectation is approximated by sampling an answer from the model distribution. Let a denote the answer that the learner gives, p(a) denote the probability that current model assigns to a, r denote the teacherâs reward, and J(θ) denote the expectation of the reward. We have:
∇J(θ) ≈ ∇ log p(a) [r − b]   (3)
where b is the baseline value, which is estimated using a linear regression model that takes as input the output of the memory network after the last hop, and outputs a scalar b denoting the estimation of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward b and actual reward r, ||r − b||². We refer the readers to (Ranzato et al., 2015; Zaremba & Sutskever, 2015) for more details. The baseline estimator model is independent from the policy model, and its error is not backpropagated through the policy model.
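As a minimal sketch, the surrogate losses that an autodiff framework would backpropagate for Equation (3) and for the baseline regressor can be written as follows (the array names are assumptions of the sketch):

```python
# Sketch: REINFORCE surrogate losses for one batch of episodes.
import numpy as np

def reinforce_losses(log_probs, rewards, baselines):
    """log_probs: log p(a) of the sampled answers; rewards: 0/1 teacher
    rewards; baselines: scalar predictions b of the baseline regressor."""
    log_probs, rewards, baselines = map(np.asarray, (log_probs, rewards, baselines))
    advantage = rewards - baselines             # [r - b]; treated as a constant,
                                                # i.e. not backpropagated
    policy_loss = -(log_probs * advantage)      # minimizing this follows eq. (3)
    baseline_loss = (rewards - baselines) ** 2  # mean squared error ||r - b||^2
    return policy_loss.mean(), baseline_loss.mean()
```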
The major difference between RBI and REINFORCE is that (i) the learner only tries to imitate correct behavior in RBI while in REINFORCE it also leverages the incorrect behavior, and (ii) the learner explores using an ε-greedy strategy in RBI while in REINFORCE it uses the distribution over actions produced by the model itself.
# 4.2.3 FORWARD PREDICTION (FP)
FP (Weston, 2016) handles the situation where a numerical reward for a bot's answer is not available, meaning that there are no +1 or 0 labels available after a student's utterance. Instead, the model assumes the teacher gives textual feedback t to the bot's answer, taking the form of a dialogue utterance, and the model tries to predict this instead. Suppose that x denotes the teacher's question and C = c_1, c_2, ..., c_N denotes the dialogue history as before. In FP, the model first maps the teacher's initial question x and dialogue history C to a vector representation u using a memory network with multiple hops. Then the model will perform another hop of attention over all possible student's answers in A, with an additional part that incorporates the information of which candidate (i.e., a) was actually selected in the dialogue:
p_â = softmax(u^T y_â),   o = Σ_{â∈A} p_â (y_â + β · 1[â = a])   (4)
where y_â denotes the vector representation for the student's answer candidate â. β is a (learned) d-dimensional vector to signify the actual action a that the student chooses. o is then combined with u to predict the teacher's feedback t using a softmax:

u_1 = o + u,   t = softmax(u_1^T x_{r_1}, ..., u_1^T x_{r_N})   (5)

where x_{r_i} denotes the embedding for the i-th response. In the online setting, the teacher will give textual feedback, and the learner needs to update its model using the feedback. It was shown in Weston (2016) that in an off-line setting this procedure can work either on its own, or in conjunction with a method that uses numerical rewards as well for improved performance. In the online setting, we consider two simple extensions:
⢠ε-greedy exploration: with probability ε the student will give a random answer, and with probability 1 − ε it will give the answer that its model assigns the largest probability. This method enables the model to explore the space of actions and to potentially discover correct answers.
⢠data balancing: cluster the set of teacher responses t and then balance training across the clusters equally (a minimal sampling sketch is given after this list).2 This is a type of experience replay (Mnih et al., 2013) but sampling with an evened distribution. Balancing stops part of the distribution dominating the learning. For example, if the model is not exposed to sufficient positive and negative feedback, and one class overly dominates, the learning process degenerates to a model that always predicts the same output regardless of its input.
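The balanced sampling step can be sketched as follows. The clustering function is left abstract and is an assumption of the sketch; for templated feedback it reduces to grouping episodes by their feedback template.

```python
# Sketch: sample a training batch evenly across teacher-response clusters.
import random
from collections import defaultdict

def balanced_batch(examples, cluster_of, batch_size):
    """examples: stored (context, answer, feedback) episodes; cluster_of:
    function mapping an episode to its feedback cluster id."""
    buckets = defaultdict(list)
    for ex in examples:
        buckets[cluster_of(ex)].append(ex)
    clusters = list(buckets.values())
    batch = []
    while len(batch) < batch_size:
        bucket = random.choice(clusters)     # pick a response cluster uniformly
        batch.append(random.choice(bucket))  # then a stored episode within it
    return batch
```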
# 5 EXPERIMENTS
Experiments are first conducted using our simulator, and then using Amazon Mechanical Turk with real human subjects taking the role of the teacher.3
# 5.1 SIMULATOR
Online Experiments In our first experiments, we considered both the bAbI and WikiMovies tasks and varied batch size, random exploration rate ε, and type of model. Figures 3 and 4 show (Task 6) results on bAbI and WikiMovies. Other tasks yield similar conclusions and are reported in the appendix.
Overall, we obtain the following conclusions:
⢠In general RBI and FP do work in a reinforcement learning setting, but can perform better with random exploration.

⢠In particular RBI can fail without exploration. RBI needs random noise for exploring labels, otherwise it can get stuck predicting a subset of labels and fail.
2In the simulated data, because the responses are templates, this can be implemented by first randomly sampling the response, and then randomly sampling a story with that response; we keep the history of all stories seen from which we sample. For real data slightly more sophisticated clustering should be used.
3 Code and data are available at https://github.com/facebook/MemNN/tree/master/HITL.
Figure 3: Training epoch vs. test accuracy for bAbI (Task 6) varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP). Performance is largely independent of batch size, and RBI performs similarly to REINFORCE. Note that supervised, rather than reinforcement learning, with gold standard labels achieves 100% accuracy on this task.
⢠REINFORCE obtains similar performance to RBI with optimal ε.

⢠FP with balancing or with exploration via ε both outperform FP alone.

⢠For both RBI and FP, performance is largely independent of online batch size.
Dataset Batch Size Experiments Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size and for each batch training is completed to convergence.
[Figure 4 image: four panels titled Random Exploration for RBI; Random Exploration for FP; RBI (ε=0.5) Varying Batch Size; Comparing RBI, FP and REINFORCE. Axes: training epoch vs. accuracy; batch-size curves for 32, 320, 3200, 32000, and the full dataset.]
Figure 4: WikiMovies: Training epoch vs. test accuracy on Task 6 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP with ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably. Note that supervised, rather than reinforcement learning, with gold standard labels achieves 80% accuracy on this task.
After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated. Table 1 reports test accuracy at each iteration of training, using the bAbI Task 6 as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting:
• RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the first batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels.
• FP is not stable in this setting. This is because once the model gets very good at making predictions (at the third iteration), it is not exposed to a sufficient number of negative responses anymore. From that point on, learning degenerates and performance drops as the model always predicts the same responses. At the next iteration, it will recover again since it has a more balanced training set, but then it will collapse again in an oscillating behavior.
• FP does work if extended with balancing or random exploration with sufficiently large ε.
• RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing.
Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments.
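The dataset-batch procedure just described is a simple outer loop. Below is a minimal sketch, where `collect_batch`, `train_to_convergence`, and `evaluate` are hypothetical helpers standing in for the deployment, training (RBI, FP, or RBI+FP), and test steps; none of these names come from the released code.

```python
def iterative_batch_training(model, teacher, n_iterations=6):
    """Alternate between deploying the current policy to gather a full
    dataset-sized batch of dialogues and retraining on all data so far."""
    collected = []
    for it in range(n_iterations):
        # The deployed bot answers questions; the teacher returns textual
        # feedback and (possibly sparse) numerical rewards.
        batch = collect_batch(model, teacher)
        collected.extend(batch)
        train_to_convergence(model, collected)  # RBI, FP, or RBI+FP updates
        print(f"iteration {it + 1}: test accuracy = {evaluate(model):.2f}")
    return model
```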
Iteration                       | 1    | 2    | 3    | 4    | 5    | 6
Imitation Learning              | 0.24 | 0.23 | 0.23 | 0.22 | 0.23 | 0.23
Reward Based Imitation (RBI)    | 0.74 | 0.87 | 0.90 | 0.96 | 0.96 | 0.98
Forward Pred. (FP)              | 0.99 | 0.96 | 1.00 | 0.30 | 1.00 | 0.29
RBI+FP                          | 0.99 | 0.96 | 0.97 | 0.95 | 0.94 | 0.97
FP (balanced)                   | 0.99 | 0.97 | 0.97 | 0.97 | 0.97 | 0.97
FP (rand. exploration ε = 0.25) | 0.96 | 0.88 | 0.94 | 0.26 | 0.64 | 0.99
FP (rand. exploration ε = 0.5)  | 0.98 | 0.98 | 0.99 | 0.98 | 0.95 | 0.99
Table 1: Test accuracy of various models per iteration in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 6. Results > 0.95 are in bold.
Relation to experiments in Weston (2016) As described in detail in Section 2, the datasets we use in our experiments were introduced in (Weston et al., 2015). However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner in a reinforcement/interactive learning setting using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets πacc of examples correct (the paper looked at values 1%, 10% and 50%). In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our learnt policies to those results because we use the same train/valid/test splits.
The clearest comparison is via Table 1, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. Weston et al. (2015) can be viewed as training only one iteration, with a pre-built policy, as explained above, where 59%, 81% and 99% accuracy was obtained for RBI for πacc of 1%, 10% and 50% respectively.4 While a πacc of 50% is good enough to solve the task, lower values are not. In this work a random policy begins with 74% accuracy on the first iteration, but importantly on each iteration the policy is updated and improves, with values of 87% and 90% on iterations 2 and 3 respectively, and 98% on iteration 6. This is a key differentiator to the work of (Weston et al., 2015), where such improvement was not shown. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of πacc from Weston et al. (2015) unless πacc is so large that the task is already solved. This is a key contribution of our work.
Similar conclusions can be made for Figures 3 and 4. Despite our initial random policy starting at close to 0% accuracy, if random exploration ε > 0.2 is employed then after a number of epochs the performance is better than most values of πacc from Weston et al. (2015); e.g., compare the accuracies given in the previous paragraph (59%, 81% and 99%) to Figure 3, top left.
5.2 HUMAN FEEDBACK
We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section 3.2. Our experimental protocol was as follows. We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers and using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the responses of the bot for an additional 10,000 questions. Examples from the collected dataset are given in Figure 2. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction r of additional examples have rewards. The models are tested on a test set of ~8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note this is a harder task than the WikiMovies task in the simulator due to the use of natural language from Turkers, hence lower test performance is expected.
4Note, this is not the same as a randomly initialized neural network policy, because, due to the synthetic construction with an omniscient labeler, the labels will be balanced. In our work, we learn the policy from randomly initialized weights which are updated as we learn the policy.
Results are given in Table 2. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback, while RBI can only use the initial 1000 examples when r = 0. As FP does not use numerical rewards at all, it is invariant to the parameter r. The combination of FP and RBI outperforms either alone.
Model                        | r = 0 | r = 0.1 | r = 0.5 | r = 1
Reward Based Imitation (RBI) | 0.333 | 0.340   | 0.365   | 0.375
Forward Prediction (FP)      | 0.358 | 0.358   | 0.358   | 0.358
RBI+FP                       | 0.431 | 0.438   | 0.443   | 0.441
Table 2: Incorporating Feedback From Humans via Mechanical Turk. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (fraction r of examples have rewards). Forward Prediction and Reward-based Imitation are both useful, with their combination performing best.
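One way to read the RBI+FP combination is as a single objective mixing an imitation term, applied only when a positive reward is observed, with a feedback-prediction term that needs no reward at all. The sketch below is our own rendering of that idea; the `alpha` weight and the `fp_loss`/`imitation_loss` helpers are assumptions, not the paper's released implementation.

```python
def rbi_fp_loss(model, ex, alpha=0.5):
    """Combined update for one example, sketched.
    FP:  predict the teacher's textual feedback given (context, answer);
         it ignores numerical rewards entirely (hence the invariance to r).
    RBI: imitate the bot's own answer only when it earned positive reward."""
    loss = alpha * fp_loss(model, ex.context, ex.answer, ex.teacher_feedback)
    if ex.reward is not None and ex.reward > 0:  # rewards exist for a fraction r
        loss += (1 - alpha) * imitation_loss(model, ex.context, ex.answer)
    return loss
```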
We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case, which are given in Appendix C.1. They show that the results with human feedback are competitive with these approaches.
# 6 CONCLUSION
We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account. Secondly, we showed for the first time that the recently introduced FP method can work in both an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup.
# REFERENCES
Mohammad Amin Bassiri. Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14:3207–3260, 2013.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.

Milica Gašić, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. Pomdp-based dialogue manager adaptation to extended domains. In Proceedings of SIGDIAL, 2013.

Milica Gašić, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young. Incremental on-line adaptation of pomdp-based dialogue managers to extended domains. In Proceedings of InterSpeech, 2014.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. Learning dialogue strategies within the Markov decision process framework. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pp. 72–79. IEEE, 1997.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–23, 2000.

Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. Are we there yet? Research in commercial spoken dialog systems. In International Conference on Text, Speech and Dialogue, pp. 3–13. Springer, 2009.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.

Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, 21(02):97–126, 2006.

Satinder Singh, Michael Kearns, Diane J Litman, Marilyn A Walker, et al. Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pp. 645–651, 2000.

Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, 16:105–133, 2002.

Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440–2448, 2015.

Marilyn A. Walker. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research, 12:387–416, 2000.

Marilyn A Walker, Rashmi Prasad, and Amanda Stent. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH, 2003.

Margaret G Werts, Mark Wolery, Ariane Holcombe, and David L Gast. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55–75, 1995.
Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150–174, 2010.
Steve Young, Milica Gašić, Blaise Thomson, and Jason D Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 362, 2015.
# A FURTHER SIMULATOR TASK DETAILS
The tasks in Weston (2016) were specifically:
- Task 1: The teacher tells the student exactly what they should have said (supervised baseline).
- Task 2: The teacher replies with positive textual feedback and reward, or negative textual feedback.
- Task 3: The teacher gives textual feedback containing the answer when the bot is wrong.
- Task 4: The teacher provides a hint by providing the class of the correct answer, e.g., "No it's a movie" for the question "which movie did Forest Gump star in?".
- Task 5: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.
- Task 6: The teacher gives positive reward only 50% of the time.
- Task 7: Rewards are missing and the teacher only gives natural language feedback.
- Task 8: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.
- Task 9: The bot asks questions of the teacher about what it has done wrong.
- Task 10: The bot will receive a hint rather than the correct answer after asking for help.
We refer the readers to (Weston, 2016) for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix.
# B INSTRUCTIONS GIVEN TO TURKERS
These are the instructions given for the textual feedback Mechanical Turk task (we also constructed a separate task to collect the initial questions, not described here):
Title: Write brief responses to given dialogue exchanges (about 15 min)
Description: Write a brief response to a studentâs answer to a teacherâs question, providing feedback to the student on their answer.
Instructions:
Each task consists of the following triplets:
1. a question by the teacher
2. the correct answer(s) to the question (separated by "OR")
3. a proposed answer in reply to the question from the student
Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not.
For example, given 1) question: "what is a color in the united states flag?"; 2) correct answer: "white, blue, red"; 3) student reply: "red", your response could be something like "that's right!"; for 3) reply: "green", you might say "no that's not right" or "nope, a correct answer is actually white".

Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT.

Avoid naming the student or addressing "the class" directly.
We will consider bonuses for higher quality responses during review.
Context. T: Which movie did Tom Hanks star in? Left column (student correct): S: Forrest Gump. Right column (student incorrect): S: Brad Pitt.
Task 1: Imitating an Expert Student. Correct: S: Forrest Gump. T: (no response). Incorrect: S: Forrest Gump. T: (no response).
Task 2: Positive and Negative Feedback. Correct: T: Yes, that's right! (+). Incorrect: T: No, that's incorrect!
Task 3: Answers Supplied by Teacher. Correct: T: Yes, that is correct. (+). Incorrect: T: No, the answer is Forrest Gump!
Task 4: Hints Supplied by Teacher. Correct: T: Correct! (+). Incorrect: T: No, it's a movie!
Task 5: Supporting Facts Supplied by Teacher. Correct: T: That's right. (+). Incorrect: T: No, because Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise!
Task 6: Partial Feedback. Correct: if random(0,1)<0.5 then T: That's correct. (+) else T: That's correct. Incorrect: T: Sorry, wrong.
Task 7: No Feedback. Correct: T: Yes. Incorrect: T: No.
Task 8: Imitation and Feedback Mixture. Correct: if random(0,1)<0.5 then T: Yes, that's right! (+) else S: Forrest Gump. Incorrect: if random(0,1)<0.5 then T: Wrong. else T: (no response).
Task 9: Asking For Corrections. Correct: T: Correct! (+). Incorrect: T: No, that's wrong. S: Can you help me? T: Forrest Gump!
Task 10: Asking For Supporting Facts. Correct: T: Yes, that's right! (+). Incorrect: T: Sorry, that's not it. S: Can you help me? T: A relevant fact is that Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise!
Figure 5: The ten tasks our simulator implements, which evaluate different forms of teacher response and binary feedback. In each case the same example from WikiMovies is given for simplicity, where the student answered correctly for all tasks (left) or incorrectly (right). Red text denotes responses by the bot with S denoting the bot. Blue text is spoken by the teacher with T denoting the teacher's response. For imitation learning the teacher provides the response the student should say, denoted with S in Tasks 1 and 8. A (+) denotes a positive reward.
C ADDITIONAL EXPERIMENTS
Iteration                       | 1    | 2    | 3    | 4    | 5    | 6
Imitation Learning              | 0.24 | 0.23 | 0.23 | 0.23 | 0.25 | 0.25
Reward Based Imitation (RBI)    | 0.95 | 0.99 | 0.99 | 0.99 | 1.00 | 1.00
Forward Pred. (FP)              | 1.00 | 0.19 | 0.86 | 0.30 | 0.99 | 0.22
RBI+FP                          | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
FP (balanced)                   | 0.99 | 0.97 | 0.98 | 0.98 | 0.96 | 0.97
FP (rand. exploration ε = 0.25) | 0.99 | 0.91 | 0.93 | 0.88 | 0.94 | 0.94
FP (rand. exploration ε = 0.5)  | 0.98 | 0.93 | 0.97 | 0.96 | 0.95 | 0.97

Table 3: Test accuracy of various models in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 3. Results > 0.95 are in bold.
[Figure 6 image: panels titled Random Exploration for RBI; Random Exploration for FP; Comparing RBI, FP and REINFORCE; RBI (ε=0.6) Varying Batch Size; FP (ε=0.6) Varying Batch Size. Axes: training epoch vs. accuracy; batch-size curves for 20, 80, 320, and 1000.]
Figure 6: Training epoch vs. test accuracy for bAbI (Task 2) varying exploration ε and batch size.
[Figure 7 image: six panels titled Random Exploration for RBI; Random Exploration for FP; Random Exploration for FP with Balancing; Comparing RBI, FP and REINFORCE; RBI (ε=0.6) Varying Batch Size; FP (ε=0.6) Varying Batch Size. Axes: training epoch vs. accuracy; batch-size curves for 20, 80, 320, and 1000.]
Figure 7: Training epoch vs. test accuracy for bAbI (Task 3) varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP).
[Figure 8 image: six panels titled Random Exploration for RBI; Random Exploration for FP; Random Exploration for FP with Balancing; Comparing RBI, FP and REINFORCE; RBI (ε=0.6) Varying Batch Size; FP (ε=0.6) Varying Batch Size. Axes: training epoch vs. accuracy; batch-size curves for 20, 80, 320, and 1000.]
Figure 8: Training epoch vs. test accuracy for bAbI (Task 4) varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP).
[Figure 9 image: four panels titled Random Exploration for RBI; Random Exploration for FP; RBI (ε=0.5) Varying Batch Size; Comparing RBI, FP and REINFORCE. Axes: training epoch vs. accuracy; batch-size curves for 32, 320, 3200, 32000, and the full dataset.]
Figure 9: WikiMovies: Training epoch vs. test accuracy on Task 2 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.
[Figure 10 image: four panels titled Random Exploration for RBI; Random Exploration for FP; RBI (ε=0.5) Varying Batch Size; Comparing RBI, FP and REINFORCE. Axes: training epoch vs. accuracy; batch-size curves for 32, 320, 3200, 32000, and the full dataset.]
Figure 10: WikiMovies: Training epoch vs. test accuracy on Task 3 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.
[Figure 11 image: four panels titled Random Exploration for RBI; Random Exploration for FP; RBI (ε=0.5) Varying Batch Size; Comparing RBI, FP and REINFORCE. Axes: training epoch vs. accuracy; batch-size curves for 32, 320, 3200, 32000, and the full dataset.]
Figure 11: WikiMovies: Training epoch vs. test accuracy on Task 4 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.
[Figure 12 image: four panels, each titled FP (ε=0.5) Varying Batch Size, for Tasks 2, 3, 4 and 6. Axes: training epoch vs. accuracy; batch-size curves for 32, 320, 3200, 32000, and the full dataset.]
Figure 12: WikiMovies: Training epoch vs. test accuracy with varying batch size for FP on Task 2 (top left panel), 3 (top right panel), 4 (bottom left panel) and 6 (bottom right panel) setting ε = 0.5. The model is robust to the choice of batch size.
C.1 ADDITIONAL EXPERIMENTS FOR MECHANICAL TURK SETUP
In Section 5.2 we conducted experiments with real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by teacher) or a mix (Task 2+3) where we use one or the other for each example (50% chance of each). The latter makes the synthetic data have a mixed setup of responses, which more closely mimics the real data case. The results are given in Table 4. The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the difficulty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data.
For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different sized training sets of Turker-authored questions with gold annotated labels (so, there are no numerical rewards or textual feedback; this is a pure supervised setting). The results are given in Table 5. They indicate that RBI+FP and even FP alone get close to the performance of fully supervised learning.
Model                                        | r = 0 | r = 0.1 | r = 0.5 | r = 1
Reward Based Imitation (RBI)                 | 0.333 | 0.340   | 0.365   | 0.375
Forward Prediction (FP) [real]               | 0.358 | 0.358   | 0.358   | 0.358
RBI+FP [real]                                | 0.431 | 0.438   | 0.443   | 0.441
Forward Prediction (FP) [synthetic Task 2]   | 0.188 | 0.188   | 0.188   | 0.188
Forward Prediction (FP) [synthetic Task 2+3] | 0.328 | 0.328   | 0.328   | 0.328
Forward Prediction (FP) [synthetic Task 3]   | 0.361 | 0.361   | 0.361   | 0.361
RBI+FP [synthetic Task 2]                    | 0.382 | 0.383   | 0.407   | 0.408
RBI+FP [synthetic Task 2+3]                  | 0.459 | 0.465   | 0.464   | 0.478
RBI+FP [synthetic Task 3]                    | 0.473 | 0.486   | 0.490   | 0.494
Table 4: Incorporating Feedback From Humans via Mechanical Turk: comparing real human feedback to synthetic feedback. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (fraction r of examples have rewards). We compare real feedback (rows 2 and 3) to synthetic feedback when using FP or RBI+FP (rows 4 and 5).
Train data size   | 1k    | 5k    | 10k   | 20k   | 60k
Supervised MemN2N | 0.333 | 0.429 | 0.476 | 0.526 | 0.599
# Table 5: Fully Supervised (Imitation Learning) Results on Human Questions
         | r = 0 | r = 0.1 | r = 0.5 | r = 1
ε = 0    | 0.499 | 0.502   | 0.501   | 0.502
ε = 0.1  | 0.494 | 0.496   | 0.501   | 0.502
ε = 0.25 | 0.493 | 0.495   | 0.496   | 0.499
ε = 0.5  | 0.501 | 0.499   | 0.501   | 0.504
ε = 1    | 0.497 | 0.497   | 0.498   | 0.497

Table 6: Second Iteration of Feedback. Using synthetic textual feedback of synthetic Task 2+3 with the RBI+FP method, an additional iteration of data collection of 10k examples, varying sparse binary reward fraction r and exploration ε. The performance of the first iteration model was 0.478.
C.2 SECOND ITERATION OF FEEDBACK
We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. We selected the best model from the previous training, using RBI+FP with r = 1, which previously gave a test accuracy of 0.478 (see Table 4). Using that model as a predictor, we collected an additional 10,000 training examples.
We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying r on the additional collected set. We also report the performance from varying ε, the proportion of random exploration of predictions on the new set. The results are reported in Table 6. Overall, performance is improved in the second iteration, with slightly better performance for large r and ε = 0.5. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case.
# NEWSQA: A MACHINE COMPREHENSION DATASET
Adam Trischler∗ Tong Wang∗ Xingdi Yuan∗ Justin Harris Alessandro Sordoni Philip Bachman Kaheer Suleman
{adam.trischler, tong.wang, eric.yuan, justin.harris, alessandro.sordoni, phil.bachman, k.suleman}@maluuba.com
# Maluuba Research Montréal, Québec, Canada
# ABSTRACT
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
# 1 INTRODUCTION
Almost all human knowledge is recorded in the medium of text. As such, comprehension of written language by machines, at a near-human level, would enable a broad class of artificial intelligence applications. In human students we evaluate reading comprehension by posing questions based on a text passage and then assessing a student's answers. Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to causal reasoning to inference (Richardson et al., 2013). To teach literacy to machines, the research community has taken a similar approach with machine comprehension (MC).
Recent years have seen the release of a host of MC datasets. Generally, these consist of (document, question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size, difficulty, and collection methodology; however, as pointed out by Rajpurkar et al. (2016), most suffer from one of two shortcomings: those that are designed explicitly to test comprehension (Richardson et al., 2013) are too small for training data-intensive deep learning models, while those that are sufficiently large for deep learning (Hermann et al., 2015; Hill et al., 2016; Bajgar et al., 2016) are generated synthetically, yielding questions that are not posed in natural language and that may not test comprehension directly (Chen et al., 2016). More recently, Rajpurkar et al. (2016) sought to overcome these deficiencies with their crowdsourced dataset, SQuAD.
Here we present a challenging new large-scale dataset for machine comprehension: NewsQA. NewsQA contains 119,633 natural language questions posed by crowdworkers on 12,744 news articles from CNN. Answers to these questions consist of spans of text within the corresponding article highlighted also by crowdworkers. To build NewsQA we utilized a four-stage collection process designed to encourage exploratory, curiosity-based questions that reflect human information seeking. CNN articles were chosen as the source material because they have been used in the past (Hermann et al., 2015) and, in our view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news.
∗These three authors contributed equally.
As Trischler et al. (2016a), Chen et al. (2016), and others have argued, it is important for datasets to be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in line with Richardson et al. (2013), our goal with NewsQA was to construct a corpus of questions that necessitates reasoning-like behaviors, for example, synthesis of information across different parts of an article. We designed our collection methodology explicitly to capture such questions.
1. Answers are spans of arbitrary length within an article, rather than single words or entities.
2. Some questions have no answer in the corresponding article (the null span).
3. There are no candidate answers from which to choose.
4. Our collection process encourages lexical and syntactic divergence between questions and answers.
5. A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis).
Some of these characteristics are present also in SQuAD, the MC dataset most similar to NewsQA. However, we demonstrate through several metrics that NewsQA offers a greater challenge to existing models.
In this paper we describe the collection methodology for NewsQA, provide a variety of statistics to characterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measure human performance and compare it to that of two strong neural-network baselines. Humans significantly outperform powerful question-answering models. This suggests there is room for improvement through further advances in machine comprehension research.
# 2 RELATED DATASETS
NewsQA follows in the tradition of several recent comprehension datasets. These vary in size, difficulty, and collection methodology, and each has its own distinguishing characteristics. We agree with Bajgar et al. (2016) who have said "models could certainly benefit from as diverse a collection of datasets as possible." We discuss this collection below.
# 2.1 MCTEST
MCTest (Richardson et al., 2013) is a crowdsourced collection of 660 elementary-level children's stories with associated questions and answers. The stories are fictional, to ensure that the answer must be found in the text itself, and carefully limited to what a young child can understand. Each question comes with a set of 4 candidate answers that range from single words to full explanatory sentences. The questions are designed to require rudimentary reasoning and synthesis of information across sentences, making the dataset quite challenging. This is compounded by the dataset's size, which limits the training of expressive statistical models. Nevertheless, recent comprehension models have performed well on MCTest (Sachan et al., 2015; Wang et al., 2015), including a highly structured neural model (Trischler et al., 2016a). These models all rely on access to the small set of candidate answers, a crutch that NewsQA does not provide.
2.2 CNN/DAILY MAIL
The CNN/Daily Mail corpus (Hermann et al., 2015) consists of news articles scraped from those outlets with corresponding cloze-style questions. Cloze questions are constructed synthetically by deleting a single entity from abstractive summary points that accompany each article (written presumably by human authors). As such, determining the correct answer relies mostly on recognizing textual entailment between the article and the question. The named entities within an article are identified and anonymized in a preprocessing step and constitute the set of candidate answers; contrast this with NewsQA in which answers often include longer phrases and no candidates are given.
Because the cloze process is automatic, it is straightforward to collect a significant amount of data to support deep-learning approaches: CNN/Daily Mail contains about 1.4 million question-answer
pairs. However, Chen et al. (2016) demonstrated that the task requires only limited reasoning and, in fact, performance of the strongest models (Kadlec et al., 2016; Trischler et al., 2016b; Sordoni et al., 2016) nearly matches that of humans.
2.3 CHILDREN'S BOOK TEST
The Children's Book Test (CBT) (Hill et al., 2016) was collected using a process similar to that of CNN/Daily Mail. Text passages are 20-sentence excerpts from children's books available through Project Gutenberg; questions are generated by deleting a single word in the next (i.e., 21st) sentence. Consequently, CBT evaluates word prediction based on context. It is a comprehension task insofar as comprehension is likely necessary for this prediction, but comprehension may be insufficient and other mechanisms may be more important.
2.4 BOOKTEST
Bajgar et al. (2016) convincingly argue that, because existing datasets are not large enough, we have yet to reach the full capacity of existing comprehension models. As a remedy they present BookTest. This is an extension to the named-entity and common-noun strata of CBT that increases their size by over 60 times. Bajgar et al. (2016) demonstrate that training on the augmented dataset yields a model (Kadlec et al., 2016) that matches human performance on CBT. This is impressive and suggests that much is to be gained from more data, but we repeat our concerns about the relevance of story prediction as a comprehension task. We also wish to encourage more efficient learning from less data.
# 2.5 SQUAD
The comprehension dataset most closely related to NewsQA is SQuAD (Rajpurkar et al., 2016). It consists of natural language questions posed by crowdworkers on paragraphs from high-PageRank Wikipedia articles. As in NewsQA, each answer consists of a span of text from the related paragraph and no candidates are provided. Despite the effort of manual labelling, SQuAD's size is significant and amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles.
Although SQuAD is a more realistic and more challenging comprehension task than the other large-scale MC datasets, machine performance has rapidly improved towards that of humans in recent months. The SQuAD authors measured human accuracy at 0.905 in F1 (we measured human F1 at 0.807 using a different methodology); at the time of writing, the strongest published model to date achieves 0.778 F1 (Wang et al., 2016). This suggests that new, more difficult alternatives like NewsQA could further push the development of more intelligent MC systems.
# 3 COLLECTION METHODOLOGY
We collected NewsQA through a four-stage process: article curation, question sourcing, answer sourcing, and validation. We also applied a post-processing step with answer agreement consolidation and span merging to enhance the usability of the dataset. These steps are detailed below.
3.1 ARTICLE CURATION
We retrieve articles from CNN using the script created by Hermann et al. (2015) for CNN/Daily Mail. From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover a wide range of topics that includes politics, economics, and current events. Articles are partitioned at random into a training set (90%), a development set (5%), and a test set (5%).
3.2 QUESTION SOURCING
It was important to us to collect challenging questions that could not be answered using straightforward word- or context-matching. Like Richardson et al. (2013) we want to encourage reasoning in comprehension models. We are also interested in questions that, in some sense, model human curiosity and reflect actual human use-cases of information seeking. Along a similar line, we consider it an important (though as yet overlooked) capacity of a comprehension model to recognize when
given information is inadequate, so we are also interested in questions that may not have sufficient evidence in the text. Our question sourcing stage was designed to solicit questions of this nature, and deliberately separated from the answer sourcing stage for the same reason.
Questioners (a distinct set of crowdworkers) see only a news article's headline and its summary points (also available from CNN); they do not see the full article itself. They are asked to formulate a question from this incomplete information. This encourages curiosity about the contents of the full article and prevents questions that are simple reformulations of sentences in the text. It also increases the likelihood of questions whose answers do not exist in the text. We reject questions that have significant word overlap with the summary points to ensure that crowdworkers do not treat the summaries as mini-articles, and further discouraged this in the instructions. During collection each Questioner is solicited for up to three questions about an article. They are provided with positive and negative examples to prompt and guide them (detailed instructions are shown in Figure 3).
# 3.3 ANSWER SOURCING
A second set of crowdworkers (Answerers) provide answers. Although this separation of question and answer increases the overall cognitive load, we hypothesized that unburdening Questioners in this way would encourage more complex questions. Answerers receive a full article along with a crowdsourced question and are tasked with determining the answer. They may also reject the question as nonsensical, or select the null answer if the article contains insufficient information. Answers are submitted by clicking on and highlighting words in the article, while instructions encourage the set of answer words to consist of a single continuous span (again, we give an example prompt in the Appendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with the aim of achieving agreement between at least two Answerers.
3.4 VALIDATION
Crowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested or malicious workers). To obtain a dataset of the highest possible quality we use a validation process that mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, a question, and the set of unique answers to that question. We task these workers with choosing the best answer from the candidate set or rejecting all answers. Each article-question pair is validated by an average of 2.48 crowdworkers. Validation was used on those questions without answer-agreement after the previous stage, amounting to 43.2% of all questions.
3.5 ANSWER MARKING AND CLEANUP
After validation, 86.0% of all questions in NewsQA have answers agreed upon by at least two separate crowdworkers, either at the initial answer sourcing stage or in the top-answer selection. This improves the dataset's quality. We choose to include the questions without agreed answers in the corpus also, but they are specially marked. Such questions could be treated as having the null answer and used to train models that are aware of poorly posed questions.
As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation is discounted). We find that 5.68% of answers consist of multiple spans, while 71.3% of multi-spans are within the 3-word threshold. Looking more closely at the data reveals that the multi-span answers often represent lists. These may present an interesting challenge for comprehension models moving forward.
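As an illustration, this merging step can be implemented as a single pass over sorted spans. The sketch below is our own, assuming spans are given as (start, end) word indices with punctuation already discounted.

```python
def merge_close_spans(spans, max_gap=3):
    """Merge answer spans separated by fewer than `max_gap` words.
    `spans` is a list of (start, end) word-index pairs, end exclusive."""
    merged = []
    for start, end in sorted(spans):
        # If this span starts within max_gap words of the previous span's
        # end, extend the previous span instead of starting a new one.
        if merged and start - merged[-1][1] < max_gap:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# e.g. merge_close_spans([(4, 6), (7, 9), (20, 22)]) -> [(4, 9), (20, 22)]
```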
# 4 DATA ANALYSIS
We provide a thorough analysis of NewsQA to demonstrate its challenge and its usefulness as a machine comprehension benchmark. The analysis focuses on the types of answers that appear in the dataset and the various forms of reasoning required to solve it.1
1Additional statistics are available at https://datasets.maluuba.com/NewsQA/stats.
Table 1: The variety of answer types appearing in NewsQA, with proportion statistics and examples.
Answer type        | Example                    | Proportion (%)
Date/Time          | March 12, 2008             | 2.9
Numeric            | 24.3 million               | 9.8
Person             | Ludwig van Beethoven       | 14.8
Location           | Torrance, California       | 7.8
Other Entity       | Pew Hispanic Center        | 5.8
Common Noun Phr.   | federal prosecutors        | 22.2
Adjective Phr.     | 5-hour                     | 1.9
Verb Phr.          | suffered minor damage      | 1.4
Clause Phr.        | trampling on human rights  | 18.3
Prepositional Phr. | in the attack              | 3.8
Other              | nearly half                | 11.2
4.1 ANSWER TYPES
Following Rajpurkar et al. (2016), we categorize answers based on their linguistic type (see Table 1). This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER tags for answer spans (see Rajpurkar et al. (2016) for more details). From the table we see that the majority of answers (22.2%) are common noun phrases. Thereafter, answers are fairly evenly spread among the clause phrase (18.3%), person (14.8%), numeric (9.8%), and other (11.2%) types. Clearly, answers in NewsQA are linguistically diverse.
The proportions in Table 1 only account for cases when an answer span exists. The complement of this set comprises questions with an agreed null answer (9.5% of the full corpus) and answers without agreement after validation (4.5% of the full corpus).
4.2 REASONING TYPES
The forms of reasoning required to solve NewsQA directly influence the abilities that models will learn from the dataset. We stratified reasoning types using a variation on the taxonomy presented by Chen et al. (2016) in their analysis of the CNN/Daily Mail dataset. Types are as follows, in ascending order of difficulty:
1. Word Matching: Important words in the question exactly match words in the immediate context of an answer span, such that a keyword search algorithm could perform well on this subset.
2. Paraphrasing: A single sentence in the article entails or paraphrases the question. Paraphrase recognition may require synonymy and world knowledge.
3. Inference: The answer must be inferred from incomplete information in the article or by recognizing conceptual overlap. This typically draws on world knowledge.
4. Synthesis: The answer can only be inferred by synthesizing information distributed across multiple sentences.
5. Ambiguous/Insufficient: The question has no answer or no unique answer in the article.
For both NewsQA and SQuAD, we manually labelled 1,000 examples (drawn randomly from the respective development sets) according to these types and compiled the results in Table 2. Some examples fall into more than one category, in which case we defaulted to the more challenging type. We can see from the table that word matching, the easiest type, makes up the largest subset in both datasets (32.7% for NewsQA and 39.8% for SQuAD). Paraphrasing constitutes a larger proportion in SQuAD than in NewsQA (34.3% vs 27.0%), possibly a result from the explicit encouragement of lexical variety in SQuAD question sourcing. However, NewsQA significantly outnumbers SQuAD on the distribution of the more difficult forms of reasoning: synthesis and inference make up a combined 33.9% of the data in contrast to 20.5% in SQuAD.
Table 2: Reasoning mechanisms needed to answer questions. For each we show an example question with the sentence that contains the answer span. Words relevant to the reasoning type are in bold. The corresponding proportion in the human-evaluated subset of both NewsQA and SQuAD (1,000 samples each) is also given.
Reasoning | Example | NewsQA (%) | SQuAD (%)
Word Matching | Q: When were the findings published? S: Both sets of research findings were published Thursday... | 32.7 | 39.8
Paraphrasing | Q: Who is the struggle between in Rwanda? S: The struggle pits ethnic Tutsis, supported by Rwanda, against ethnic Hutu, backed by Congo. | 27.0 | 34.3
Inference | Q: Who drew inspiration from presidents? S: Rudy Ruiz says the lives of US presidents can make them positive role models for students. | 13.2 | 8.6
Synthesis | Q: Where is Brittanee Drexel from? S: The mother of a 17-year-old Rochester, New York high school student ... says she did not give her daughter permission to go on the trip. Brittanee Marie Drexel's mom says... | 20.7 | 11.9
Ambiguous/Insufficient | Q: Whose mother is moving to the White House? S: ... Barack Obama's mother-in-law, Marian Robinson, will join the Obamas at the family's private quarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned] | 6.4 | 5.4
# 5 BASELINE MODELS
We test the performance of three comprehension systems on NewsQA: human data analysts and two neural models. The first neural model is the match-LSTM (mLSTM) system of Wang & Jiang (2016b). The second is a model of our own design that is similar but computationally cheaper. We describe these models below but omit the personal details of our analysts. Implementation details of the models are described in Appendix A.
# 5.1 MATCH-LSTM
We selected the mLSTM model because it is straightforward to implement and offers strong, though not state-of-the-art, performance on the similar SQuAD dataset. There are three stages involved in the mLSTM. First, LSTM networks encode the document and question (represented by GloVe word embeddings (Pennington et al., 2014)) as sequences of hidden states. Second, an mLSTM network (Wang & Jiang, 2016a) compares the document encodings with the question encodings. This network processes the document sequentially and at each token uses an attention mechanism to obtain a weighted vector representation of the question; the weighted combination is concatenated with the encoding of the current token and fed into a standard LSTM. Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of the answer span. We refer the reader to Wang & Jiang (2016a;b) for full details.
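As a rough illustration of the sequential matching step, the sketch below uses a simple bilinear attention in place of the mLSTM's additive attention (which, in the original model, also conditions on the previous mLSTM state); all names here are ours, not from the released implementation.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def match_step(h_t, H_q, W, state, lstm_cell):
    """One step over the document: attend over the question, concatenate
    the attention summary with the current token encoding, feed an LSTM.
    h_t: (d,) document-token encoding; H_q: (m, d) question encodings;
    W: (d, d) attention weights; lstm_cell: (input, state) -> new state."""
    alpha = softmax(H_q @ (W @ h_t))   # attention weights over question tokens
    q_summary = alpha @ H_q            # weighted question representation
    z_t = np.concatenate([h_t, q_summary])
    return lstm_cell(z_t, state)
```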
5.2 THE BILINEAR ANNOTATION RE-ENCODING BOUNDARY (BARB) MODEL
The match-LSTM is computationally intensive since it computes an attention over the entire question at each document token in the recurrence. To facilitate faster experimentation with NewsQA we developed a lighter-weight model (BARB) that achieves similar results on SQuAD2. Our model consists of four stages:
Encoding All words in the document and question are mapped to real-valued vectors using the GloVe embeddings W ∈ R^{|V|×d}. This yields d1, . . . , dn ∈ R^d and q1, . . . , qm ∈ R^d.
2With the configurations for the results reported in Section 6.2, one epoch of training on NewsQA takes about 3.9k seconds for BARB and 8.1k seconds for mLSTM.
A bidirectional GRU network (Bahdanau et al., 2015) encodes di into contextual states hi ∈ R^D1 for the document. The same encoder is applied to qj to derive contextual states kj ∈ R^D1 for the question.3
Bilinear Annotation Next we compare the document and question encodings using a set of C bilinear transformations,
gij = hi^T T[1:C] kj,   Tc ∈ R^{D1×D1},   gij ∈ R^C,

which we use to produce an (n × m × C)-dimensional tensor of annotation scores, G = [gij]. We take the maximum over the question-token (second) dimension and call the columns of the resulting matrix gi ∈ R^C. We use this matrix as an annotation over the document word dimension. In contrast with the more typical multiplicative application of attention vectors, this annotation matrix is concatenated to the encoder RNN input in the re-encoding stage.
Re-encoding For each document word, the input of the re-encoding RNN (another biGRU) consists of three components: the document encodings hi, the annotation vectors gi, and a binary feature qi indicating whether the document word appears in the question. The resulting vectors fi = [hi; gi; qi] are fed into the re-encoding RNN to produce D2-dimensional encodings ei for the boundary-pointing stage.
Boundary pointing Finally, we search for the boundaries of the answer span using a convolutional network (in a process similar to edge detection). Encodings ei are arranged in a matrix, E ∈ R^{D2×n}. E is convolved with a bank of nf filters, Fk^ℓ ∈ R^{D2×w}, where w is the filter width, k indexes the different filters, and ℓ indexes the layer of the convolutional network. Each layer has the same number of filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) following each convolution, with the result an (nf × n)-dimensional matrix Bℓ.
We use two convolutional layers in the boundary-pointing stage. Given B1 and B2, the answer span's start- and end-location probabilities are computed using p(s) ∝ exp(vs^T B1 + bs) and p(e) ∝ exp(ve^T B2 + be), respectively. We also concatenate p(s) to the input of the second convolutional layer (along the n-dimension) so as to condition the end-boundary pointing on the start-boundary. Vectors vs, ve ∈ R^{nf} and scalars bs, be ∈ R are trainable parameters. We also provide an intermediate level of "guidance" to the annotation mechanism by first reducing the feature dimension C in G with mean-pooling, then maximizing the softmax probabilities in the resulting (n-dimensional) vector corresponding to the answer word positions in each document. This auxiliary task is observed empirically to improve performance.
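A minimal numpy sketch of the annotation and pointing computations described above (shapes and names are ours; the ReLU convolutions and the conditioning of p(e) on p(s) are omitted for brevity):

```python
import numpy as np

def bilinear_annotation(H, K, T):
    """BARB annotation, as we read the description above.
    H: document encodings (n, D1); K: question encodings (m, D1);
    T: bank of C bilinear maps (C, D1, D1). Returns g of shape (n, C):
    for each document token, the max over question tokens of h_i^T T_c k_j."""
    G = np.einsum('id,cde,je->ijc', H, T, K)  # (n, m, C) score tensor
    return G.max(axis=1)                      # max over the question dimension

def boundary_probs(B1, B2, v_s, v_e, b_s, b_e):
    """Start/end distributions from conv-layer outputs B1, B2 of shape
    (n_f, n); v_s, v_e are (n_f,) vectors and b_s, b_e are scalars."""
    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()
    return softmax(v_s @ B1 + b_s), softmax(v_e @ B2 + b_e)
```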
# 6 EXPERIMENTS4
6.1 HUMAN EVALUATION
We tested four English speakers on a total of 1,000 questions from the NewsQA development set. We used four performance measures: F1 and exact match (EM) scores (the same measures used by SQuAD), as well as BLEU and CIDEr5. BLEU is a precision-based metric popular in machine translation that uses a weighted average of variable length phrase matches (n-grams) against the reference sentence (Papineni et al., 2002). CIDEr was designed to correlate better with human judgements of sentence similarity, and uses tf-idf scores over n-grams (Vedantam et al., 2015).
As given in Table 4, humans averaged 0.694 F1 on NewsQA. The human EM scores are relatively low at 0.465. These lower scores are a reflection of the fact that, particularly in a dataset as complex as NewsQA, there are multiple ways to select semantically equivalent answers, e.g., "1996" versus "in 1996". Although these answers are equally correct they would be measured at 0.5 F1 and 0.0 EM.
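For reference, the token-overlap F1 and EM measures behave as in the short sketch below; this is our own rendering of the standard SQuAD-style computation, minus the official script's answer normalization (apart from lowercasing).

```python
from collections import Counter

def exact_match(prediction, reference):
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction, reference):
    """Harmonic mean of token precision and recall between two spans."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = Counter(pred) & Counter(ref)     # multiset token overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```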
3A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions. Each of these has hidden size $D_1/2$.
4All experiments in this section use the subset of NewsQA dataset with answer agreements (92,549 samples for training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerable questions for future work.
5We use https://github.com/tylin/coco-caption to calculate these two scores.
7
Table 3: Model performance on the SQuAD and NewsQA datasets. Random results are taken from Rajpurkar et al. (2016), and mLSTM results from Wang & Jiang (2016b).
SQuAD

| Model | Exact Match (Dev) | Exact Match (Test) | F1 (Dev) | F1 (Test) |
|---|---|---|---|---|
| Random | 0.11 | 0.13 | 0.41 | 0.43 |
| mLSTM | 0.591 | 0.595 | 0.700 | 0.703 |
| BARB | 0.591 | - | 0.709 | - |

NewsQA

| Model | Exact Match (Dev) | Exact Match (Test) | F1 (Dev) | F1 (Test) |
|---|---|---|---|---|
| Random | 0.00 | 0.00 | 0.30 | 0.30 |
| mLSTM | 0.344 | 0.349 | 0.496 | 0.500 |
| BARB | 0.361 | 0.341 | 0.496 | 0.482 |
Table 4: Human performance on SQuAD and NewsQA datasets. The ï¬rst row is taken from Rajpurkar et al. (2016), and the last two rows correspond to machine performance (BARB) on the human- evaluated subsets.
| Dataset | Exact Match | F1 | BLEU | CIDEr |
|---|---|---|---|---|
| SQuAD | 0.803 | 0.905 | - | - |
| SQuAD (ours) | 0.650 | 0.807 | 0.625 | 3.998 |
| NewsQA | 0.465 | 0.694 | 0.560 | 3.596 |
| SQuAD (BARB) | 0.553 | 0.685 | 0.366 | 2.845 |
| NewsQA (BARB) | 0.340 | 0.501 | 0.081 | 2.431 |
This suggests that simpler automatic metrics are not equal to the task of complex MC evaluation, a problem that has been noted in other domains (Liu et al., 2016). Therefore we also measure according to BLEU and CIDEr: humans score 0.560 and 3.596 on these metrics, respectively.
The original SQuAD evaluation of human performance compares distinct answers given by crowdworkers according to EM and F1; for a closer comparison with NewsQA, we replicated our human test on the same number of validation examples (1,000) with the same annotators. We measured human answers against the second group of crowdsourced responses in SQuAD's development set, yielding 0.807 F1, 0.625 BLEU, and 3.998 CIDEr. Note that this F1 score is close to the top single-model performance of 0.778 achieved in Wang et al. (2016).
We ï¬nally compared human performance on the answers that had crowdworker agreement with and without validation, ï¬nding a difference of only 1.4 percentage points F1. This suggests our validation stage yields good-quality answers.
6.2 MODEL PERFORMANCE
Performance of the baseline models and humans is measured by EM and F1 with the official evaluation script from SQuAD and listed in Tables 3 and 4. We supplement these with BLEU and CIDEr measures on the 1,000 human-annotated dev questions. Unless otherwise stated, hyperparameters are determined by hyperopt (Appendix A). The gap between human and machine performance on NewsQA is a striking 0.198 points F1, much larger than the gap on SQuAD (0.098) under the same human evaluation scheme. These gaps suggest a large margin for improvement with machine comprehension methods.
Figure 1 stratiï¬es model (BARB) performance according to answer type (left) and reasoning type (right) as deï¬ned in Sections 4.1 and 4.2, respectively. The answer-type stratiï¬cation suggests that the model is better at pointing to named entities compared to other types of answers. The reasoning- type stratiï¬cation, on the other hand, shows that questions requiring inference and synthesis are, not surprisingly, more difï¬cult for the model. Consistent with observations in Table 4, stratiï¬ed performance on NewsQA is signiï¬cantly lower than on SQuAD. The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in NewsQA make synthesizing information from separate sentences more difï¬cult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies. It is also interesting to observe that on SQuAD, BARB outperforms human annotators in answering ambiguous questions or those with incomplete information.
8
Figure 1: Left: BARB performance (F1 and EM) stratiï¬ed by answer type on the full development set of NewsQA. Right: BARB performance (F1) stratiï¬ed by reasoning type on the human-assessed subset on both NewsQA and SQuAD. Error bars indicate performance differences between BARB and human annotators.
Table 5: Sentence-level accuracy on artificially-lengthened SQuAD documents.
| Dataset | # documents | Avg # sentences | isf accuracy (%) |
|---|---|---|---|
| SQuAD | 1 | 4.9 | 79.6 |
| SQuAD | 3 | 14.3 | 74.9 |
| SQuAD | 5 | 23.2 | 73.0 |
| SQuAD | 7 | 31.8 | 72.3 |
| SQuAD | 9 | 40.3 | 71.0 |
| NewsQA | 1 | 30.7 | 35.4 |
# 6.3 SENTENCE-LEVEL SCORING
We propose a simple sentence-level subtask as an additional quantitative demonstration of the relative difï¬culty of NewsQA. Given a document and a question, the goal is to ï¬nd the sentence containing the answer span. We hypothesize that simple techniques like word-matching are inadequate to this task owing to the more involved reasoning required by NewsQA.
We employ a technique that resembles inverse document frequency (idf), which we call inverse sentence frequency (isf). Given a sentence $S_i$ from an article and its corresponding question $Q$, the isf score is given by the sum of the idf scores of the words common to $S_i$ and $Q$ (each sentence is treated as a document for the idf computation). The sentence with the highest isf is taken as the answer sentence $S^*$, that is,
$$S^{*} = \arg\max_{i} \sum_{w \in S_i \cap Q} \textit{isf}(w).$$
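A self-contained sketch of this scorer, with each sentence treated as a document for the idf computation (the toy article is invented for illustration):

```python
import math
import re

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def isf_best_sentence(sentences, question):
    sent_tokens = [set(tokenize(s)) for s in sentences]
    N = len(sentences)
    df = {}
    for tokens in sent_tokens:
        for w in tokens:
            df[w] = df.get(w, 0) + 1
    idf = {w: math.log(N / c) for w, c in df.items()}
    q = set(tokenize(question))
    scores = [sum(idf[w] for w in tokens & q) for tokens in sent_tokens]
    return max(range(N), key=scores.__getitem__)   # index of S*

doc = ["The storm hit Florida on Monday.",
       "Officials reported no casualties.",
       "Schools reopened on Wednesday."]
print(isf_best_sentence(doc, "When did schools reopen?"))  # -> 2
```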
The isf method achieves an impressive 79.4% sentence-level accuracy on SQuAD's development set but only 35.4% accuracy on NewsQA's development set, highlighting the comparative difficulty of the latter. To eliminate the difference in article length as a possible cause of the performance gap, we also artificially increased the article lengths in SQuAD by concatenating adjacent SQuAD articles from the same Wikipedia article. Accuracy decreases as expected with the increased SQuAD article length, yet remains significantly higher than on NewsQA with comparable or even greater article length (see Table 5).
# 7 CONCLUSION
We have introduced a challenging new comprehension dataset: NewsQA. We collected the 100,000+ examples of NewsQA using teams of crowdworkers, who variously read CNN articles or highlights, posed questions about them, and determined answers. Our methodology yields diverse answer types and a significant proportion of questions that require some reasoning ability to solve. This makes the corpus challenging, as confirmed by the large performance gap between humans and deep neural models (0.198 F1, 0.479 BLEU, 1.165 CIDEr). In its size and complexity, NewsQA significantly extends the existing body of comprehension datasets. We hope that our corpus will spur further advances in machine comprehension and guide the development of literate artificial intelligence.
9
# ACKNOWLEDGMENTS
The authors would like to thank Çağlar Gülçehre, Sandeep Subramanian and Saizheng Zhang for helpful discussions.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. Embracing data abundance: Booktest dataset for reading comprehension. arXiv preprint arXiv:1610.00956, 2016.
J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In In Proc. of SciPy, 2010.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn / daily mail reading comprehension task. In Association for Computational Linguistics (ACL), 2016.
François Chollet. Keras. https://github.com/fchollet/keras, 2015.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249-256, 2010.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1684-1692, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. ICLR, 2016.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311-318. Association for Computational Linguistics, 2002.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318, 2013.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-43, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 1, pp. 2, 2013.
Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. Learning answer-entailing structures for machine comprehension. In Proceedings of ACL, 2015.
Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
10
Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.
Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. A parallel- hierarchical model for machine comprehension on sparse data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016a.
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. In EMNLP, 2016b.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4566-4575, 2015.
Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension with syntax, frames, and semantics. In Proceedings of ACL, Volume 2: Short Papers, pp. 700, 2015.
Shuohang Wang and Jing Jiang. Learning natural language inference with lstm. NAACL, 2016a.
Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016b.
Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211, 2016.
11
APPENDICES
# A IMPLEMENTATION DETAILS
Both mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using the Theano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors (Pennington et al., 2014) pre-trained on the 840-billion-token Common Crawl corpus. The word embeddings are not updated during training. Embeddings for out-of-vocabulary words are initialized with zeros.
For both models, the training objective is to maximize the log likelihood of the boundary pointers. Optimization is performed using stochastic gradient descent (batch size 32) with the Adam optimizer (Kingma & Ba, 2015). The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB. The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of each epoch. Gradient clipping (Pascanu et al., 2013) is applied with a threshold of 5.
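The decay-on-plateau schedule described above can be sketched framework-agnostically as follows; the loss values are made up for illustration.

```python
def update_lr(lr, val_losses, factor=0.7):
    """Decay the learning rate by `factor` whenever validation loss
    fails to decrease at the end of an epoch."""
    if len(val_losses) >= 2 and val_losses[-1] >= val_losses[-2]:
        return lr * factor
    return lr

lr, history = 0.0005, []   # BARB's initial learning rate
for epoch, val_loss in enumerate([1.9, 1.7, 1.8, 1.6]):
    history.append(val_loss)
    lr = update_lr(lr, history)
    print(epoch, round(lr, 6))   # lr drops after the epoch where loss rose
```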
Parameter tuning is performed on both models using hyperopt6. For each model, conï¬gurations for the best observed performance are as follows:
# mLSTM
Both the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hidden size of 192. These settings are consistent with those used by Wang & Jiang (2016b).
Model parameters are initialized with either the normal distribution $\mathcal{N}(0, 0.05)$ or the orthogonal initialization ($O$, Saxe et al. 2013) in Keras. All weight matrices in the LSTMs are initialized with $O$. In the Match-LSTM layer, $W^q$, $W^p$, and $W^r$ are initialized with $O$, $b^p$ and $w$ are initialized with $\mathcal{N}$, and $b$ is initialized as 1. In the answer-pointing layer, $V$ and $W^a$ are initialized with $O$, $b^a$ and $v$ are initialized with $\mathcal{N}$, and $c$ is initialized as 1.
# BARB
For BARB, the following hyperparameters are used on both SQuAD and NewsQA: $d = 300$, $D_1 = 128$, $C = 64$, $D_2 = 256$, $w = 3$, and $n_f = 128$. Weight matrices in the GRU, the bilinear models, as well as the boundary decoder ($v_s$ and $v_e$) are initialized with $O$. The filter weights in the boundary decoder are initialized with glorot_uniform (Glorot & Bengio 2010, the default in Keras). The bilinear biases are initialized with $\mathcal{N}$, and the boundary decoder biases are initialized with 0.
# B DATA COLLECTION USER INTERFACE
Here we present the user interfaces used in question sourcing, answer sourcing, and question/answer validation.
6https://github.com/hyperopt/hyperopt
12
Figure 2: Examples of user interfaces for question sourcing, answer sourcing, and validation.
13
Write Questions From A Summary

Instructions

Overview: Write questions about the highlights of a story.

Steps: 1. Read the highlights. 2. Write questions about the highlights.

Example Highlights:
- Sarah Palin from Alaska meets with McCain
- Fareed Zakaria says John McCain did not put country first with his choice
- Zakaria: This is "hell of a time" for Palin to start thinking about national, global issues

Questions: The questions can refer directly to the highlights, for example:
- Where is Palin from?
- What did Fareed say about John McCain's choice?
- Who is thinking about global issues?

Questions must always be related to the highlights, but their answers don't have to be in the highlights. You can assume that the highlights summarize a document which can answer other questions, for example:
- What was the meeting about?
- What was McCain's choice?
- What issues is Palin thinking about?

Other Rules:
- Do not re-use the same or very similar questions.
- Questions should be written to have short answers.
- Do not write "how" or "why" type questions since their answers are not short. "How far/long/many/much" are okay.
Figure 3: Question sourcing instructions for the crowdworkers.
14 | {
"id": "1606.02245"
} |
1611.09268 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | We introduce a large scale MAchine Reading COmprehension dataset, which we
name MS MARCO. The dataset comprises 1,010,916 anonymized
questions---sampled from Bing's search query logs---each with a human generated
answer and 182,669 completely human rewritten generated answers. In addition,
the dataset contains 8,841,823 passages---extracted from 3,563,535 web
documents retrieved by Bing---that provide the information necessary for
curating the natural language answers. A question in the MS MARCO dataset may
have multiple answers or no answers at all. Using this dataset, we propose
three different tasks with varying levels of difficulty: (i) predict if a
question is answerable given a set of context passages, and extract and
synthesize the answer as a human would (ii) generate a well-formed answer (if
possible) based on the context passages that can be understood with the
question and passage context, and finally (iii) rank a set of retrieved
passages given a question. The size of the dataset and the fact that the
questions are derived from real user search queries distinguishes MS MARCO from
other well-known publicly available datasets for machine reading comprehension
and question-answering. We believe that the scale and the real-world nature of
this dataset makes it attractive for benchmarking machine reading comprehension
and question-answering models. | http://arxiv.org/pdf/1611.09268 | Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang | cs.CL, cs.IR | null | null | cs.CL | 20161128 | 20181031 |
# MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang Microsoft AI & Research
# Abstract
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises 1,010,916 anonymized questions, sampled from Bing's search query logs, each with a human generated answer, and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages, extracted from 3,563,535 web documents retrieved by Bing, that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
# Introduction
Building intelligent agents with machine reading comprehension (MRC) or open-domain question answering (QA) capabilities using real world data is an important goal of artificial intelligence. Progress in developing these capabilities can be of significant consumer value if employed in automated assistants, e.g., Cortana [Cortana], Siri [Siri], Alexa [Amazon Alexa], or Google Assistant [Google Assistant], on mobile devices and smart speakers, such as Amazon Echo [Amazon Echo]. Many of these devices rely heavily on recent advances in speech recognition technology powered by neural models with deep architectures [Hinton et al., 2012, Dahl et al., 2012]. The rising popularity of spoken interfaces makes it more attractive for users to use natural language dialog for question-answering and information retrieval from the web as opposed to viewing traditional search result pages on a web browser [Gao et al., 2018]. Chatbots and other messenger based intelligent agents are also becoming popular in automating business processes, e.g., answering customer service requests. All of these scenarios can benefit from fundamental improvements in MRC models. However, MRC in the wild is extremely challenging. Successful MRC systems should be able to learn good representations from raw text, infer and reason over learned representations, and finally generate a summarized response that is correct in both form and content.
The public availability of large datasets has been instrumental in many AI research breakthroughs [Wissner-Gross, 2016]. For example, ImageNet's [Deng et al., 2009] release of 1.5 million labeled
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
examples with 1000 object categories led to the development of object classification models that perform better than humans on the ImageNet task [He et al., 2015]. Similarly, the large speech database collected over 20 years by DARPA enabled new breakthroughs in speech recognition performance from deep learning models [Deng and Huang, 2004]. Several MRC and QA datasets have also recently emerged. However, many of these existing datasets are not sufficiently large to train deep neural models with a large number of parameters. Large scale existing MRC datasets, when available, are often synthetic. Furthermore, a common characteristic, shared by many of these datasets, is that the questions are usually generated by crowd workers based on provided text spans or documents. In MS MARCO, in contrast, the questions correspond to actual search queries that users submitted to Bing, and therefore may be more representative of a "natural" distribution of information need that users may want to satisfy using, say, an intelligent assistant.
Real-world text is messy: it may include typos or abbreviations, and transcription errors in the case of spoken interfaces. The text from different documents may also often contain conflicting information. Most existing datasets, in contrast, often contain high-quality stories or text spans from sources such as Wikipedia. Real-world MRC systems should be benchmarked on realistic datasets where they need to be robust to noisy and problematic inputs.
Finally, another potential limitation of existing MRC tasks is that they often require the model to operate on a single entity or a text span. Under many real-world application settings, the information necessary to answer a question may be spread across different parts of the same document, or even across multiple documents. It is, therefore, important to test an MRC model on its ability to extract information and support for the ï¬nal answer from multiple passages and documents.
In this paper, we introduce Microsoft MAchine Reading Comprehension (MS MARCO), a large scale real-world reading comprehension dataset, with the goal of addressing many of the above mentioned shortcomings of existing MRC and QA datasets. The dataset comprises anonymized search queries issued through Bing or Cortana. We annotate each question with segment information as we describe in Section 3. Corresponding to each question, we provide a set of extracted passages from documents retrieved by Bing in response to the question. The passages and the documents may or may not actually contain the necessary information to answer the question. For each question, we ask crowd-sourced editors to generate answers based on the information contained in the retrieved passages. In addition to generating the answer, the editors are also instructed to mark the passages containing the supporting information, although we do not enforce these annotations to be exhaustive. The editors are allowed to mark a question as unanswerable based on the passages provided. We include these unanswerable questions in our dataset because we believe that the ability to recognize insufficient (or conflicting) information that makes a question unanswerable is important for an MRC model to develop. The editors are strongly encouraged to form answers in complete sentences. In total, the MS MARCO dataset contains 1,010,916 questions, 8,841,823 companion passages extracted from 3,563,535 web documents, and 182,669 editorially generated answers. Using this dataset, we propose three different tasks with varying levels of difficulty:
(i) Predict if a question is answerable given a set of context passages, and extract relevant information and synthesize the answer.
(ii) Generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context.
(iii) Rank a set of retrieved passages given a question.
We describe the dataset and the proposed tasks in more details in the rest of this paper and present some preliminary benchmarking results on these tasks.
# 2 Related work
Machine reading comprehension and open domain question-answering are challenging tasks [Weston et al., 2015]. To encourage more rapid progress, the community has made several different datasets and tasks publicly available for benchmarking. We summarize some of them in this section.
The Stanford Question Answering Dataset (SQuAD) Rajpurkar et al. [2016] consists of 107,785 question-answer pairs from 536 articles, where each answer is a text span. The key distinctions between SQuAD and MS MARCO are:
2
Table 1: Comparison of MS MARCO and some of the other MRC datasets.
| Dataset | Answer type | # Questions | # Documents |
|---|---|---|---|
| NewsQA | Span of words | 100k | 10k |
| DuReader | Human generated | 200k | 1M |
| NarrativeQA | Human generated | 46,765 | 1,572 stories |
| SearchQA | Span of words | 140k | 6.9M passages |
| RACE | Multiple choice | 97k | 28k |
| ARC | Multiple choice | 7,787 | 14M sentences |
| SQuAD | Span of words | 100K | 536 |
| MS MARCO | Human generated | 1M | 8.8M passages, 3.2m docs |
1. The MS MARCO dataset is more than ten times larger than SQuAD, which is an important consideration if we want to benchmark large deep learning models [Frank, 2017].
2. The questions in SQuAD are editorially generated based on selected answer spans, while in MS MARCO they are sampled from Bingâs query logs.
3. The answers in SQuAD consists of spans of texts from the provided passages while the answers in MS MARCO are editorially generated.
4. Originally SQuAD contained only answerable questions, although this changed in the more recent edition of the task [Rajpurkar et al., 2018].
NewsQA [Trischler et al., 2017] is an MRC dataset with over 100,000 question and span-answer pairs based on roughly 10,000 CNN news articles. The goal of the NewsQA task is to test MRC models on reasoning skills beyond word matching and paraphrasing. Crowd-sourced editors created the questions from the title of the articles and the summary points (provided by CNN) without access to the article itself. A 4-stage collection methodology was employed to generate a more challenging MRC task. More than 44% of the NewsQA questions require inference and synthesis, compared to SQuAD's 20%.
DuReader [He et al., 2017] is a Chinese MRC dataset built with real application data from Baidu search and Baidu Zhidao, a community question answering website. It contains 200,000 questions and 420,000 answers from 1,000,000 documents. In addition, DuReader provides additional annotations of the answers, labelling them as either fact based or opinionative. Within each category, they are further divided into entity, yes/no, and descriptive answers.
NarrativeQA [Kociský et al., 2017] contains questions created by editors based on summaries of movie scripts and books. The dataset contains about 45,000 question-answer pairs over 1,567 stories, evenly split between books and movie scripts. Compared to the news corpus used in NewsQA, the collection of movie scripts and books is more complex and diverse, allowing the editors to create questions that may require more complex reasoning. The movie scripts and books are also longer documents than the news or Wikipedia articles used in NewsQA and SQuAD, respectively.
SearchQA [Dunn et al., 2017] takes questions from the American TV quiz show Jeopardy1 and submits them as queries to Google to extract snippets from the top 40 retrieved documents that may contain the answers to the questions. Document snippets not containing answers are filtered out, leaving more than 140K question-answer pairs and 6.9M snippets. The answers are short exact spans of text averaging between 1-2 tokens. MS MARCO, in contrast, focuses more on longer natural language answer generation, and the questions correspond to Bing search queries instead of trivia questions.
RACE [Lai et al., 2017] contains roughly 100,000 multiple choice questions and 27,000 passages from standardized tests for Chinese students learning English as a foreign language. The dataset is split up into: RACE-M, which has approximately 30,000 questions targeted at middle school students aged 12-15, and RACE-H, which has approximately 70,000 questions targeted at high school students aged 15 to 18. Lai et al. [2017] claim that current state of the art neural models at the time of their publishing were performing at 44% accuracy while the ceiling human performance was 95%.
AI2 Reasoning Challenge (ARC) [Clark et al., 2018] by the Allen Institute for Artificial Intelligence consists of 7,787 grade-school multiple choice science questions, typically with 4 possible answers. The answers generally require external knowledge or complex reasoning. In addition,
# 1https://www.jeopardy.com/
3
@ will i quality for osap ifm new in canada Candidate passages hc passage 2 acto alata, Selected passages is in order to apply online for funding consideration from The Ontario Student Assistance (PROGRAM), ofap you must first register as a new use to this. website ce: hitpsJ/osap.gov.on.ca/OSAPSecuttyWeb/publiciagreementahtm) Visit the OSAP website for application deadlines To get OSAP, you have to be eligible. You can apply using an online form. oF you can print off the application forms. you submit a paper application. you âmust pay an application fee. assstance-for-post-secondary-education/how-do-i-apply-for-the-onfario-shudent-assistance- program. sand fie. You ven
Figure 1: Simpliï¬ed passage selection and answer summarization UI for human editors.
ARC provides a corpus of 14M science-related sentences with knowledge relevant to the challenge. However, the training of the models does not have to include, nor be limited to, this corpus.
ReCoRD [Zhang et al., 2018] contains 12,000 Cloze-style question-passage pairs extracted from CNN/Daily Mail news articles. For each pair in this dataset, the question and the passage are selected from the same news article such that they have minimal text overlapâmaking them unlikely to be paraphrases of each otherâbut refer to at least one common named entity. The focus of this dataset is on evaluating MRC models on their common-sense reasoning capabilities.
# 3 The MS Marco dataset
To generate the 1,010,916 questions with 1,026,758 unique answers we begin by sampling queries from Bing's search logs. We filter out any non-question queries from this set. We retrieve relevant documents for each question using Bing from its large-scale web index. Then we automatically extract relevant passages from these documents. Finally, human editors annotate passages that contain useful and necessary information for answering the questions, and compose well-formed natural language answers summarizing that information. Figure 1 shows the user interface for a web-based tool that the editors use for completing these annotation and answer composition tasks. During the editorial annotation and answer generation process, we continuously audit the data being generated to ensure the accuracy and quality of answers, and verify that the guidelines are appropriately followed.
As previously mentioned, the questions in MS MARCO correspond to user submitted queries from Bing's query logs. The question formulations, therefore, are often complex, ambiguous, and may even contain typographical and other errors. An example of such a question issued to Bing is: "in what type of circulation does the oxygenated blood flow between the heart and the cells of the body?". We believe that these questions, while sometimes not well-formatted, are more representative of human information seeking behaviour. Another example of a question from our dataset is: "will I qualify for osap if i'm new in Canada". As shown in Figure 1, one of the relevant passages includes: "You must be a 1. Canadian citizen, 2. Permanent Resident or 3. Protected person". When auditing our editorial process, we observe that even the human editors find the task of answering these questions to be sometimes difficult, especially when the question is in a domain the editor is unfamiliar with. We, therefore, believe that MS MARCO presents a challenging dataset for benchmarking MRC models.
The MS MARCO dataset that we are publishing consists of six major components:
1. Questions: These are a set of anonymized question queries from Bing's search logs, where the user is looking for a specific answer. Queries with navigational and other intents are
4
Table 2: Distribution of questions based on answer-type classiï¬er
| Question contains | Percentage of questions |
|---|---|
| YesNo | 7.46% |
| What | 34.96% |
| How | 16.8% |
| Where | 3.46% |
| When | 2.71% |
| Why | 1.67% |
| Who | 3.33% |
| Which | 1.79% |
| Other | 27.83% |

| Question classification | Percentage of questions |
|---|---|
| Description | 53.12% |
| Numeric | 26.12% |
| Entity | 8.81% |
| Location | 6.17% |
| Person | 5.78% |
excluded from our dataset. This ï¬ltering of question queries is performed automatically by a machine learning based classiï¬er trained previously on human annotated data. Selected questions are further annotated by editors based on whether they are answerable using the passages provided.
2. Passages: For each question, on average we include a set of 10 passages which may contain the answer to the question. These passages are extracted from relevant web documents. They are selected by a state-of-the-art passage retrieval system at Bing. The editors are instructed to annotate the passages they use to compose the ï¬nal answer as is_selected. For questions, where no answer was present in any of the passages, they should all be annotated by setting is_selected to 0.
3. Answers: For each question, the dataset contains zero, or more answers composed manually by the human editors. The editors are instructed to read and understand the questions, inspect the retrieved passages, and then synthesize a natural language answer with the correct information extracted strictly from the passages provided.
4. Well-formed Answers: For some question-answer pairs, the data also contains one or more answers that are generated by a post-hoc review-and-rewrite process. This process involves a separate editor reviewing the provided answer and rewriting it if: (i) it does not have proper grammar, (ii) there is a high overlap in the answer and one of the provided passages (indicating that the original editor may have copied the passage directly), or (iii) the answer can not be understood without the question and the passage context. e.g., given the question âtablespoon in cupâ and the answer â16â, the well-formed answer should be âThere are 16 tablespoons in a cup.â.
5. Document: For each of the documents from which the passages were originally extracted from, we include: (i) the URL, (ii) the body text, and (iii) the title. We extracted these documents from Bing's index as a separate post-processing step. Roughly 300,000 documents could not be retrieved because they were no longer in the index and for the remaining it is possible, even likely, that the content may have changed since the passages were originally extracted.
6. Question type: Each question is further automatically annotated using a machine learned classifier with one of the following segment labels: (i) NUMERIC, (ii) ENTITY, (iii) LOCATION, (iv) PERSON, or (v) DESCRIPTION (phrase). Table 2 lists the relative size of the different question segments and compares it with the proportion of questions that explicitly contain words like "what" and "where". Note that because the questions in our dataset are based on web search queries, we may observe a question like "what is the age of barack obama" expressed simply as "barack obama age" in our dataset.
5
Table 3: The MS MARCO dataset format.
| Field | Description |
|---|---|
| Query | A question query issued to Bing. |
| Passages | Top 10 passages from Web documents as retrieved by Bing. The passages are presented in ranked order to human editors. The passage that the editor uses to compose the answer is annotated as is_selected: 1. |
| Document URLs | URLs of the top-ranked documents for the question from Bing. The passages are extracted from these documents. |
| Answer(s) | Answers composed by human editors for the question, the automatically extracted passages, and their corresponding documents. |
| Well Formed Answer(s) | Well-formed answer rewritten by human editors, and the original answer. |
| Segment | QA classification. E.g., "tallest mountain in south america" belongs to the ENTITY segment because the answer is an entity (Aconcagua). |
Table 3 describes the ï¬nal dataset format for MS MARCO. Inspired by [Gebru et al., 2018] we also release our datasetâs datasheet on our website. Finally, we summarize the key distinguishing features of the MS MARCO dataset as follows:
1. The questions are anonymized user queries issued to Bing. 2. All questions are annotated with segment information. 3. The context passages, from which the answers are derived, are extracted from real web
documents.
4. The answers are composed by human editors. 5. A subset of the questions have multiple answers. 6. A subset of the questions have no answers.
# 3.1 The passage ranking dataset
To facilitate the benchmarking of ML based retrieval models that benefit from supervised training on large datasets, we are releasing a passage collection, constructed by taking the union of all the passages in the MS MARCO dataset, and a set of relevant question and passage identifier pairs. To identify the relevant passages, we use the is_selected annotation provided by the editors. As the editors were not required to annotate every passage that was retrieved for the question, this annotation should be considered incomplete: there are likely passages in the collection that contain the answer to a question but have not been annotated as is_selected: 1. We use this dataset to propose a re-ranking challenge as described in Section 4. Additionally, we are organizing a "Deep Learning" track at the 2019 edition of TREC2 where we use these passage and question collections to set up an ad-hoc retrieval task.
# 4 The challenges
Using the MS MARCO dataset, we propose three machine learning tasks of diverse difï¬culty levels:
The novice task requires the system to first predict whether a question can be answered based only on the information contained in the provided passages. If the question cannot be answered, then the system should return "No Answer Present" as its response. If the question can be answered, then the system should generate the correct answer.
The intermediate task is similar to the novice task, except that the generated answer should be well-formed: if the answer is read aloud, it should make sense even without the context of the question and the retrieved passages.
The passage re-ranking task is an information retrieval (IR) challenge. Given a question and a set of 1000 retrieved passages using BM25 [Robertson et al., 2009], the system must produce a
2https://trec.nist.gov/
6
ranking of these passages based on how likely they are to contain information relevant to answering the question. This task is intended to provide a large scale dataset for benchmarking emerging neural IR methods [Mitra and Craswell, 2018].
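To make the task setup concrete, here is a minimal, self-contained BM25 scorer of the kind used to produce the candidate set; it is a sketch under standard BM25 parameter assumptions, not the production retrieval system, and the toy passages are invented.

```python
import math
import re
from collections import Counter

def tok(text):
    return re.findall(r"\w+", text.lower())

def bm25_rank(question, passages, k1=1.5, b=0.75):
    """Return passage indices ranked by BM25 score against the question."""
    docs = [tok(p) for p in passages]
    N = len(docs)
    avgdl = sum(map(len, docs)) / N
    df = Counter(w for d in docs for w in set(d))   # document frequencies

    def score(d):
        tf = Counter(d)
        s = 0.0
        for w in set(tok(question)) & set(d):
            idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (
                tf[w] + k1 * (1 - b + b * len(d) / avgdl))
        return s

    return sorted(range(N), key=lambda i: score(docs[i]), reverse=True)

ranked = bm25_rank("who won the 2011 nobel peace prize",
                   ["Three women jointly won the 2011 Nobel Peace Prize.",
                    "The base lockdown began at 10:30 a.m."])
print(ranked)  # -> [0, 1]
```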
# 5 The benchmarking results
We continue to develop and refine the MS MARCO dataset iteratively. The v1.0 dataset was presented at NIPS 2016 and received with enthusiasm. In January 2017, we publicly released the v1.1 version of the dataset. In Section 5.1, we present our initial benchmarking results based on this dataset. Subsequently, we released the v2.0 and v2.1 versions of the MS MARCO dataset in March 2018 and April 2018, respectively. Section 5.2 covers the experimental results on the updated dataset. Finally, in October 2018, we released additional data files for the passage ranking task.
# 5.1 Experimental results on v1.1 dataset
We group the questions in MS MARCO by the segment annotation, as described in Section 3. The complexity of the answers varies significantly between categories. For example, the answers to Yes/No questions are binary. The answers to entity questions can be a single entity name or phrase, e.g., the answer "Rome" for the question "what is the capital of Italy". However, for descriptive questions, a longer textual answer is often necessary, e.g., "What is the agenda for Hollande's state visit to Washington?". The evaluation strategy that is appropriate for Yes/No questions may not be appropriate for benchmarking on questions that require longer answer generation. Therefore, in our experiments we employ different evaluation metrics for different categories, building on metrics proposed initially by [Mitra et al., 2016]. We use accuracy and precision-recall measures for numeric answers and apply metrics like ROUGE-L [Lin, 2004] and the phrasing-aware evaluation framework [Mitra et al., 2016] for long textual answers. The phrasing-aware evaluation framework aims to deal with the diversity of natural language in evaluating long textual answers. The evaluation requires several reference answers per question that are each curated by a different human editor, thus providing a natural way to estimate how diversely a group of individuals may phrase the answer to the same question. A family of pairwise similarity-based metrics can be used to incorporate consensus between different reference answers for evaluation. These metrics are simple modifications to metrics like BLEU [Papineni et al., 2002] and METEOR [Banerjee and Lavie, 2005] and are shown to achieve better correlation with human judgments. Accordingly, as part of our experiments, a subset of MS MARCO where each question has multiple answers is used to evaluate model performance with both BLEU and pa-BLEU as metrics.
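For reference, ROUGE-L, the headline metric in these experiments, can be sketched as an LCS-based F-measure; this is a simplified stand-in for the official implementation, with the beta weighting an assumption.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def rouge_l(candidate, reference, beta=1.2):
    c, r = candidate.lower().split(), reference.lower().split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return (1 + beta**2) * p * rec / (rec + beta**2 * p)

print(rouge_l("there are 16 tablespoons in a cup",
              "16 tablespoons in a cup"))  # partial overlap scores in (0, 1]
```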
# 5.1.1 Generative Model Experiments
The following experiments were run on the v1.1 dataset.
Recurrent Neural Networks (RNNs) are capable of predicting future elements in a sequence from those that precede them. They are often used as generative language models for various NLP tasks, such as machine translation [Bahdanau et al., 2014] and question-answering [Hermann et al., 2015a]. In this QA experiment setup, we target training and evaluation of such generative models, which predict the human-generated answers given questions and/or contextual passages as model input.
Sequence-to-Sequence (Seq2Seq) Model. We train a vanilla Seq2Seq [Sutskever et al., 2014] model with the question-answer pair as source-target sequences.
Memory Networks Model. We adapt end-to-end memory networks [Sukhbaatar et al., 2015], which have previously demonstrated good performance on other QA tasks, by using the summed memory representation as the initial state of the RNN decoder.
Discriminative Model. For comparison, we also train a discriminative model to rank provided passages as a baseline. This is a variant of [Huang et al., 2013] where we use LSTM [Hochreiter and Schmidhuber, 1997] in place of multi-layer perceptron (MLP).
Table 4 shows the performance of these models using the ROUGE-L metric. Additionally, we evaluate the memory networks model on an MS MARCO subset where questions have multiple answers. Table 5 shows the performance of the model as measured by BLEU and its pairwise variant pa-BLEU [Mitra et al., 2016].
7
Table 4: ROUGE-L of Different QA Models Tested against a Subset of MS MARCO
| Model | Description |
|---|---|
| Best Passage | Best ROUGE-L of any passage |
| Passage Ranking | A DSSM-alike passage ranking model |
| Sequence to Sequence | Vanilla seq2seq model predicting answers from questions |
| Memory Network | Seq2seq model with MemNN for passages |
Table 5: BLEU and pa-BLEU on a Multi-Answer Subset of MS MARCO
| Model | BLEU | pa-BLEU |
|---|---|---|
| Best Passage | 0.359 | - |
| Memory Network | 0.340 | - |
# 5.1.2 Cloze-Style Model Experiments
In Cloze-style tests, a model is required to predict missing words in a text sequence by considering contextual information in textual format. The CNN and Daily Mail dataset [Hermann et al., 2015b] is an example of such a cloze-style QA dataset. In this section, we present the performance of two MRC models using both the CNN test dataset and an MS MARCO subset. The subset is filtered to the numeric answer-type category, to which cloze-style tests are applicable.
• Attention Sum Reader (AS Reader): AS Reader [Kadlec et al., 2016] is a simple model that uses attention to directly pick the answer from the context.

• ReasoNet: ReasoNet [Shen et al., 2016] also relies on attention, but is also a dynamic multi-turn model that attempts to exploit and reason over the relation among questions, contexts, and answers.
We show model accuracy numbers on both datasets in Table 6, and precision-recall curves on the MS MARCO subset in Figure 2.
# 5.2 Experimental results on v2.1 dataset
The human baseline on our v1.1 benchmark was surpassed by competing machine-learned models in approximately 15 months. For the v2.1 dataset, we revisit our approach to generating the human baseline. We select five top-performing editors, based on their performance on a set of auditing questions, to create a human baseline task group. We randomly sample 1,427 questions from our evaluation set and ask each of these editors to produce a new assessment. Then, we compare all our editorial answers to the ground truth and select the answer with the best ROUGE-L score as the candidate answer. Table 7 shows the results. We evaluate the answer set on both the novice and the intermediate task, and we include questions that have no answer.
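The best-of-ensemble selection described here amounts to a one-line maximization; the sketch below uses an invented unigram-recall metric in place of ROUGE-L purely for illustration.

```python
def ensemble_best(editor_answers, reference, metric):
    """Keep the editor answer that scores best against the ground truth
    under `metric`, any (candidate, reference) -> float function."""
    best = max(editor_answers, key=lambda a: metric(a, reference))
    return best, metric(best, reference)

def unigram_recall(cand, ref):
    # Toy stand-in metric, not the paper's ROUGE-L.
    r = ref.lower().split()
    return sum(w in cand.lower().split() for w in r) / len(r)

answers = ["in 1996", "the year 1996", "unknown"]
print(ensemble_best(answers, "1996", unigram_recall))  # ('in 1996', 1.0)
```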
To provide a competitive experimental baseline for our dataset, we trained the model introduced in [Clark and Gardner, 2017]. This model uses recent ideas in reading comprehension research, like self-attention [Cheng et al., 2016] and bi-directional attention [Seo et al., 2016]. Our goal is to train this model such that, given a question and a passage that contains an answer to the question, the model identiï¬es the answer (or span) in the passage. This is similar to the task in SQuAD [Rajpurkar et al., 2016]. First, we select the question-passage pairs where the passage contains an answer to the question and the answer is a contiguous set of words from the passage. Then, we train the model to predict a span for each question-passage pair and output a conï¬dence score. To evaluate the model,
Table 6: Accuracy of MRC Models on Numeric Segment of MS MARCO
| Model | MS MARCO | CNN (test) |
|---|---|---|
| AS Reader | 55.0 | 69.5 |
| ReasoNet | 58.9 | 74.7 |
8
Figure 2: Precision-Recall of Machine Reading Comprehension Models on MS MARCO Subset of Numeric Category
Table 7: Performance of MRC Span Model and Human Baseline on MS Marco Tasks
| Task | ROUGE-L | BLEU-1 |
|---|---|---|
| BiDaF on Original | 0.268 | 0.094 |
| Human Ensemble on Novice | 0.73703 | 0.46771 |
| Human Ensemble on Intermediate | 0.63044 | 0.45439 |
| BiDaF on V2 Novice | 0.094 | - |
| BiDaF on V2 Intermediate | 0.070 | - |
for each question we chose the model-generated answer that has the highest confidence score among all passages available for that question. To compare model performance across datasets we ran this exact setup (training and evaluation) on the original dataset and the new V2 tasks. Table 7 shows the results. The results indicate that the new v2.1 dataset is more difficult than the previous v1.1 version. On the novice task, BiDaF cannot determine when the question is not answerable and thus performs substantially worse than on the v1.1 dataset. On the intermediate task, BiDaF performance once again drops because the model only uses vocabulary present in the passage, whereas the well-formed answers may include words from the general vocabulary.
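The evaluation-time answer selection can be sketched as follows; the `span_model` callable and its (answer, confidence) return signature are assumptions for illustration, not the actual model interface.

```python
def answer_question(question, passages, span_model):
    """Run the span model on each candidate passage and keep the
    highest-confidence answer."""
    candidates = [span_model(question, p) for p in passages]
    answer, confidence = max(candidates, key=lambda c: c[1])
    return answer

def toy_span_model(question, passage):
    # Toy stand-in scoring passages by word overlap with the question.
    q = set(question.lower().split())
    words = passage.lower().split()
    conf = sum(w in q for w in words) / (len(words) or 1)
    return passage, conf

print(answer_question("when did the lockdown begin",
                      ["The lockdown began at 10:30 a.m.",
                       "No shots were fired."],
                      toy_span_model))
```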
# 6 Future Work and Conclusions
The process of developing the MS MARCO dataset and making it publicly available has been a tremendous learning experience. Between the first version of the dataset and the most recent edition, we have significantly modified how we collect and annotate the data, the definition of our tasks, and even broadened our scope to cater to the neural IR community. The future of this dataset will depend largely on how the broader academic community makes use of it. For example, we believe that the size and the underlying use of Bing's search queries and web documents in the construction of the dataset make it particularly attractive for benchmarking new machine learning models for MRC and neural IR. But in addition to improving these ML models, the dataset may also prove useful for exploring new metrics, e.g., ROUGE-2 [Ganesan, 2018] and ROUGE-AR [Maples, 2017], and robust evaluation strategies. Similarly, combining MS MARCO with other existing MRC datasets may also be interesting in the context of multi-task and cross-domain learning. We want to engage with the community to get their feedback and guidance on how we can make it easier to enable such new explorations using the MS MARCO data. If there is enough interest, we may also consider generating similar datasets in other languages in the future, or augmenting the existing dataset with other information from the web.
9
# References
Amazon Alexa. Amazon alexa. http://alexa.amazon.com/, 2018.
Amazon Echo. Amazon echo. https://en.wikipedia.org/wiki/Amazon_Echo, 2018.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
S. Banerjee and A. Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, volume 29, pages 65-72, 2005.
J. Cheng, L. Dong, and M. Lapata. Long short-term memory-networks for machine reading. CoRR, abs/1601.06733, 2016. URL http://arxiv.org/abs/1601.06733.
C. Clark and M. Gardner. Simple and effective multi-paragraph reading comprehension. CoRR, abs/1710.10723, 2017. URL http://arxiv.org/abs/1710.10723.
P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. 2018.
Cortana. Cortana personal assistant. http://www.microsoft.com/en-us/mobile/experiences/cortana/, 2018.

G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30-42, 2012.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. CVPR, 2009. URL http://www.image-net.org/papers/imagenet_cvpr09.pdf.

L. Deng and X. Huang. Challenges in adopting speech recognition. Communications of the ACM, 47(1):69-75, 2004.
M. Dunn, L. Sagun, M. Higgins, V. U. Güney, V. Cirik, and K. Cho. Searchqa: A new q&a dataset augmented with context from a search engine. CoRR, abs/1704.05179, 2017.
B. H. Frank. Google brain chief: Deep learning takes at least 100,000 examples. https://venturebeat.com/2017/10/23/google-brain-chief-says-100000-examples-is-enough-data-for-deep-learning/, 2017.
K. Ganesan. Rouge 2.0: Updated and improved measures for evaluation of summarization tasks. 2018.
J. Gao, M. Galley, and L. Li. Neural approaches to conversational ai. arXiv preprint arXiv:1809.08267, 2018.
T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. III, and K. Crawford. Datasheets for datasets. 2018.
Google Assistant. Google assistant. https://assistant.google.com/, 2018.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2015. URL https://arxiv.org/abs/1512.03385.
W. He, K. Liu, Y. Lyu, S. Zhao, X. Xiao, Y. Liu, Y. Wang, H. Wu, Q. She, X. Liu, T. Wu, and H. Wang. Dureader: a chinese machine reading comprehension dataset from real-world applications. CoRR, abs/1711.05073, 2017.
K. M. Hermann, T. Kociský, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. 2015a. URL https://arxiv.org/abs/1506.03340.
K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701, 2015b.
G. Hinton, L. Deng, D. Yu, G. Dalh, and A. Mohamed. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82â97, 2012.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
10
P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2333-2338. ACM, 2013.
R. Kadlec, M. Schmid, O. Bajgar, and J. Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016.
T. Kociský, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette. The narrativeqa reading comprehension challenge. CoRR, abs/1712.07040, 2017.
G. Lai, Q. Xie, H. Liu, Y. Yang, and E. H. Hovy. Race: Large-scale reading comprehension dataset from examinations. In EMNLP, 2017.
C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004.
S. Maples. The rouge-ar: A proposed extension to the rouge evaluation metric for abstractive text summarization. 2017.
B. Mitra and N. Craswell. An introduction to neural information retrieval. Foundations and Trends® in Information Retrieval (to appear), 2018.
B. Mitra, G. Simon, J. Gao, N. Craswell, and L. J. Deng. A proposal for evaluating answer distillation from web data. 2016.
K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311â318. Association for Computational Linguistics, 2002.
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. Squad: 100,000+ questions for machine comprehension of text. 2016. URL https://arxiv.org/abs/1606.05250.
P. Rajpurkar, R. Jia, and P. Liang. Know what you donât know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
S. Robertson, H. Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends®) in Information Retrieval, 3(4):333-389, 2009.
M. J. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. Bidirectional attention ï¬ow for machine comprehension. CoRR, abs/1611.01603, 2016.
Y. Shen, P.-S. Huang, J. Gao, and W. Chen. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284, 2016.
Siri. Siri personal assistant. http://www.apple.com/ios/siri/, 2018.
S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In Advances in neural information processing systems, pages 2440â2448, 2015.
I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215, 2014. URL http://arxiv.org/abs/1409.3215.
A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman. Newsqa: A machine comprehension dataset. In Rep4NLP@ACL, 2017.
J. Weston, A. Bordes, S. Chopra, A. M. Rush, B. van Merrienboer, A. Joulin, and T. Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. 2015. URL https://arxiv.org/abs/ 1502.05698.
A. Wissner-Gross. Datasets over algorithms. Edge. com. Retrieved, 8, 2016.
S. Zhang, X. Liu, J. Liu, J. Gao, K. Duh, and B. Van Durmeâ . Record: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885, 2018.
11 | {
"id": "1810.12885"
} |
1611.08669 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 |

arXiv:1611.08669v5 [cs.CV] 1 Aug 2017
# Visual Dialog
Abhishek Das1, Satwik Kottur2, Khushi Gupta2*, Avi Singh3*, Deshraj Yadav4, José M.F. Moura2, Devi Parikh1, Dhruv Batra1
1Georgia Institute of Technology, 2Carnegie Mellon University, 3UC Berkeley, 4Virginia Tech
1{abhshkdz, parikh, dbatra}@gatech.edu  2{skottur, khushig, moura}@andrew.cmu.edu  3avisingh@cs.berkeley.edu  4deshraj@vt.edu
visualdialog.org
# Abstract
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders – Late Fusion, Hierarchical Recurrent Encoder and Memory Network – and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify the gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first "visual chatbot"! Our dataset, code, trained models and visual chatbot are available at https://visualdialog.org.

[Figure 1 image: an example dialog about a photo captioned "A cat drinking water out of a coffee mug". Q: What color is the mug? A: White and red. Q: Are there any pictures on it? A: No, something is there, can't tell what it is. Q: Is the mug and cat on a table? A: Yes, they are. Q: Are there other items on the table? A: Yes, magazines, books, toaster and basket, and a plate.]

Figure 1: We introduce a new AI task – Visual Dialog, where an AI agent must hold a dialog with a human about visual content. We introduce a large-scale dataset (VisDial), an evaluation protocol, and novel encoder-decoder models for this task.
# 1. Introduction

We are witnessing unprecedented advances in computer vision (CV) and artificial intelligence (AI) – from "low-level" AI tasks such as image classification [20], scene recognition [63], object detection [34] – to "high-level" AI tasks such as learning to play Atari video games [42] and Go [55], answering reading comprehension questions by understanding short stories [21, 65], and even answering questions about images [6, 39, 49, 71] and videos [57, 58]! What lies next for AI? We believe that the next generation of visual intelligence systems will need to possess the ability to hold a meaningful dialog with humans in natural language about visual content. Applications include:
• Aiding visually impaired users in understanding their surroundings [7] or social media content [66] (AI: "John just uploaded a picture from his vacation in Hawaii", Human: "Great, is he at the beach?", AI: "No, on a mountain").
• Aiding analysts in making decisions based on large quantities of surveillance data (Human: "Did anyone enter this room last week?", AI: "Yes, 27 instances logged on camera", Human: "Were any of them carrying a black bag?"),
*Work done while KG and AS were interns at Virginia Tech.
[Figure 2 panels. Captioning: "Two people are in a wheelchair and one is holding a racket." VQA: Q: How many people on wheelchairs? A: Two. Visual Dialog (two partial dialogs): Q: How many people are on wheelchairs? A: Two. Q: What are their genders? A: One male and one female. Q: Which one is holding a racket? A: The woman. Q: How many wheelchairs? A: One. / Q: What is the gender of the one in the white shirt? A: She is a woman. Q: What is she doing? A: Playing a Wii game. Q: Is that a man to her right? A: No, it's a woman.]

Figure 2: Differences between image captioning, Visual Question Answering (VQA) and Visual Dialog. Two (partial) dialogs are shown from our VisDial dataset, which is curated from a live chat between two Amazon Mechanical Turk workers (Sec. 3).
• Interacting with an AI assistant (Human: "Alexa – can you see the baby in the baby monitor?", AI: "Yes, I can", Human: "Is he sleeping or playing?").
• Robotics applications (e.g. search and rescue missions) where the operator may be "situationally blind" and operating via language [40] (Human: "Is there smoke in any room around you?", AI: "Yes, in one room", Human: "Go there and look for people").
Despite rapid progress at the intersection of vision and language – in particular, in image captioning and visual question answering (VQA) – it is clear that we are far from this grand goal of an AI agent that can "see" and "communicate". In captioning, the human-machine interaction consists of the machine simply talking at the human ("Two people are in a wheelchair and one is holding a racket"), with no dialog or input from the human. While VQA takes a significant step towards human-machine interaction, it still represents only a single round of a dialog – unlike in human conversations, there is no scope for follow-up questions, no memory in the system of previous questions asked by the user nor consistency with respect to previous answers provided by the system (Q: "How many people on wheelchairs?", A: "Two"; Q: "How many wheelchairs?", A: "One"). As a step towards conversational visual AI, we introduce a novel task – Visual Dialog – along with a large-scale dataset, an evaluation protocol, and novel deep models.

Task Definition. The concrete task in Visual Dialog is the following – given an image I, a history of a dialog consisting of a sequence of question-answer pairs (Q1: "How many people are in wheelchairs?", A1: "Two", Q2: "What are their genders?", A2: "One male and one female"), and a natural language follow-up question (Q3: "Which one is holding a racket?"), the task for the machine is to answer the question in free-form natural language (A3: "The woman"). This task is the visual analogue of the Turing Test.
Consider the Visual Dialog examples in Fig. 2. The question "What is the gender of the one in the white shirt?" requires the machine to selectively focus and direct attention to a relevant region. "What is she doing?" requires co-reference resolution (whom does the pronoun "she" refer to?), and "Is that a man to her right?" further requires the machine to have visual memory (which object in the image were we talking about?). Such systems also need to be consistent with their outputs – "How many people are in wheelchairs?", "Two", "What are their genders?", "One male and one female" – note that the number of genders being specified should add up to two. Such difficulties make the problem a highly interesting and challenging one.

Why do we talk to machines? Prior work in language-only (non-visual) dialog can be arranged on a spectrum with the following two end-points: goal-driven dialog (e.g. booking a flight for a user) ↔ goal-free dialog (or casual "chit-chat" with chatbots). The two ends have vastly differing purposes and conflicting evaluation criteria. Goal-driven dialog is typically evaluated on task-completion rate (how frequently was the user able to book their flight) or time to task completion [14, 44] – clearly, the shorter the dialog the better. In contrast, for chit-chat, the longer the user engagement and interaction, the better. For instance, the goal of the 2017 $2.5 Million Amazon Alexa Prize is to "create a socialbot that converses coherently and engagingly with humans on popular topics for 20 minutes."
We believe our instantiation of Visual Dialog hits a sweet spot on this spectrum. It is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded enough in vision to allow objective evaluation of individual responses and benchmark progress. The former discourages task-engineered bots for "slot filling" [30] and the latter discourages bots that put on a personality to avoid answering questions while keeping the user engaged [64].

Contributions. We make the following contributions:
• We propose a new AI task: Visual Dialog, where a machine must hold dialog with a human about visual content.
• We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). Upon completion1, VisDial will contain 1 dialog each (with 10 question-answer pairs) on ~140k images from the COCO dataset [32], for a total of ~1.4M dialog question-answer pairs. When compared to VQA [6], VisDial studies a significantly richer task (dialog), overcomes a "visual priming bias" in VQA (in VisDial, the questioner does not see the image), contains free-form longer answers, and is an order of magnitude larger.
1 VisDial data on COCO-train (~83k images) and COCO-val (~40k images) is already available for download at https://visualdialog.org. Since dialog history contains the ground-truth caption, we will not be collecting dialog data on COCO-test. Instead, we will collect dialog data on 20k extra images from COCO distribution (which will be provided to us by the COCO team) for our test set.
• We introduce a family of neural encoder-decoder models for Visual Dialog with 3 novel encoders
  – Late Fusion: that embeds the image, history, and question into vector spaces separately and performs a "late fusion" of these into a joint embedding.
  – Hierarchical Recurrent Encoder: that contains a dialog-level Recurrent Neural Network (RNN) sitting on top of a question-answer (QA)-level recurrent block. In each QA-level recurrent block, we also include an attention-over-history mechanism to choose and attend to the round of the history relevant to the current question.
  – Memory Network: that treats each previous QA pair as a "fact" in its memory bank and learns to "poll" the stored facts and the image to develop a context vector.
  We train all these encoders with 2 decoders (generative and discriminative) – all settings outperform a number of sophisticated baselines, including our adaption of state-of-the-art VQA models to VisDial.
• We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a list of candidate answers and evaluated on metrics such as mean-reciprocal-rank of the human response.
• We conduct studies to quantify human performance.
• Putting it all together, on the project page we demonstrate the first visual chatbot!
# 2. Related Work
Vision and Language. A number of problems at the intersection of vision and language have recently gained prominence – image captioning [15, 16, 27, 62], video/movie description [51, 59, 60], text-to-image coreference/grounding [10, 22, 29, 45, 47, 50], visual storytelling [4, 23], and of course, visual question answering (VQA) [3, 6, 12, 17, 19, 37–39, 49, 69]. However, all of these involve (at most) a single-shot natural language interaction – there is no dialog. Concurrent with our work, two recent works [13, 43] have also begun studying visually-grounded dialog.

Visual Turing Test. Closely related to our work is that of Geman et al. [18], who proposed a fairly restrictive "Visual Turing Test" – a system that asks templated, binary questions. In comparison, 1) our dataset has free-form, open-ended natural language questions collected via two subjects chatting on Amazon Mechanical Turk (AMT), resulting in a more realistic and diverse dataset (see Fig. 5). 2) The dataset in [18] only contains street scenes, while our dataset has considerably more variety since it uses images from COCO [32]. Moreover, our dataset is two orders of magnitude larger – 2,591 images in [18] vs ~140k images, 10 question-answer pairs per image, total of ~1.4M QA pairs.

Text-based Question Answering. Our work is related to text-based question answering or "reading comprehension" tasks studied in the NLP community. Some recent large-scale datasets in this domain include the 30M Factoid Question-Answer corpus [52], 100K SimpleQuestions dataset [8], DeepMind Q&A dataset [21], the 20 artificial tasks in the bAbI dataset [65], and the SQuAD dataset for reading comprehension [46]. VisDial can be viewed as a fusion of reading comprehension and VQA. In VisDial, the machine must comprehend the history of the past dialog and then understand the image to answer the question. By design, the answer to any question in VisDial is not present in the past dialog – if it were, the question would not be asked. The history of the dialog contextualizes the question – the question "what else is she holding?" requires a machine to comprehend the history to realize who the question is talking about and what has been excluded, and then understand the image to answer the question.

Conversational Modeling and Chatbots. Visual Dialog is the visual analogue of text-based dialog and conversation modeling. While some of the earliest developed chatbots were rule-based [64], end-to-end learning based approaches are now being actively explored [9, 14, 26, 31, 53, 54, 61]. A recent large-scale conversation dataset is the Ubuntu Dialogue Corpus [35], which contains about 500K dialogs extracted from the Ubuntu channel on Internet Relay Chat (IRC). Liu et al. [33] perform a study of problems in existing evaluation protocols for free-form dialog. One important difference between free-form textual dialog and VisDial is that in VisDial, the two participants are not symmetric – one person (the "questioner") asks questions about an image that they do not see; the other person (the "answerer") sees the image and only answers the questions (in otherwise unconstrained text, but no counter-questions allowed). This role assignment gives a sense of purpose to the interaction (why are we talking? To help the questioner build a mental model of the image), and allows objective evaluation of individual responses.
# 3. The Visual Dialog Dataset (VisDial)
We now describe our VisDial dataset. We begin by describing the chat interface and data-collection process on AMT, analyze the dataset, then discuss the evaluation protocol.

Consistent with previous data collection efforts, we collect visual dialog data on images from the Common Objects in Context (COCO) [32] dataset, which contains multiple objects in everyday scenes. The visual complexity of these images allows for engaging and diverse conversations.

Live Chat Interface. Good data for this task should include dialogs that have (1) temporal continuity, (2) grounding in the image, and (3) mimic natural "conversational" exchanges. To elicit such responses, we paired 2 workers on AMT to chat with each other in real-time (Fig. 3). Each worker was assigned a specific role. One worker (the "questioner") sees only a single line of text describing an image (caption from COCO); the image remains hidden to the questioner.
[Figure 3 screenshots: the live chat interface for an image captioned "A sink and toilet in a small room", as seen by the worker asking questions and by the worker answering them, plus a full example dialog (e.g. "Is this a bathroom?" "Yes, it's a bathroom." "What color is the room?" "It looks cream colored." ... "What color is the door?" "The door is tan colored.").]

(a) What the "questioner" sees. (b) What the "answerer" sees. (c) Example dialog from our VisDial dataset.

Figure 3: Collecting visually-grounded dialog data on Amazon Mechanical Turk via a live chat interface where one person is assigned the role of "questioner" and the second person is the "answerer". We show the first two questions being collected via the interface as Turkers interact with each other in Fig. 3a and Fig. 3b. Remaining questions are shown in Fig. 3c.
Their task is to ask questions about this hidden image to "imagine the scene better". The second worker (the "answerer") sees the image and caption. Their task is to answer questions asked by their chat partner. Unlike VQA [6], answers are not restricted to be short or concise, instead workers are encouraged to reply as naturally and "conversationally" as possible. Fig. 3c shows an example dialog.

This process is an unconstrained "live" chat, with the only exception that the questioner must wait to receive an answer before posting the next question. The workers are allowed to end the conversation after 20 messages are exchanged (10 pairs of questions and answers). Further details about our final interface can be found in the supplement.
We also piloted a different setup where the questioner saw a highly blurred version of the image, instead of the caption. The conversations seeded with blurred images resulted in questions that were essentially "blob recognition" – "What is the pink patch at the bottom right?". For our full-scale data-collection, we decided to seed with just the captions since it resulted in more "natural" questions and more closely modeled the real-world applications discussed in Section 1 where no visual signal is available to the human.

Building a 2-person chat on AMT. Despite the popularity of AMT as a data collection platform in computer vision, our setup had to design for and overcome some unique challenges – the key issue being that AMT is simply not designed for multi-user Human Intelligence Tasks (HITs). Hosting a live two-person chat on AMT meant that none of the Amazon tools could be used and we developed our own backend messaging and data-storage infrastructure based on Redis messaging queues and Node.js. To support data quality, we ensured that a worker could not chat with themselves (using say, two different browser tabs) by maintaining a pool of paired worker IDs. To minimize wait time for one worker while the second was being searched for, we ensured that there was always a significant pool of available HITs. If one of the workers abandoned a HIT (or was disconnected) midway, automatic conditions in the code kicked in asking the remaining worker to either continue asking questions or providing facts (captions) about the image (depending on their role) till 10 messages were sent by them. Workers who completed the task in this way were fully compensated, but our backend discarded this data and automatically launched a new HIT on this image so a real two-person conversation could be recorded. Our entire data-collection infrastructure (front-end UI, chat interface, backend storage and messaging system, error handling protocols) is publicly available2.

# 4. VisDial Dataset Analysis

We now analyze the v0.9 subset of our VisDial dataset – it contains 1 dialog (10 QA pairs) on ~123k images from COCO-train/val, a total of 1,232,870 QA pairs.
# 4.1. Analyzing VisDial Questions
Visual Priming Bias. One key difference between VisDial and previous image question-answering datasets (VQA [6], Visual 7W [70], Baidu mQA [17]) is the lack of a "visual priming bias" in VisDial. Specifically, in all previous datasets, subjects saw an image while asking questions about it. As analyzed in [3, 19, 69], this leads to a particular bias in the questions – people only ask "Is there a clocktower in the picture?" on pictures actually containing clock towers. This allows language-only models to perform remarkably well on VQA and results in an inflated sense of progress [19, 69]. As one particularly perverse example – for questions in the VQA dataset starting with "Do you see a . . . ", blindly answering "yes" without reading the rest of the question or looking at the associated image results in an average VQA accuracy of 87%! In VisDial, questioners do not see the image. As a result, this bias is reduced.
2 https://github.com/batra-mlp-lab/visdial-amt-chat
[Figure 4 plots: (a) distribution of question and answer lengths (percentage vs. number of words in sentence); (b) percentage coverage of all answers vs. number of unique answers (x 10000), for VQA and Visual Dialog.]

Figure 4: Distribution of lengths for questions and answers (left); and percent coverage of unique answers over all answers from the train dataset (right), compared to VQA. For a given coverage, VisDial has more unique answers indicating greater answer diversity.
Distributions. Fig. 4a shows the distribution of question lengths in VisDial – we see that most questions range from four to ten words. Fig. 5 shows "sunbursts" visualizing the distribution of questions (based on the first four words) in VisDial vs. VQA. While there are a lot of similarities, some differences immediately jump out. There are more binary questions3 in VisDial as compared to VQA – the most frequent first question-word in VisDial is "is" vs. "what" in VQA. A detailed comparison of the statistics of VisDial vs. other datasets is available in Table 1 in the supplement.
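Both distributions above are simple counting exercises. Below is a minimal Python sketch; the tokenization (lowercasing, stripping the trailing question mark, whitespace splitting) is our assumption rather than the paper's exact preprocessing:

```python
from collections import Counter

def question_stats(questions):
    """Question-length histogram (Fig. 4a) and first-trigram counts
    (the kind of statistic visualized in the Fig. 5 sunbursts)."""
    length_counts, first_trigram_counts = Counter(), Counter()
    for q in questions:
        words = q.lower().rstrip("?").split()
        length_counts[len(words)] += 1
        first_trigram_counts[" ".join(words[:3])] += 1
    return length_counts, first_trigram_counts

# Toy usage:
lengths, trigrams = question_stats(["Is there only 1 elephant?",
                                    "Is it full grown?"])
```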
Finally, there is a stylistic difference in the questions that is difficult to capture with the simple statistics above. In VQA, subjects saw the image and were asked to stump a smart robot. Thus, most queries involve specific details, often about the background ("What program is being utilized in the background on the computer?"). In VisDial, questioners did not see the original image and were asking questions to build a mental model of the scene. Thus, the questions tend to be open-ended, and often follow a pattern:
• Generally starting with the entities in the caption:
"An elephant walking away from a pool in an exhibit", "Is there only 1 elephant?",
• digging deeper into their parts or attributes:
"Is it full grown?", "Is it facing the camera?",
• asking about the scene category or the picture setting:
"Is this indoors or outdoors?", "Is this a zoo?",
• the weather:
"Is it snowing?", "Is it sunny?",
• simply exploring the scene:
"Are there people?", "Is there shelter for elephant?",
• and asking follow-up questions about the new visual entities discovered from these explorations:
"There's a blue fence in background, like an enclosure", "Is the enclosure inside or outside?".

3 Questions starting in "Do", "Did", "Have", "Has", "Is", "Are", "Was", "Were", "Can", "Could".
# 4.2. Analyzing VisDial Answers
Answer Lengths. Fig. 4a shows the distribution of answer lengths. Unlike previous datasets, answers in VisDial are longer and more descriptive – mean-length 2.9 words (VisDial) vs 1.1 (VQA), 2.0 (Visual 7W), 2.8 (Visual Madlibs).
Fig. 4b shows the cumulative coverage of all answers (y-axis) by the most frequent answers (x-axis). The difference between VisDial and VQA is stark – the top-1000 answers in VQA cover ~83% of all answers, while in VisDial that figure is only ~63%. There is a significant heavy tail in VisDial – most long strings are unique, and thus the coverage curve in Fig. 4b becomes a straight line with slope 1. In total, there are 337,527 unique answers in VisDial v0.9.

Answer Types. Since the answers in VisDial are longer strings, we can visualize their distribution based on the starting few words (Fig. 5c). An interesting category of answers emerges – "I think so", "I can't tell", or "I can't see" – expressing doubt, uncertainty, or lack of information. This is a consequence of the questioner not being able to see the image – they are asking contextually relevant questions, but not all questions may be answerable with certainty from that image. We believe this is rich data for building more human-like AI that refuses to answer questions it doesn't have enough information to answer. See [48] for a related, but complementary effort on question relevance in VQA.

Binary Questions vs Binary Answers. In VQA, binary questions are simply those with "yes", "no", "maybe" as answers [6]. In VisDial, we must distinguish between binary questions and binary answers. Binary questions are those starting in "Do", "Did", "Have", "Has", "Is", "Are", "Was", "Were", "Can", "Could". Answers to such questions can (1) contain only "yes" or "no", (2) begin with "yes", "no", and contain additional information or clarification, (3) involve ambiguity ("It's hard to see", "Maybe"), or (4) answer the question without explicitly saying "yes" or "no" (Q: "Is there any type of design or pattern on the cloth?", A: "There are circles and lines on the cloth"). We call answers that contain "yes" or "no" as binary answers – 149,367 and 76,346 answers in subsets (1) and (2) from above respectively. Binary answers in VQA are biased towards "yes" [6, 69] – 61.40% of yes/no answers are "yes". In VisDial, the trend is reversed. Only 46.96% are "yes" for all yes/no responses. This is understandable since workers did not see the image, and were more likely to end up with negative responses.
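The coverage curve in Fig. 4b is a cumulative statistic over answer frequencies. A minimal sketch, assuming answers are compared after lowercasing and stripping whitespace (our simplification):

```python
from collections import Counter

def coverage_curve(answers):
    """curve[k-1] = fraction of all answers covered by the k most
    frequent unique answers (the quantity plotted in Fig. 4b)."""
    counts = Counter(a.strip().lower() for a in answers)
    total = sum(counts.values())
    covered, curve = 0, []
    for _, c in counts.most_common():
        covered += c
        curve.append(covered / total)
    return curve

# For VisDial, curve[999] is ~0.63; for VQA it is ~0.83 (see text above).
```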
# 4.3. Analyzing VisDial Dialog
In Section 4.1, we discussed a typical flow of dialog in VisDial. We analyze two quantitative statistics here.
(a) VisDial Questions (b) VQA Questions (c) VisDial Answers
Figure 5: Distribution of first n-grams for (left to right) VisDial questions, VQA questions and VisDial answers. Word ordering starts towards the center and radiates outwards, and arc length is proportional to number of questions containing the word.
Coreference in dialog. Since language in VisDial is the result of a sequential conversation, it naturally contains pronouns – "he", "she", "his", "her", "it", "their", "they", "this", "that", "those", etc. In total, 38% of questions, 19% of answers, and nearly all (98%) dialogs contain at least one pronoun, thus confirming that a machine will need to overcome coreference ambiguities to be successful on this task. We find that pronoun usage is low in the first round (as expected) and then picks up in frequency. A fine-grained per-round analysis is available in the supplement.

Temporal Continuity in Dialog Topics. It is natural for conversational dialog data to have continuity in the "topics" being discussed. We have already discussed qualitative differences in VisDial questions vs. VQA. In order to quantify the differences, we performed a human study where we manually annotated question "topics" for 40 images (a total of 400 questions), chosen randomly from the val set. The topic annotations were based on human judgement with a consensus of 4 annotators, with topics such as: asking about a particular object ("What is the man doing?"), scene ("Is it outdoors or indoors?"), weather ("Is the weather sunny?"), the image ("Is it a color image?"), and exploration ("Is there anything else?"). We performed similar topic annotation for questions from VQA for the same set of 40 images, and compared topic continuity in questions. Across 10 rounds, VisDial questions have 4.55 ± 0.17 topics on average, confirming that these are not independent questions. Recall that VisDial has 10 questions per image as opposed to 3 for VQA. Therefore, for a fair comparison, we compute the average number of topics in VisDial over all subsets of 3 successive questions. For 500 bootstrap samples of batch size 40, VisDial has 2.14 ± 0.05 topics while VQA has 2.53 ± 0.09. The lower mean suggests there is more continuity in VisDial because questions do not change topics as often.
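The pronoun statistics reduce to a per-round scan over dialogs. A minimal sketch, assuming whitespace tokenization and the pronoun list quoted above:

```python
PRONOUNS = {"he", "she", "his", "her", "it", "their", "they",
            "this", "that", "those"}

def pronoun_usage_by_round(dialogs, n_rounds=10):
    """dialogs: list of dialogs, each a list of (question, answer) string
    pairs. Returns the per-round fraction of questions and of answers
    that contain at least one pronoun."""
    q_hits, a_hits, n = [0] * n_rounds, [0] * n_rounds, len(dialogs)
    for dialog in dialogs:
        for t, (q, a) in enumerate(dialog):
            q_hits[t] += any(w in PRONOUNS for w in q.lower().split())
            a_hits[t] += any(w in PRONOUNS for w in a.lower().split())
    return [h / n for h in q_hits], [h / n for h in a_hits]
```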
# 4.4. VisDial Evaluation Protocol
One fundamental challenge in dialog systems is evaluation. Similar to the state of affairs in captioning and machine translation, it is an open problem to automatically evaluate the quality of free-form answers. Existing metrics such as BLEU, METEOR, ROUGE are known to correlate poorly with human judgement in evaluating dialog responses [33].
Instead of evaluating on a downstream task [9] or holistically evaluating the entire conversation (as in goal-free chit-chat [5]), we evaluate individual responses at each round (t = 1, 2, . . . , 10) in a retrieval or multiple-choice setup. Specifically, at test time, a VisDial system is given an image I, the "ground-truth" dialog history (including the image caption) C, (Q1, A1), . . . , (Qt-1, At-1), the question Qt, and a list of N = 100 candidate answers, and asked to return a sorting of the candidate answers. The model is evaluated on retrieval metrics – (1) rank of human response (lower is better), (2) recall@k, i.e. existence of the human response in top-k ranked responses, and (3) mean reciprocal rank (MRR) of the human response (higher is better).
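All three metrics follow directly from the rank of the human response in each sorted candidate list. A minimal sketch; the input format (lists of candidate indices) is our assumption:

```python
def retrieval_metrics(ranked_lists, human_indices, ks=(1, 5, 10)):
    """ranked_lists[i]: the model's sorting of the 100 candidates for
    test instance i (as indices into the candidate list).
    human_indices[i]: index of the human response among the candidates."""
    n = len(ranked_lists)
    ranks = [r.index(h) + 1 for r, h in zip(ranked_lists, human_indices)]
    return {
        "mean_rank": sum(ranks) / n,                 # lower is better
        "mrr": sum(1.0 / r for r in ranks) / n,      # higher is better
        **{f"r@{k}": sum(r <= k for r in ranks) / n for k in ks},
    }
```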
The evaluation protocol is compatible with both discriminative models (that simply score the input candidates, e.g. via a softmax over the options, and cannot generate new answers), and generative models (that generate an answer string, e.g. via Recurrent Neural Networks) by ranking the candidates by the model's log-likelihood scores.

Candidate Answers. We generate a candidate set of correct and incorrect answers from four sets:
Correct: The ground-truth human response to the question.
Plausible: Answers to 50 most similar questions. Similar questions are those that start with similar tri-grams and mention similar semantic concepts in the rest of the question. To capture this, all questions are embedded into a vector space by concatenating the GloVe embeddings of the first three words with the averaged GloVe embeddings of the remaining words in the questions. Euclidean distances
are used to compute neighbors. Since these neighboring questions were asked on different images, their answers serve as "hard negatives".
Popular: The 30 most popular answers from the dataset – e.g. "yes", "no", "2", "1", "white", "3", "grey", "gray", "4", "yes it is". The inclusion of popular answers forces the machine to pick between likely a priori responses and plausible responses for the question, thus increasing the task difficulty.
Random: The remaining are answers to random questions in the dataset. To generate 100 candidates, we first find the union of the correct, plausible, and popular answers, and include random answers until a unique set of 100 is found.
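A sketch of this candidate-set construction is below. Here `glove` is assumed to be a word-to-300-d-vector dict and the helper names are ours; the released dataset's exact preprocessing may differ:

```python
import numpy as np

def embed_question(question, glove, dim=300):
    """Concatenate GloVe vectors of the first three words with the mean
    GloVe vector of the remaining words (zeros for out-of-vocab words)."""
    words = question.lower().rstrip("?").split()
    head = [glove.get(w, np.zeros(dim)) for w in words[:3]]
    head += [np.zeros(dim)] * (3 - len(head))
    tail = [glove.get(w, np.zeros(dim)) for w in words[3:]] or [np.zeros(dim)]
    return np.concatenate(head + [np.mean(tail, axis=0)])   # 4*dim vector

def build_candidates(correct, query_emb, train_embs, train_answers,
                     popular, all_answers, rng, n=100, n_plausible=50):
    """Union of correct, plausible (answers to the 50 nearest training
    questions by Euclidean distance) and popular answers, padded with
    random answers until a unique set of n is reached."""
    nearest = np.argsort(np.linalg.norm(train_embs - query_emb, axis=1))
    plausible = [train_answers[i] for i in nearest[:n_plausible]]
    pool = list(dict.fromkeys([correct] + plausible + list(popular)))
    while len(pool) < n:
        a = all_answers[rng.integers(len(all_answers))]
        if a not in pool:
            pool.append(a)
    return pool  # rng = np.random.default_rng(seed)
```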
# 5. Neural Visual Dialog Models
In this section, we develop a number of neural Visual Dialog answerer models. Recall that the model is given as input – an image I, the "ground-truth" dialog history (including the image caption) H = (C, (Q1, A1), . . . , (Qt-1, At-1)) – whose elements we denote H0, H1, . . . , Ht-1 respectively – the question Qt, and a list of 100 candidate answers At = {At(1), . . . , At(100)} – and asked to return a sorting of At.

At a high level, all our models follow the encoder-decoder framework, i.e. factorize into two parts – (1) an encoder that converts the input (I, H, Qt) into a vector space, and (2) a decoder that converts the embedded vector into an output. We describe choices for each component next and present experiments with all encoder-decoder combinations.

Decoders: We use two types of decoders (a scoring sketch for both follows after this list):
• Generative (LSTM) decoder: where the encoded vector is set as the initial state of the Long Short-Term Memory (LSTM) RNN language model. During training, we maximize the log-likelihood of the ground truth answer sequence given its corresponding encoded representation (trained end-to-end). To evaluate, we use the model's log-likelihood scores and rank candidate answers. Note that this decoder does not need to score options during training. As a result, such models do not exploit the biases in option creation and typically underperform models that do [25], but it is debatable whether exploiting such biases is really indicative of progress. Moreover, generative decoders are more practical in that they can actually be deployed in realistic applications.
• Discriminative (softmax) decoder: computes dot product similarity between input encoding and an LSTM encoding of each of the answer options. These dot products are fed into a softmax to compute the posterior probability over options. During training, we maximize the log-likelihood of the correct option. During evaluation, options are simply ranked based on their posterior probabilities.
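A minimal PyTorch-style sketch of how both decoders score the 100 options at evaluation time. The modules `embed`, `lstm` and `vocab_proj`, and the pre-computed encodings, are our assumptions; the authors' released implementation was built in Torch:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_options_generative(embed, lstm, vocab_proj, enc_state, options):
    """Log-likelihood of each candidate under an LSTM language model whose
    initial state (h0, c0) is derived from the encoder output."""
    scores = []
    for opt in options:                      # opt: LongTensor of token ids
        inp, tgt = opt[:-1], opt[1:]         # teacher-forced input/target
        out, _ = lstm(embed(inp).unsqueeze(1), enc_state)   # [L, 1, H]
        logp = F.log_softmax(vocab_proj(out.squeeze(1)), dim=-1)
        scores.append(logp.gather(1, tgt.unsqueeze(1)).sum().item())
    return scores                            # rank candidates by score

def score_options_discriminative(encoding, option_encodings):
    """Softmax over dot products between the input encoding [d] and the
    LSTM encodings of the 100 options [100, d]."""
    return F.softmax(option_encodings @ encoding, dim=0)
```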
Encoders: We develop 3 different encoders (listed below) that convert inputs (I, H, Qt) into a joint representation.
In all cases, we represent I via the ℓ2-normalized activations from the penultimate layer of VGG-16 [56]. For each encoder E, we experiment with all possible ablated versions: E(Qt), E(Qt, I), E(Qt, H), E(Qt, I, H) (for some encoders, not all combinations are "valid"; details below).
• Late Fusion (LF) Encoder: In this encoder, we treat H as a long string with the entire history (H0, . . . , Ht-1) concatenated. Qt and H are separately encoded with 2 different LSTMs, and individual representations of participating inputs (I, H, Qt) are concatenated and linearly transformed to a desired size of joint representation (a minimal sketch appears after the encoder descriptions below).
• Hierarchical Recurrent Encoder (HRE): In this encoder, we capture the intuition that there is a hierarchical nature to our problem – each question Qt is a sequence of words that need to be embedded, and the dialog as a whole is a sequence of question-answer pairs (Qt, At). Thus, similar to [54], as shown in Fig. 6, we propose an HRE model that contains a dialog-RNN sitting on top of a recurrent block (Rt). The recurrent block Rt embeds the question and image jointly via an LSTM (early fusion), embeds each round of the history Ht, and passes a concatenation of these to the dialog-RNN above it. The dialog-RNN produces both an encoding for this round (Et in Fig. 6) and a dialog context to pass onto the next round. We also add an attention-over-history ("Attention" in Fig. 6) mechanism allowing the recurrent block Rt to choose and attend to the round of the history relevant to the current question. This attention mechanism consists of a softmax over previous rounds (0, 1, . . . , t - 1) computed from the history and question+image encoding.
[Figure 6 diagram: a stack of recurrent blocks R0 . . . Rt, each containing an LSTM over the (question, image) pair and an attention module over the history H, feeding a dialog-RNN that emits the encoding Et.]

Figure 6: Architecture of HRE encoder with attention. At the current round Rt, the model has the capability to choose and attend to relevant history from previous rounds, based on the current question. This attention-over-history feeds into a dialog-RNN along with question to generate joint representation Et for the decoder.
• Memory Network (MN) Encoder: We develop a MN encoder that maintains each previous question and answer as a "fact" in its memory bank and learns to refer to the stored facts and image to answer the question. Specifically, we encode Qt with an LSTM to get a 512-d vector, and encode each previous round of history (H0, . . . , Ht-1) with another LSTM to get a t × 512 matrix. We compute the inner product of the question vector with each history vector to get scores over previous rounds, which are fed to a softmax to get attention-over-history probabilities. A convex combination of the history vectors using these attention probabilities gives us the "context vector", which is passed through an fc-layer and added to the question vector to construct the MN encoding. In the language of Memory Network [9], this is a "1-hop" encoding (a minimal sketch of this step follows below).
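The sketch below shows that 1-hop step. It is simplified to the history-only case (the full model also uses the image features), and the layer shapes are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryNetworkEncoder(nn.Module):
    """1-hop memory network: attend over encoded history rounds with the
    question encoding, then fuse the attended context back in."""
    def __init__(self, d=512):
        super().__init__()
        self.fc = nn.Linear(d, d)

    def forward(self, q_vec, hist_vecs):
        # q_vec: [d] LSTM encoding of Q_t.
        # hist_vecs: [t, d] LSTM encodings of H_0 .. H_{t-1}.
        att = F.softmax(hist_vecs @ q_vec, dim=0)   # attention over rounds
        context = att @ hist_vecs                   # convex combination
        return q_vec + self.fc(context)             # the MN encoding
```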
We use a "[encoder]-[input]-[decoder]" convention to refer to model-input combinations. For example, "LF-QI-D" has a Late Fusion encoder with question+image inputs (no history), and a discriminative decoder. Implementation details about the models can be found in the supplement.
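For completeness, a matching sketch of the simplest encoder, Late Fusion; the dimensions are assumptions, and the LSTM encodings of Qt and the concatenated history string are assumed to be computed upstream:

```python
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    """Concatenate the image feature with the two LSTM encodings and
    linearly project to the joint representation."""
    def __init__(self, d_img=4096, d_lstm=512, d_out=512):
        super().__init__()
        self.fusion = nn.Linear(d_img + 2 * d_lstm, d_out)

    def forward(self, img_feat, q_vec, hist_vec):
        # img_feat: [d_img] l2-normalized VGG-16 penultimate-layer feature;
        # q_vec / hist_vec: final LSTM states for Q_t and the history H.
        return self.fusion(torch.cat([img_feat, q_vec, hist_vec]))
```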
# 6. Experiments
Splits. VisDial v0.9 contains 83k dialogs on COCO-train and 40k on COCO-val images. We split the 83k into 80k for training, 3k for validation, and use the 40k as test.
Data preprocessing, hyperparameters and training details are included in the supplement.

Baselines. We compare to a number of baselines: Answer Prior: Answer options to a test question are encoded with an LSTM and scored by a linear classifier. This captures ranking by frequency of answers in our training set without resorting to exact string matching. NN-Q: Given a test question, we find k nearest neighbor questions (in GloVe space) from train, and score answer options by their mean-similarity with these k answers. NN-QI: First, we find K nearest neighbor questions for a test question. Then, we find a subset of size k based on image feature similarity. Finally, we rank options by their mean-similarity to answers to these k questions. We use k = 20, K = 100. Finally, we adapt several (near) state-of-the-art VQA models (SAN [67], HieCoAtt [37]) to Visual Dialog. Since VQA is posed as classification, we "chop" the final VQA-answer softmax from these models, feed these activations to our discriminative decoder (Section 5), and train end-to-end on VisDial. Note that our LF-QI-D model is similar to that in [36]. Altogether, these form fairly sophisticated baselines.

Results. Tab. 1 shows results for our models and baselines on VisDial v0.9 (evaluated on 40k from COCO-val). A few key takeaways – 1) As expected, all learning based models significantly outperform non-learning baselines. 2) All discriminative models significantly outperform generative models, which as we discussed is expected since discriminative models can tune to the biases in the answer options. 3) Our best generative and discriminative models are MN-QIH-G with 0.526 MRR, and MN-QIH-D with 0.597 MRR. 4) We observe that naively incorporating history doesn't help much (LF-Q vs. LF-QH and LF-QI vs. LF-QIH) or can even hurt a little (LF-QI-G vs. LF-QIH-G).
Model            MRR      R@1     R@5     R@10    Mean
Baseline
  Answer prior   0.3735   23.55   48.52   53.23   26.50
  NN-Q           0.4570   35.93   54.07   60.26   18.93
  NN-QI          0.4274   33.13   50.83   58.69   19.62
Generative
  LF-Q-G         0.5048   39.78   60.58   66.33   17.89
  LF-QH-G        0.5055   39.73   60.86   66.68   17.78
  LF-QI-G        0.5204   42.04   61.65   67.66   16.84
  LF-QIH-G       0.5199   41.83   61.78   67.59   17.07
  HRE-QH-G       -        -       -       -       -
  HRE-QIH-G      0.5237   42.29   62.18   67.92   17.07
  HREA-QIH-G     0.5242   42.28   62.33   68.17   16.79
  MN-QH-G        0.5115   40.42   61.57   67.44   17.74
  MN-QIH-G       0.5259   42.29   62.85   68.88   17.06
Discriminative
  LF-Q-D         0.5508   41.24   70.45   79.83   7.08
  LF-QH-D        0.5578   41.75   71.45   80.94   6.74
  LF-QI-D        0.5759   43.33   74.27   83.68   5.87
  LF-QIH-D       0.5807   43.82   74.68   84.07   5.78
  HRE-QIH-D      0.5846   44.67   74.50   84.22   5.72
  HREA-QIH-D     0.5868   44.82   74.81   84.36   5.66
  MN-QH-D        0.5849   44.03   75.26   84.49   5.68
  MN-QIH-D       0.5965   45.55   76.22   85.37   5.46
VQA models
  SAN1-QI-D      0.5764   43.44   74.26   83.72   5.88
  HieCoAtt-QI-D  0.5788   43.51   74.49   83.96   5.84
Table 1: Performance of methods on VisDial v0.9, measured by mean reciprocal rank (MRR), recall@k and mean rank. Higher is better for MRR and recall@k, while lower is better for mean rank. Performance on VisDial v0.5 is included in the supplement.
However, models that better encode history (MN/HRE) perform better than corresponding LF models with/without history (e.g. LF-Q-D vs. MN-QH-D). 5) Models looking at I ({LF, MN, HRE}-QIH) outperform corresponding blind models (without I).

Human Studies. We conduct studies on AMT to quantitatively evaluate human performance on this task for all combinations of {with image, without image} × {with history, without history}. We find that without image, humans perform better when they have access to dialog history. As expected, this gap narrows down when they have access to the image. Complete details can be found in the supplement.
# 7. Conclusions
To summarize, we introduce a new AI task – Visual Dialog, where an AI agent must hold a dialog with a human about visual content. We develop a novel two-person chat data-collection protocol to curate a large-scale dataset (VisDial), propose a retrieval-based evaluation protocol, and develop a family of encoder-decoder models for Visual Dialog. We quantify human performance on this task via human studies. Our results indicate that there is significant scope for improvement, and we believe this task can serve as a testbed for measuring progress towards visual intelligence.
# 8. Acknowledgements
We thank Harsh Agrawal, Jiasen Lu for help with AMT data collection; Xiao Lin, Latha Pemula for model discussions; Marco Baroni, Antoine Bordes, Mike Lewis, Marc'Aurelio Ranzato for helpful discussions. We are grateful to the developers of Torch [2] for building an excellent framework. This work was funded in part by NSF CAREER awards to DB and DP, ONR YIP awards to DP and DB, ONR Grant N00014-14-1-0679 to DB, a Sloan Fellowship to DP, ARO YIP awards to DB and DP, an Allen Distinguished Investigator award to DP from the Paul G. Allen Family Foundation, ICTAS Junior Faculty awards to DB and DP, Google Faculty Research Awards to DP and DB, Amazon Academic Research Awards to DP and DB, AWS in Education Research grant to DB, and NVIDIA GPU donations to DB. SK was supported by ONR Grant N00014-12-1-0903. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
# Appendix Overview
This supplementary document is organized as follows:
• Sec. A studies how and why VisDial is more than just a collection of independent Q&As.
• Sec. B shows qualitative examples from our dataset.
• Sec. C presents detailed human studies along with comparisons to machine accuracy. The interface for human studies is demonstrated in a video4.
• Sec. D shows snapshots of our two-person chat data-collection interface on Amazon Mechanical Turk. The interface is also demonstrated in the video4.
• Sec. E presents further analysis of VisDial, such as question types, question and answer lengths per question type. A video with an interactive sunburst visualization of the dataset is included4.
• Sec. F presents performance of our models on VisDial v0.5 test.
• Sec. G presents implementation-level training details including data preprocessing, and model architectures.
• Putting it all together, we compile a video demonstrating our visual chatbot4 that answers a sequence of questions from a user about an image. This demo uses one of our best generative models from the main paper, MN-QIH-G, and uses sampling (without any beam-search) for inference in the LSTM decoder. Note that these videos demonstrate an "unscripted" dialog – in the sense that the particular QA sequence is not present in VisDial and the model is not provided with any list of answer options.
# A. In what ways are dialogs in VisDial more than just 10 visual Q&As?
In this section, we lay out an exhaustive list of differences between VisDial and image question-answering datasets, with the VQA dataset [6] serving as the representative.
In essence, we characterize what makes an instance in VisDial more than a collection of 10 independent question-answer pairs about an image – what makes it a dialog. In order to be self-contained and exhaustive, some parts of this section repeat content from the main document.
# A.1. VisDial has longer free-form answers
Fig. 7a shows the distribution of answer lengths in VisDial, and Tab. 2 compares statistics of VisDial with existing image question answering datasets. Unlike previous datasets, answers in VisDial are longer, conversational, and more descriptive – mean-length 2.9 words (VisDial) vs 1.1 (VQA), 2.0 (Visual 7W), 2.8 (Visual Madlibs). Moreover, 37.1% of answers in VisDial are longer than 2 words while the VQA dataset has only 3.8% answers longer than 2 words.

4 https://goo.gl/yjlHxY
[Figure 7 plots: (a) distribution of question and answer lengths (percentage vs. number of words in sentence); (b) percentage coverage of all answers vs. number of unique answers (x 10000), for VQA and Visual Dialog.]

Figure 7: Distribution of lengths for questions and answers (left); and percent coverage of unique answers over all answers from the train dataset (right), compared to VQA. For a given coverage, VisDial has more unique answers indicating greater answer diversity.
Fig. 7b shows the cumulative coverage of all answers (y-axis) by the most frequent answers (x-axis). The difference between VisDial and VQA is stark – the top-1000 answers in VQA cover ~83% of all answers, while in VisDial that figure is only ~63%. There is a significant heavy tail of answers in VisDial – most long strings are unique, and thus the coverage curve in Fig. 7b becomes a straight line with slope 1. In total, there are 337,527 unique answers in VisDial (out of the 1,232,870 answers currently in the dataset).
# A.2. VisDial has co-references in dialogs
People conversing with each other tend to use pronouns to refer to already mentioned entities. Since language in VisDial is the result of a sequential conversation, it naturally contains pronouns – "he", "she", "his", "her", "it", "their", "they", "this", "that", "those", etc. In total, 38% of questions, 19% of answers, and nearly all (98%) dialogs contain at least one pronoun, thus confirming that a machine will need to overcome coreference ambiguities to be successful on this task. As a comparison, only 9% of questions and 0.25% of answers in VQA contain at least one pronoun. In Fig. 8, we see that pronoun usage is lower in the first round compared to other rounds, which is expected since there are fewer entities to refer to in the earlier rounds. The pronoun usage is also generally lower in answers than questions, which is also understandable since the answers are generally shorter than questions and thus less likely to contain pronouns. In general, the pronoun usage is fairly consistent across rounds (starting from round 2) for both questions and answers.
                     #QA        #Images   QLength     ALength    ALength>2  Top-1000A  Human Accuracy
DAQUAR [38]          12,468     1,447     11.5 ± 2.4  1.2 ± 0.5  3.4%       96.4%      -
Visual Madlibs [68]  56,468     9,688     4.9 ± 2.4   2.8 ± 2.0  47.4%      57.9%      -
COCO-QA [49]         117,684    69,172    8.7 ± 2.7   1.0 ± 0    0.0%       100%       -
Baidu [17]           316,193    316,193   -           -          -          -          -
VQA [6]              614,163    204,721   6.2 ± 2.0   1.1 ± 0.4  3.8%       82.7%      yes
Visual7W [70]        327,939    47,300    6.9 ± 2.4   2.0 ± 1.4  27.6%      63.5%      yes
VisDial (Ours)       1,232,870  123,287   5.1 ± 0.0   2.9 ± 0.0  37.1%      63.2%      yes
Table 2: Comparison of existing image question answering datasets with VisDial
[Figure 8 plot: percentage of QAs with pronouns per round (rounds 1-10), shown separately for questions and answers.]

Figure 8: Percentage of QAs with pronouns for different rounds. In round 1, pronoun usage in questions is low (in fact, almost equal to usage in answers). From rounds 2 through 10, pronoun usage is higher in questions and fairly consistent across rounds.

# A.3. VisDial has smoothness/continuity in "topics"

Qualitative Example of Topics. There is a stylistic difference in the questions asked in VisDial (compared to the questions in VQA) due to the nature of the task assigned to the subjects asking the questions. In VQA, subjects saw the image and were asked to "stump a smart robot". Thus, most queries involve specific details, often about the background (Q: "What program is being utilized in the background on the computer?"). In VisDial, questioners did not see the original image and were asking questions to build a mental model of the scene. Thus, the questions tend to be open-ended, and often follow a pattern:

• Generally starting with the entities in the caption:
"An elephant walking away from a pool in an exhibit", "Is there only 1 elephant?",
• digging deeper into their parts, attributes, or properties:
"Is it full grown?", "Is it facing the camera?",
• asking about the scene category or the picture setting:
"Is this indoors or outdoors?", "Is this a zoo?",
• the weather:
"Is it snowing?", "Is it sunny?",
• simply exploring the scene:
"Are there people?", "Is there shelter for elephant?",
• and asking follow-up questions about the new visual entities discovered from these explorations:
"There's a blue fence in background, like an enclosure", "Is the enclosure inside or outside?".

Such a line of questioning does not exist in the VQA dataset, where the subjects were shown the questions already asked about an image, and explicitly instructed to ask about different entities [6].

Counting the Number of Topics. In order to quantify these qualitative differences, we performed a human study where we manually annotated question "topics" for 40 images (a total of 400 questions), chosen randomly from the val set. The topic annotations were based on human judgement with a consensus of 4 annotators, with topics such as: asking about a particular object ("What is the man doing?"), the scene ("Is it outdoors or indoors?"), the weather ("Is the weather sunny?"), the image ("Is it a color image?"), and exploration ("Is there anything else?"). We performed similar topic annotation for questions from VQA for the same set of 40 images, and compared topic continuity in questions. Across 10 rounds, VisDial questions have 4.55 ± 0.17 topics on average, confirming that these are not 10 independent questions. Recall that VisDial has 10 questions per image as opposed to 3 for VQA. Therefore, for a fair comparison, we compute the average number of topics in VisDial over all "sliding windows" of 3 successive questions. For 500 bootstrap samples of batch size 40, VisDial has 2.14 ± 0.05 topics while VQA has 2.53 ± 0.09. The lower mean number of topics suggests there is more continuity in VisDial because questions do not change topics as often.

Transition Probabilities over Topics. We can take this analysis a step further by computing topic transition probabilities over topics as follows. For a given sequential dialog exchange, we now count the number of topic transitions between consecutive QA pairs, normalized by the total number of possible transitions between rounds (9 for VisDial and 2 for VQA). We compute this "topic transition probability" (how likely are two successive QA pairs to be about two different topics) for VisDial and VQA in two different settings – (1) in-order and (2) with a permuted sequence of QAs. Note that if VisDial were simply a collection of 10 independent QAs as opposed to a dialog, we would expect the topic transition probabilities to be similar for in-order and permuted variants. However, we find that for 1000 permutations of 40 topic-annotated image-dialogs, in-order-VisDial has an average topic transition probability of 0.61, while permuted-VisDial has 0.76 ± 0.02. In contrast, VQA has a topic transition probability of 0.80 for in-order vs. 0.83 ± 0.02 for permuted QAs.

There are two key observations: (1) In-order transition probability is lower for VisDial than VQA (i.e. topic transition is less likely in VisDial), and (2) Permuting the order of questions results in a larger increase for VisDial, around 0.15, compared to a mere 0.03 in case of VQA (i.e. in-order-VQA and permuted-VQA behave significantly more similarly than in-order-VisDial and permuted-VisDial).
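The transition probability itself is a simple count over consecutive rounds. A minimal sketch, assuming per-round topic labels like those produced by the annotation study:

```python
import random

def topic_transition_probability(dialog_topics):
    """dialog_topics: list of per-dialog topic-label sequences (one label
    per QA round). Returns the fraction of consecutive-round pairs whose
    topics differ."""
    transitions = total = 0
    for topics in dialog_topics:
        for a, b in zip(topics, topics[1:]):
            transitions += (a != b)
            total += 1
    return transitions / total

def permuted(dialog_topics, seed=0):
    """Randomly reorder each dialog's rounds, as in the permuted setting."""
    rng = random.Random(seed)
    return [rng.sample(t, len(t)) for t in dialog_topics]
```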
Both of these observations establish that there is smoothness in the temporal order of topics in VisDial, which is indicative of the narrative structure of a dialog, rather than independent question-answers.
# A.4. VisDial has the statistics of an NLP dialog dataset
In this analysis, our goal is to measure whether VisDial behaves like a dialog dataset. In particular, we compare VisDial, VQA, and the Cornell Movie-Dialogs Corpus [11]. The Cornell Movie-Dialogs corpus is a text-only dataset extracted from pairwise interactions between characters from approximately 617 movies, and is widely used as a standard dialog corpus in the natural language processing (NLP) and dialog communities.
One popular evaluation criterion used in the dialog-systems research community is the perplexity of language models trained on dialog datasets – the lower the perplexity of a model, the better it has learned the structure in the dialog dataset.
For the purpose of our analysis, we pick the popular sequence-to-sequence (Seq2Seq) language model [24] and use the perplexity of this model trained on different datasets as a measure of temporal structure in a dataset.
As is standard in the dialog literature, we train the Seq2Seq model to predict the probability of utterance U_t given the previous utterance U_{t−1}, i.e. P(U_t | U_{t−1}), on the Cornell corpus. For VisDial and VQA, we train the Seq2Seq model to predict the probability of a question Q_t given the previous question-answer pair, i.e. P(Q_t | (Q_{t−1}, A_{t−1})). For each dataset, we used its train and val splits for training and hyperparameter tuning respectively, and report results on test. At test time, we only use conversations of length 10 from the Cornell corpus for a fair comparison to VisDial (which has 10 rounds of QA).
Dataset        | Perplexity per token (Orig) | Perplexity per token (Shuffled) | Classification
VQA            | 7.83                        | 8.16 ± 0.02                     | 52.8 ± 0.9
Cornell (10)   | 82.31                       | 85.31 ± 1.51                    | 61.0 ± 0.6
VisDial (Ours) | 6.61                        | 7.28 ± 0.01                     | 73.3 ± 0.4
Table 3: Comparison of sequences in VisDial, VQA, and the Cornell Movie-Dialogs corpus in their original ordering vs. permuted "shuffled" ordering. Lower is better for perplexity while higher is better for classification accuracy. Left: the absolute increase in perplexity from natural to permuted ordering is highest in the Cornell corpus (3.0), followed by VisDial with 0.7, and VQA at 0.35, which is indicative of the degree of linguistic structure in the sequences in these datasets. Right: the accuracy of a simple threshold-based classifier trained to differentiate between the original sequences and their permuted or shuffled versions. A higher classification rate indicates the existence of strong temporal continuity in the conversation, thus making the ordering important. We can see that the classifier on VisDial achieves the highest accuracy (73.3%), followed by Cornell (61.0%). Note that this is a binary classification task with the prior probability of each class by design being equal, thus chance performance is 50%. The classifier on VQA performs close to chance.
For all three datasets, we created 100 permuted versions of test, where either QA pairs or utterances are randomly shuffled to disturb their natural order. This allows us to compare datasets in their natural ordering w.r.t. permuted orderings. Our hypothesis is that since dialog datasets have linguistic structure in the sequence of QAs or utterances they contain, this structure will be significantly affected by permuting the sequence. In contrast, a collection of independent question-answers (as in VQA) will not be significantly affected by a permutation. Tab. 3 compares the original, unshuffled test with the shuffled test sets on two metrics:
Perplexity: We compute the standard metric of perplexity per token, i.e. the exponent of the normalized negative log-probability of a sequence (normalized by the length of the sequence). Tab. 3 shows these perplexities for the original unshuffled test and permuted test sequences. We notice a few trends.
First, we note that the absolute perplexity values are higher for the Cornell corpus than for the QA datasets. We hypothesize that this is due to the broad, unrestricted dialog generation task in the Cornell corpus, which is more difficult than question prediction about images, a comparatively more restricted task. Second, in all three datasets, the shuffled test has statistically significantly higher perplexity than the original test, which indicates that shuffling does indeed break the linguistic structure in the sequences.
Third, the absolute increase in perplexity from natural to permuted ordering is highest in the Cornell corpus (3.0), followed by our VisDial with 0.7, and VQA at 0.35, which is indicative of the degree of linguistic structure in the sequences in these datasets. Finally, the relative increases in perplexity are 3.64% in Cornell, 10.13% in VisDial, and 4.21% in VQA – VisDial suffers the highest relative increase in perplexity due to shuffling, indicating the existence of temporal continuity that gets disrupted.
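As a minimal illustration of the perplexity metric, the following sketch assumes per-sequence negative log-probabilities are already available from the trained Seq2Seq model:

```python
import math

def perplexity_per_token(neg_log_probs, lengths):
    """Perplexity per token over a test set.

    `neg_log_probs[i]` is the summed negative log-probability (base e)
    the model assigns to sequence i; `lengths[i]` is its token count.
    Returns exp(total NLL / total tokens), i.e. the normalized exponent
    described above.
    """
    total_nll = sum(neg_log_probs)
    total_tokens = sum(lengths)
    return math.exp(total_nll / total_tokens)
```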
Classification: As our second metric to compare datasets in their natural vs. permuted order, we test whether we can reliably classify a given sequence as natural or permuted.
Our classifier is a simple threshold on the perplexity of a sequence. Specifically, given a pair of sequences, we compute the perplexity of both from our Seq2Seq model, and predict that the one with higher perplexity is the sequence in permuted ordering, and the sequence with lower perplexity is the one in natural ordering. The accuracy of this simple classifier indicates how easy or difficult it is to tell the difference between natural and permuted sequences. A higher classification rate indicates the existence of temporal continuity in the conversation, thus making the ordering important.
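A minimal sketch of this classifier, assuming paired per-sequence perplexities from the same Seq2Seq model; the tie-handling convention is our own choice:

```python
def pairwise_order_accuracy(ppl_original, ppl_shuffled):
    """Accuracy of predicting which member of each (original, shuffled)
    pair is the permuted one: the sequence with higher perplexity.
    Ties count as half correct; chance performance is 50%.
    """
    correct = 0.0
    for p_orig, p_shuf in zip(ppl_original, ppl_shuffled):
        if p_shuf > p_orig:
            correct += 1.0
        elif p_shuf == p_orig:
            correct += 0.5
    return correct / len(ppl_original)
```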
Tab. 3 shows the classification accuracies achieved on all datasets. We can see that the classifier on VisDial achieves the highest accuracy (73.3%), followed by Cornell (61.0%). Note that this is a binary classification task with the prior probability of each class by design being equal, thus chance performance is 50%. The classifiers on VisDial and Cornell both significantly outperform chance. On the other hand, the classifier on VQA is near chance (52.8%), indicating a lack of general temporal continuity.
To summarize this analysis, our experiments show that VisDial is significantly more dialog-like than VQA, and behaves more like a standard dialog dataset, the Cornell Movie-Dialogs corpus.
# A.5. VisDial eliminates visual priming bias in VQA
One key difference between VisDial and previous image question answering datasets (VQA [6], Visual 7W [70], Baidu mQA [17]) is the lack of a "visual priming bias" in VisDial. Specifically, in all previous datasets, subjects saw an image while asking questions about it. As described in [69], this leads to a particular bias in the questions – people only ask "Is there a clocktower in the picture?" of pictures actually containing clock towers. This allows language-only models to perform remarkably well on VQA and results in an inflated sense of progress [69]. As one particularly perverse example – for questions in the VQA dataset starting with "Do you see a ...", blindly answering "yes" without reading the rest of the question or looking at the associated image results in an average VQA accuracy of 87%! In VisDial, questioners do not see the image. As a result,
this bias is reduced. This lack of visual priming bias (i.e. not being able to see the image while asking questions), together with holding a dialog with another person while asking questions, results in the following two unique features in VisDial.
Figure 9: Distribution of answers in VisDial by their first four words. The ordering of the words starts towards the center and radiates outwards. The arc length is proportional to the number of questions containing the word. White areas are words with contributions too small to show.
Uncertainty in Answers in VisDial. Since the answers in VisDial are longer strings, we can visualize their distribution based on the starting few words (Fig. 9). An interesting category of answers emerges – "I think so", "I can't tell", or "I can't see" – expressing doubt, uncertainty, or lack of information. This is a consequence of the questioner not being able to see the image – they are asking contextually relevant questions, but not all questions may be answerable with certainty from that image. We believe this is rich data for building more human-like AI that refuses to answer questions it doesn't have enough information to answer. See [48] for a related, but complementary, effort on question relevance in VQA.
Binary Questions vs. Binary Answers in VisDial. In VQA, binary questions are simply those with "yes", "no", "maybe" as answers [6]. In VisDial, we must distinguish between binary questions and binary answers. Binary questions are those starting with "Do", "Did", "Have", "Has", "Is", "Are", "Was", "Were", "Can", or "Could". Answers to such questions can (1) contain only "yes" or "no", (2) begin with "yes" or "no" and contain additional information or clarification (Q: "Are there any animals in the image?", A: "yes, 2 cats and a dog"), (3) involve ambiguity ("It's hard to see", "Maybe"), or (4) answer the question without explicitly saying "yes" or "no" (Q: "Is there any type of design or pattern on the cloth?", A: "There are circles and lines on the cloth"). We call answers that contain "yes" or "no" binary answers – 149,367 and 76,346 answers in subsets (1) and (2) from above, respectively. Binary answers in VQA are biased towards "yes" [6, 69] – 61.40% of yes/no answers are "yes". In VisDial, the trend is reversed: only 46.96% of all yes/no responses are "yes". This is understandable since workers did not see the image, and were more likely to end up with negative responses.
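One simple operationalization of these definitions follows; the token rules are the ones stated above, while the edge-case handling is our own choice:

```python
BINARY_STARTS = ('do', 'did', 'have', 'has', 'is', 'are',
                 'was', 'were', 'can', 'could')

def is_binary_question(question):
    """True if the lowercased question starts with a binary-question word."""
    tokens = question.strip().lower().split()
    return bool(tokens) and tokens[0] in BINARY_STARTS

def is_binary_answer(answer):
    """True if the answer contains an explicit 'yes' or 'no' token
    (subsets (1) and (2) above)."""
    tokens = answer.strip().lower().split()
    return 'yes' in tokens or 'no' in tokens

print(is_binary_question("Are there any animals in the image?"))  # True
print(is_binary_answer("yes, 2 cats and a dog"))                  # True
```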
# B. Qualitative Examples from VisDial
Fig. 10 shows random samples of dialogs from the VisDial dataset.
# C. Human-Machine Comparison
Group   | Model      | MRR   | R@1   | R@5   | Mean
Human   | Human-Q    | 0.441 | 25.10 | 67.37 | 4.19
        | Human-QH   | 0.485 | 30.31 | 70.53 | 3.91
        | Human-QI   | 0.619 | 46.12 | 82.54 | 2.92
        | Human-QIH  | 0.635 | 48.03 | 83.76 | 2.83
Machine | HREA-QIH-G | 0.477 | 31.64 | 61.61 | 4.42
        | MN-QIH-G   | 0.481 | 32.16 | 61.94 | 4.47
        | MN-QIH-D   | 0.553 | 36.86 | 69.39 | 3.48
Table 4: Human-machine performance comparison on VisDial v0.5, measured by mean reciprocal rank (MRR), recall@k for k = {1, 5} and mean rank. Note that higher is better for MRR and recall@k, while lower is better for mean rank.
We conducted studies on AMT to quantitatively evaluate human performance on this task for all combinations of {with image, without image} × {with history, without history} on 100 random images at each of the 10 rounds. Specifically, in each setting, we show human subjects a jumbled list of 10 candidate answers for a question – the top-9 predicted responses from our "LF-QIH-D" model and the 1 ground-truth answer – and ask them to rank the responses. Each task was done by 3 human subjects.
Results of this study are shown in the top half of Tab. 4. We find that without access to the image, humans perform better when they have access to dialog history – compare the Human-QH row to Human-Q (R@1 of 30.31 vs. 25.10). As perhaps expected, this gap narrows when humans have access to the image – compare Human-QIH to Human-QI (R@1 of 48.03 vs. 46.12). Note that these numbers are not directly comparable to machine performance reported in the main paper because models are tasked with ranking 100 responses, while humans are asked to rank 10 candidates. This is because the task of
ranking 100 candidate responses would be too cumbersome for humans.
To compute comparable human and machine performance, we evaluate our best discriminative (MN-QIH-D) and generative (HREA-QIH-G, MN-QIH-G)5 models on the same 10 options that were presented to humans. Note that in this setting, both humans and machines have R@10 = 1.0, since there are only 10 options.
The bottom half of Tab. 4 shows the results of this comparison. We can see that, as expected, humans with full information (i.e. Human-QIH) perform the best, with a large gap between human and machine performance (compare R@5: Human-QIH 83.76% vs. MN-QIH-D 69.39%). This gap is even larger when compared to generative models, which, unlike the discriminative models, are not actively trying to exploit the biases in the answer candidates (compare R@5: Human-QIH 83.76% vs. HREA-QIH-G 61.61%). Furthermore, we see that humans outperform the best machine even when not looking at the image, simply on the basis of the context provided by the history (compare R@5: Human-QH 70.53% vs. MN-QIH-D 69.39%). Perhaps as expected, with access to the image but not the history, humans are significantly better than the best machines (R@5: Human-QI 82.54% vs. MN-QIH-D 69.39%). With access to history humans perform even better.
From in-house human studies and worker feedback on AMT, we find that dialog history plays the following roles for humans: (1) it provides a context for the question and paints a picture of the scene, which helps eliminate certain answer choices (especially when the image is not available), (2) it gives cues about the answerer's response style, which helps identify the right answer among similar answer choices, and (3) it disambiguates amongst likely interpretations of the image (i.e., when objects are small or occluded), again helping identify the right answer among multiple plausible options.
# D. Interface
In this section, we show our interface to connect two Amazon Mechanical Turk workers live, which we used to collect our data.

Instructions. To ensure the quality of data, we provide detailed instructions on our interface, as shown in Fig. 11a. Since the workers do not know their roles before starting the study, we provide instructions for both questioner and answerer roles.

After pairing: Immediately after pairing two workers, we assign them the roles of questioner and answerer and display role-specific instructions, as shown in Fig. 11b.
5 We use both HREA-QIH-G and MN-QIH-G since they have similar accuracies.
Figure 10: Examples from VisDial. Six randomly sampled dialogs, panels (a)–(f); each shows an image caption followed by ten rounds of question-answer exchange between Person A (the questioner) and Person B (the answerer).
Observe that the questioner does not see the image while the answerer does have access to it. Both questioner and answerer see the caption for the image.
# E. Additional Analysis of VisDial
In this section, we present additional analyses characterizing our VisDial dataset.
Figure 11: Our live data-collection interface on Amazon Mechanical Turk. (a) Detailed instructions for workers. (b) The paired chat interface – left: what the questioner sees (the caption but not the image); right: what the answerer sees (the caption and the image).
# E.1. Question and Answer Lengths

Fig. 12 shows question lengths by type and round. The average length of a question of each type is consistent across rounds. Questions starting with "any" ("any people?", "any other fruits?", etc.) tend to be the shortest. Fig. 13 shows answer lengths by the type of question they were said in response to, and by round. In contrast to questions, there is significant variance in answer lengths. Answers to binary questions ("Any people?", "Can you see the dog?", etc.) tend to be short, while answers to "how" and "what" questions tend to be longer and more explanatory. Across question types, answers tend to be the longest in the middle of conversations.

# E.2. Question Types

Fig. 14 shows round-wise coverage by question type. We see that as conversations progress, "is", "what" and "how" questions reduce while "can", "do", "does", and "any" questions occur more often. Questions starting with "Is" are the most popular in the dataset.

# F. Performance on VisDial v0.5

Tab. 5 shows the results for our proposed models and baselines on VisDial v0.5. A few key takeaways: First, as expected, all learning-based models significantly outperform non-learning baselines. Second, all discriminative models significantly outperform generative models, which, as we discussed, is expected since discriminative models can tune to the biases in the answer options. This improvement comes with the significant limitation of not being able to actually generate responses, and we recommend the two decoders be viewed as separate use cases. Third, our best generative and discriminative models are MN-QIH-G with 0.44 MRR and MN-QIH-D with 0.53 MRR, which outperform a suite of models and sophisticated baselines. Fourth, we observe that models with H perform better than Q-only models, highlighting the importance of history in VisDial. Fifth, models looking at I outperform both the blind models (Q, QH) by at least 2% on recall@1 with both decoders. Finally, models that use both H and I have the best performance.
Figure 12: Question lengths by type and round. The average length of a question of each type is fairly consistent across rounds. Questions starting with "any" ("any people?", "any other fruits?", etc.) tend to be the shortest.
Figure 13: Answer lengths by question type and round. Across question types, average response length tends to be longest in the middle of the conversation.
Dialog-level evaluation. Using R@5 to define round-level "success", our best discriminative model MN-QIH-D gets 7.01 rounds out of 10 correct, while generative MN-QIH-G gets 5.37. Further, the mean first-failure-round (under R@5) for MN-QIH-D is 3.23, and 2.39 for MN-QIH-G. Fig. 16a and Fig. 16b show plots for all values of k in R@k.
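A minimal sketch of these dialog-level metrics, assuming per-round ground-truth ranks are available; the convention for dialogs with no failure is our assumption, as the text does not specify one:

```python
def dialog_level_metrics(ranks_per_dialog, k=5):
    """`ranks_per_dialog[d][t]` is the rank the model assigns the
    ground-truth answer at round t of dialog d; a round is a 'success'
    if its rank <= k. Returns (mean number of correct rounds,
    mean first-failure round). A dialog with no failure contributes
    `num_rounds + 1` as its first-failure round (our convention).
    """
    correct_counts, first_failures = [], []
    for ranks in ranks_per_dialog:
        successes = [r <= k for r in ranks]
        correct_counts.append(sum(successes))
        fail_round = next((t + 1 for t, ok in enumerate(successes) if not ok),
                          len(ranks) + 1)
        first_failures.append(fail_round)
    n = len(ranks_per_dialog)
    return sum(correct_counts) / n, sum(first_failures) / n
```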
Figure 14: Percentage coverage of question types per round. As conversations progress, "Is", "What" and "How" questions reduce while "Can", "Do", "Does", and "Any" questions occur more often. Questions starting with "Is" are the most popular in the dataset.
# G. Experimental Details
In this section, we describe details of our models, data preprocessing, training procedure, and hyperparameter selection.
# G.1. Models
Late Fusion (LF) Encoder. We encode the image with a VGG-16 CNN, the question and concatenated history with separate LSTMs, and concatenate the three representations. This is followed by a fully-connected layer and tanh non-linearity to a 512-d vector, which is used to decode the response. Fig. 17a shows the model architecture for our LF encoder.
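A minimal PyTorch-style sketch of this encoder; dimensions follow the hyperparameters in Sec. G.2, while the assumption that the image is supplied as a precomputed 4096-d VGG-16 feature is ours:

```python
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    """Late Fusion: encode question and concatenated history with
    separate 2-layer, 512-d LSTMs, concatenate with the image feature,
    and project to a 512-d joint vector via a linear layer and tanh."""
    def __init__(self, vocab_size, embed_dim=300, hidden=512, img_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # shared embeddings
        self.q_lstm = nn.LSTM(embed_dim, hidden, num_layers=2, batch_first=True)
        self.h_lstm = nn.LSTM(embed_dim, hidden, num_layers=2, batch_first=True)
        self.fuse = nn.Linear(img_dim + 2 * hidden, hidden)

    def forward(self, img_feat, question, history):
        # question: (B, Tq), history: (B, Th) word-index tensors
        _, (q_state, _) = self.q_lstm(self.embed(question))
        _, (h_state, _) = self.h_lstm(self.embed(history))
        joint = torch.cat([img_feat, q_state[-1], h_state[-1]], dim=1)
        return torch.tanh(self.fuse(joint))  # 512-d vector fed to the decoder
```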
Hierarchical Recurrent Encoder (HRE). In this encoder, the image representation from the VGG-16 CNN is early-fused with the question. Specifically, the image representation is concatenated with every question word as it is fed to an LSTM. Each QA pair in the dialog history is independently encoded by another LSTM with shared weights. The image-question representation, computed for every round from 1 through t, is concatenated with the history representation from the previous round and constitutes a sequence of question-history vectors. These vectors are fed as input to a dialog-level LSTM, whose output state at t is used to decode the response to Q_t. Fig. 17b shows the model architecture for our HRE.

Memory Network. The image is encoded with a VGG-16 CNN and the question with an LSTM. We concatenate the representations and follow this with a fully-connected layer and tanh non-linearity to get a "query vector". Each caption/QA pair (or "fact") in the dialog history is encoded independently by an LSTM with shared weights. The query vector is then used to compute attention over the t facts by inner product. A convex combination of the attended history vectors is passed through a fully-connected layer and tanh non-linearity, and added back to the query vector. This combined representation is then passed through another fully-connected layer and tanh non-linearity and then used to decode the response. The model architecture is shown in Fig. 17c. Fig. 18 shows some examples of attention over history facts from our MN encoder. We see that the model learns to attend to facts relevant to the question being asked. For example, when asked "What color are kites?", the model attends to "A lot of people stand around flying kites in a park." For "Is anyone on bus?", it attends to "A large yellow bus parked in some grass." Note that these are selected examples, and these attention weights are not always interpretable.
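A minimal PyTorch-style sketch of the Memory Network attention step described above; the 512-d size is as stated, everything else is illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAttention(nn.Module):
    """Attend a fused image+question query over t history 'facts' by
    inner product; transform the attended history and add it back to
    the query, then apply a final linear layer and tanh."""
    def __init__(self, dim=512):
        super().__init__()
        self.hist_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, query, facts):
        # query: (B, dim); facts: (B, t, dim) LSTM encodings of caption/QA pairs
        attn = F.softmax(torch.bmm(facts, query.unsqueeze(2)).squeeze(2), dim=1)
        attended = torch.bmm(attn.unsqueeze(1), facts).squeeze(1)  # (B, dim)
        fused = query + torch.tanh(self.hist_proj(attended))
        return torch.tanh(self.out_proj(fused))  # passed to the decoder
```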
Figure 15: Most frequent answer responses, excluding "yes"/"no".
Figure 16: Dialog-level evaluation: (a) mean number of correct rounds and (b) mean round of first failure, plotted for all values of k in R@k.
# G.2. Training
Splits. Recall that VisDial v0.9 contained 83k dialogs on COCO-train and 40k on COCO-val images. We split the 83k into 80k for training, 3k for validation, and use the 40k as test.
Preprocessing. We spell-correct VisDial data using the Bing API [41]. Following VQA, we lowercase all questions and answers, convert digits to words, and remove contractions before tokenizing with the Python NLTK [1]. We then construct a dictionary of words that appear at least five times in the train set, giving us a vocabulary of around 7.5k.
Hyperparameters. All our models are implemented in Torch [2]. Model hyperparameters are chosen by early stopping on val based on the Mean Reciprocal Rank (MRR) metric. All LSTMs are 2-layered with 512-dim hidden states. We learn 300-dim embeddings for words and images. These word embeddings are shared across the question, history, and decoder LSTMs. We use Adam [28] with a learning rate of 10^-3 for all models. Gradients at each iteration are clamped to [−5, 5] to avoid explosion. Our code, architectures, and trained models are available at https://visualdialog.org.
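These settings can be sketched as follows; `model` is assumed to be one of the encoder-decoder models above, and `clip_grad_value_` performs the per-element clamping:

```python
import torch

def make_training_step(model, lr=1e-3, clip=5.0):
    """Returns a step function implementing the settings above:
    Adam with learning rate 1e-3 and per-element gradient clamping
    to [-clip, clip] before each update."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    def step(loss):
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=clip)
        optimizer.step()

    return step
```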
Figure 17: Encoder architectures: (a) Late Fusion Encoder, (b) Hierarchical Recurrent Encoder, (c) Memory Network Encoder.
Group          | Model         | MRR   | R@1   | R@5   | R@10  | Mean
Baseline       | Answer prior  | 0.311 | 19.85 | 39.14 | 44.28 | 31.56
               | NN-Q          | 0.392 | 30.54 | 46.99 | 49.98 | 30.88
               | NN-QI         | 0.385 | 29.71 | 46.57 | 49.86 | 30.90
Generative     | LF-Q-G        | 0.403 | 29.74 | 50.10 | 56.32 | 24.06
               | LF-QH-G       | 0.425 | 32.49 | 51.56 | 57.80 | 23.11
               | LF-QI-G       | 0.437 | 34.06 | 52.50 | 58.89 | 22.31
               | HRE-QH-G      | 0.430 | 32.84 | 52.36 | 58.64 | 22.59
               | HRE-QIH-G     | 0.442 | 34.37 | 53.40 | 59.74 | 21.75
               | HREA-QIH-G    | 0.442 | 34.47 | 53.43 | 59.73 | 21.83
Discriminative | HRE-QIH-D     | 0.502 | 36.26 | 65.67 | 77.05 | 7.79
               | HREA-QIH-D    | 0.508 | 36.76 | 66.54 | 77.75 | 7.59
               | SAN1-QI-D     | 0.506 | 36.21 | 67.08 | 78.16 | 7.74
               | HieCoAtt-QI-D | 0.509 | 35.54 | 66.79 | 77.94 | 7.68
Human          | Human-Q       | 0.441 | 25.10 | 67.37 | -     | 4.19
               | Human-QH      | 0.485 | 30.31 | 70.53 | -     | 3.91
               | Human-QI      | 0.619 | 46.12 | 82.54 | -     | 2.92
               | Human-QIH     | 0.635 | 48.03 | 83.76 | -     | 2.83
Table 5: Performance of methods on VisDial v0.5, measured by mean reciprocal rank (MRR), recall@k for k = {1, 5, 10} and mean rank. Note that higher is better for MRR and recall@k, while lower is better for mean rank. Memory Network has the best performance in both discriminative and generative settings.
# References
[1] NLTK. http://www.nltk.org/. 18
[2] Torch. http://torch.ch/. 9, 18
[3] A. Agrawal, D. Batra, and D. Parikh. Analyzing the Behavior of Visual Question Answering Models. In EMNLP, 2016. 3, 4
[4] H. Agrawal, A. Chandrasekaran, D. Batra, D. Parikh, and M. Bansal. Sort story: Sorting jumbled images and captions into stories. In EMNLP, 2016. 3
[5] Amazon. Alexa. http://alexa.amazon.com/. 6
[6] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual Question Answering. In ICCV, 2015. 1, 2, 3, 4, 5, 10, 11, 13, 14
[7] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. C. Miller, R. Miller, A. Tatarowicz, B. White, S. White, and T. Yeh. VizWiz: Nearly Real-time Answers to Visual Questions. In UIST, 2010. 1
[8] A. Bordes, N. Usunier, S. Chopra, and J. Weston. Large-scale Simple Question Answering with Memory Networks. arXiv preprint arXiv:1506.02075, 2015. 3
[9] A. Bordes and J. Weston. Learning End-to-End Goal-Oriented Dialog. arXiv preprint arXiv:1605.07683, 2016. 3, 6, 8
[10] G. Christie, A. Laddha, A. Agrawal, S. Antol, Y. Goyal, K. Kochersberger, and D. Batra. Resolving language and vision ambiguities together: Joint segmentation and prepositional attachment resolution in captioned scenes. In EMNLP, 2016. 3
[11] C. Danescu-Niculescu-Mizil and L. Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011, 2011. 12
[12] A. Das, H. Agrawal, C. L. Zitnick, D. Parikh, and D. Batra. Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? In EMNLP, 2016. 3
[13] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. C. Courville. GuessWhat?! Visual object discovery through multi-modal dialogue. In CVPR, 2017. 3
[14] J. Dodge, A. Gane, X. Zhang, A. Bordes, S. Chopra, A. Miller, A. Szlam, and J. Weston. Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems. In ICLR, 2016. 2, 3
[15] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR, 2015. 3
[16] H. Fang, S. Gupta, F. N. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig. From Captions to Visual Concepts and Back. In CVPR, 2015. 3
[17] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering. In NIPS, 2015. 3, 4, 11, 13
Figure 18: Selected examples of attention over history facts from our Memory Network encoder. The intensity of color in each row indicates the strength of attention placed on that round by the model.
[18] D. Geman, S. Geman, N. Hallonquist, and L. Younes. A Visual Turing Test for Computer Vision Systems. In PNAS, 2014. 3
[19] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017. 3, 4
[20] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016. 1
[21] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NIPS, 2015. 1, 3
[22] R. Hu, M. Rohrbach, and T. Darrell. Segmentation from natural language expressions. In ECCV, 2016. 3
[23] T.-H. Huang, F. Ferraro, N. Mostafazadeh, I. Misra, A. Agrawal, J. Devlin, R. Girshick, X. He, P. Kohli, D. Batra, L. Zitnick, D. Parikh, L. Vanderwende, M. Galley, and M. Mitchell. Visual storytelling. In NAACL HLT, 2016. 3
[24] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to Sequence Learning with Neural Networks. In NIPS, 2014. 12
[25] A. Jabri, A. Joulin, and L. van der Maaten. Revisiting visual question answering baselines. In ECCV, 2016. 7
[26] A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukács, M. Ganea, P. Young, et al. Smart Reply: Automated Response Suggestion for Email. In KDD, 2016. 3
[27] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015. 3
[28] D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015. 18
[29] C. Kong, D. Lin, M. Bansal, R. Urtasun, and S. Fidler. What are you talking about? text-to-image coreference. In CVPR, 2014. 3
[30] O. Lemon, K. Georgila, J. Henderson, and M. Stuttle. An ISU dialogue system exhibiting reinforcement learning of dialogue policies: generic slot-filling in the TALK in-car system. In EACL, 2006. 2
[31] J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. Jurafsky. Deep Reinforcement Learning for Dialogue Generation. In EMNLP, 2016. 3
[32] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common Objects in Context. In ECCV, 2014. 2, 3
[33] C.-W. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. In EMNLP, 2016. 3, 6
[34] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single Shot MultiBox Detector. In ECCV, 2016. 1
[35] R. Lowe, N. Pow, I. Serban, and J. Pineau. The Ubuntu Dia- logue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. In SIGDIAL, 2015. 3
[36] Deeper LSTM and Normalized CNN Visual Question Answering model. https://github.com/VT-vision-lab/VQA_LSTM_CNN, 2015. 8
[37] J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical Question-Image Co-Attention for Visual Question Answering. In NIPS, 2016. 3, 8
[38] M. Malinowski and M. Fritz. A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input. In NIPS, 2014. 3, 11
[39] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, 2015. 1, 3
[40] H. Mei, M. Bansal, and M. R. Walter. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In AAAI, 2016. 2
[41] Microsoft. Bing Spell Check API. https://www.microsoft.com/cognitive-services/en-us/bing-spell-check-api/documentation. 18
[42] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 02 2015. 1
[43] N. Mostafazadeh, C. Brockett, B. Dolan, M. Galley, J. Gao, G. P. Spithourakis, and L. Vanderwende. Image-Grounded Conversations: Multimodal Context for Natural Question and Response Generation. arXiv preprint arXiv:1701.08251, 2017. 3
[44] T. Paek. Empirical methods for evaluating dialog systems. In Proceedings of the workshop on Evaluation for Language and Dialogue Systems-Volume 9, 2001. 2
[45] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In ICCV, 2015. 3
[46] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In EMNLP, 2016. 3
[47] V. Ramanathan, A. Joulin, P. Liang, and L. Fei-Fei. Linking people with "their" names using coreference resolution. In ECCV, 2014. 3
[48] A. Ray, G. Christie, M. Bansal, D. Batra, and D. Parikh. Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions. In EMNLP, 2016. 5, 13
[49] M. Ren, R. Kiros, and R. Zemel. Exploring Models and Data for Image Question Answering. In NIPS, 2015. 1, 3, 11
[50] A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele. Grounding of textual phrases in images by reconstruction. In ECCV, 2016. 3
[51] A. Rohrbach, M. Rohrbach, N. Tandon, and B. Schiele. A dataset for movie description. In CVPR, 2015. 3
[52] I. V. Serban, A. García-Durán, Ç. Gülçehre, S. Ahn, S. Chandar, A. C. Courville, and Y. Bengio. Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus. In ACL, 2016. 3
[53] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In AAAI, 2016. 3
[54] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. arXiv preprint arXiv:1605.06069, 2016. 3, 7
[55] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. 1
[56] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 7
[57] M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Urtasun, and S. Fidler. MovieQA: Understanding Stories in Movies through Question-Answering. In CVPR, 2016. 1
[58] K. Tu, M. Meng, M. W. Lee, T. E. Choe, and S. C. Zhu. Joint Video and Text Parsing for Understanding Events and Answering Queries. IEEE MultiMedia, 2014. 1
[59] S. Venugopalan, M. Rohrbach, J. Donahue, R. J. Mooney, T. Darrell, and K. Saenko. Sequence to Sequence - Video to Text. In ICCV, 2015. 3
[60] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko. Translating Videos to Natural Language Using Deep Recurrent Neural Networks. In NAACL HLT, 2015. 3
[61] O. Vinyals and Q. Le. A Neural Conversational Model. arXiv preprint arXiv:1506.05869, 2015. 3
[62] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. 3
[63] L. Wang, S. Guo, W. Huang, Y. Xiong, and Y. Qiao. Knowledge Guided Disambiguation for Large-Scale Scene Classification with Multi-Resolution CNNs. arXiv preprint arXiv:1610.01119, 2016. 1
[64] J. Weizenbaum. ELIZA. http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm. 2, 3
[65] J. Weston, A. Bordes, S. Chopra, and T. Mikolov. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. In ICLR, 2016. 1, 3
[66] S. Wu, H. Pique, and J. Wieland. Using Artificial Intelligence to Help Blind People See Facebook. http://newsroom.fb.com/news/2016/04/using-artificial-intelligence-to-help-blind-people-see-facebook/, 2016. 1
[67] Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola. Stacked Attention Networks for Image Question Answering. In CVPR, 2016. 8
[68] L. Yu, E. Park, A. C. Berg, and T. L. Berg. Visual Madlibs: Fill in the blank Image Generation and Question Answering. In ICCV, 2015. 11
[69] P. Zhang, Y. Goyal, D. Summers-Stay, D. Batra, and D. Parikh. Yin and Yang: Balancing and Answering Binary Visual Questions. In CVPR, 2016. 3, 4, 5, 13, 14
[70] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei. Visual7W: Grounded Question Answering in Images. In CVPR, 2016. 4, 11, 13
[71] C. L. Zitnick, A. Agrawal, S. Antol, M. Mitchell, D. Batra, and D. Parikh. Measuring machine intelligence through visual question answering. AI Magazine, 2016. 1
"id": "1605.06069"
} |
Published as a conference paper at ICLR 2017
# PRUNING CONVOLUTIONAL NEURAL NETWORKS FOR RESOURCE EFFICIENT INFERENCE
Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz NVIDIA {pmolchanov, styree, tkarras, taila, jkautz}@nvidia.com
# ABSTRACT
We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than 10× theoretical (5× practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.
# 1 INTRODUCTION
Convolutional neural networks (CNN) are used extensively in computer vision applications, including object classification and localization, pedestrian and car detection, and video classification. Many problems like these focus on specialized domains for which there are only small amounts of carefully curated training data. In these cases, accuracy may be improved by fine-tuning an existing deep network previously trained on a much larger labeled vision dataset, such as images from ImageNet (Russakovsky et al., 2015) or videos from Sports-1M (Karpathy et al., 2014). While transfer learning of this form supports state of the art accuracy, inference is expensive due to the time, power, and memory demanded by the heavyweight architecture of the fine-tuned network.
While modern deep CNNs are composed of a variety of layer types, runtime during prediction is dominated by the evaluation of convolutional layers. With the goal of speeding up inference, we prune entire feature maps so the resulting networks may be run efficiently even on embedded devices. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network.
Neural network pruning was pioneered in the early development of neural networks (Reed, 1993). Optimal Brain Damage (LeCun et al., 1990) and Optimal Brain Surgeon (Hassibi & Stork, 1993) leverage a second-order Taylor expansion to select parameters for deletion, using pruning as regularization to improve training and generalization. This method requires computation of the Hessian matrix partially or completely, which adds memory and computation costs to standard fine-tuning.
In line with our work, Anwar et al. (2015) describe structured pruning in convolutional layers at the level of feature maps and kernels, as well as strided sparsity to prune with regularity within kernels. Pruning is accomplished by particle filtering wherein configurations are weighted by misclassification rate. The method demonstrates good results on small CNNs, but larger CNNs are not addressed.
Han et al. (2015) introduce a simpler approach by fine-tuning with a strong ℓ2 regularization term and dropping parameters with values below a predefined threshold. Such unstructured pruning is very effective for network compression, and this approach demonstrates good performance for intra-kernel pruning. But compression may not translate directly to faster inference since modern hardware
exploits regularities in computation for high throughput. So specialized hardware may be needed for efficient inference of a network with intra-kernel sparsity (Han et al., 2016). This approach also requires long fine-tuning times that may exceed the original network training by a factor of 3 or larger. Group-sparsity-based regularization of network parameters was proposed to penalize unimportant parameters (Wen et al., 2016; Zhou et al., 2016; Alvarez & Salzmann, 2016; Lebedev & Lempitsky, 2016). Regularization-based pruning techniques require per-layer sensitivity analysis, which adds extra computation. In contrast, our approach relies on global rescaling of criteria for all layers and does not require sensitivity estimation. Moreover, our approach is faster, as we directly prune unimportant parameters instead of waiting for their values to be made sufficiently small by optimization under regularization.
Other approaches include combining parameters with correlated weights (Srinivas & Babu, 2015), reducing precision (Gupta et al., 2015; Rastegari et al., 2016), or tensor decomposition (Kim et al., 2015). These approaches usually require a separate training procedure or significant fine-tuning, but could potentially be combined with our method for additional speedups.
# 2 METHOD
The proposed method for pruning consists of the following steps: 1) fine-tune the network until convergence on the target task; 2) alternate iterations of pruning and further fine-tuning; 3) stop pruning after reaching the target trade-off between accuracy and pruning objective, e.g. floating point operations (FLOPs) or memory utilization.
The procedure is simple, but its success hinges on employing the right pruning criterion. In this section, we introduce several efficient pruning criteria and related technical considerations.
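A minimal sketch of the three-step procedure; every helper named here is an assumption for illustration, not part of the paper's code:

```python
def prune(model, criterion, finetune, reached_target, maps_per_iter=1):
    """Alternating prune/fine-tune loop, assuming:
      criterion(model)      -> {feature_map_id: saliency} (lower = prune first)
      finetune(model)       -> runs backpropagation for a few epochs
      reached_target(model) -> True once the accuracy/FLOPs trade-off is met
      model.remove_feature_map(i) zeroes the pruning gate of feature map i
    """
    finetune(model)                      # step 1: fine-tune to convergence
    while not reached_target(model):     # step 3: stop at target trade-off
        saliency = criterion(model)
        for i in sorted(saliency, key=saliency.get)[:maps_per_iter]:
            model.remove_feature_map(i)  # step 2a: prune least important maps
        finetune(model)                  # step 2b: further fine-tuning
```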
Consider a set of training examples D = {X = {x_0, x_1, ..., x_N}, Y = {y_0, y_1, ..., y_N}}, where x and y represent an input and a target output, respectively. The network's parameters W = {(w_1^(1), b_1^(1)), (w_1^(2), b_1^(2)), ..., (w_L^(C_L), b_L^(C_L))} are optimized to minimize a cost value C(D|W). The most common choice for a cost function C(·) is a negative log-likelihood function. A cost function is selected independently of pruning and depends only on the task to be solved by the original network. In the case of transfer learning, we adapt a large network initialized with parameters W_0 pretrained on a related but distinct dataset.
Figure 1: Network pruning as a backward filter.
During pruning, we refine a subset of parameters which preserves the accuracy of the adapted network, C(D|W′) ≈ C(D|W). This corresponds to a combinatorial optimization:
min_{W′} |C(D|W′) − C(D|W)|   s.t.   ||W′||_0 ≤ B,   (1)
where the ℓ0 norm in ||W′||_0 bounds the number of non-zero parameters B in W′. Intuitively, if W′ = W we reach the global minimum of the error function; however, ||W′||_0 will also be at its maximum.
Finding a good subset of parameters while maintaining a cost value as close as possible to the original is a combinatorial problem. It would require 2^|W| evaluations of the cost function for a selected subset of data. For current networks this is impossible to compute: for example, VGG-16 has |W| = 4224 convolutional feature maps. While it is impossible to solve this optimization exactly for networks of any reasonable size, in this work we investigate a class of greedy methods.
Starting with a full set of parameters W, we iteratively identify and remove the least important parameters, as illustrated in Figure 1. By removing parameters at each iteration, we ensure the eventual satisfaction of the ℓ0 bound on W′.
1A "parameter" (w, b) ∈ W might represent an individual weight, a convolutional kernel, or the entire set of kernels that compute a feature map; our experiments operate at the level of feature maps.
Since we focus our analysis on pruning feature maps from convolutional layers, let us denote a set of image feature maps by z_ℓ ∈ R^{H_ℓ × W_ℓ × C_ℓ}, with spatial dimensionality H_ℓ × W_ℓ and C_ℓ individual maps (or channels).2 The feature maps can either be the input to the network, z_0, or the output from a convolutional layer, z_ℓ with ℓ ∈ [1, 2, ..., L]. Individual feature maps are denoted z_ℓ^(k) for k ∈ [1, 2, ..., C_ℓ]. A convolutional layer ℓ applies the convolution operation (∗) to a set of input feature maps z_{ℓ−1} with kernels parameterized by w_ℓ^(k) ∈ R^{C_{ℓ−1} × p × p}:
⬠RO XPxp, wi" D4. o®),
2) = BR (0 1% wi" D4. o®), (2)
where z_ℓ^(k) ∈ R^{H_ℓ × W_ℓ} is the result of convolving each of the C_{ℓ−1} kernels of size p × p with its respective input feature map and adding bias b_ℓ^(k), and R(·) denotes a nonlinear activation. We introduce a pruning gate g_ℓ ∈ {0, 1}^{C_ℓ}, an external switch which determines if a particular feature map is included or pruned during feed-forward propagation, such that when g is vectorized: W′ = gW.
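A minimal PyTorch sketch of Eq. (2) with the pruning gate, assuming R(·) is a ReLU; zeroing an entry of the gate prunes the corresponding feature map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConv2d(nn.Module):
    """Convolution whose output feature map k is multiplied by a binary
    gate g[k]; setting g[k] = 0 prunes that map (and, implicitly, the
    kernels that compute it)."""
    def __init__(self, in_ch, out_ch, p=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=p, padding=p // 2)
        self.register_buffer('gate', torch.ones(out_ch))  # g in {0,1}^C

    def forward(self, z):
        z = F.relu(self.conv(z))                 # R(z_{l-1} * w + b)
        return z * self.gate.view(1, -1, 1, 1)   # broadcast gate over H, W
```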
# 2.1 ORACLE PRUNING
Minimizing the difference in accuracy between the full and pruned models depends on the criterion for identifying the "least important" parameters, called saliency, at each step. The best criterion would be an exact empirical evaluation of each parameter, which we denote the oracle criterion, accomplished by ablating each non-zero parameter w ∈ W′ in turn and recording the cost's difference.
We distinguish two ways of using this oracle estimation of importance: 1) oracle-loss quantifies importance as the signed change in loss, C(D|W′) − C(D|W), and 2) oracle-abs adopts the absolute difference, |C(D|W′) − C(D|W)|. While both discourage pruning which increases the loss, the oracle-loss version encourages pruning which may decrease the loss, while oracle-abs penalizes any pruning in proportion to its change in loss, regardless of the direction of change.
While the oracle is optimal for this greedy procedure, it is prohibitively costly to compute, requiring ||W||_0 evaluations on a training dataset, one evaluation for each remaining non-zero parameter. Since estimation of parameter importance is key to both the accuracy and the efficiency of this pruning approach, we propose and evaluate several criteria in terms of performance and estimation cost.
# 2.2 CRITERIA FOR PRUNING
There are many heuristic criteria which are much more computationally efficient than the oracle. For the specific case of evaluating the importance of a feature map (and implicitly the set of convolutional kernels from which it is computed), reasonable criteria include: the combined ℓ2-norm of the kernel weights; the mean, standard deviation, or percentage of the feature map's activation; and mutual information between activations and predictions. We describe these criteria in the following paragraphs and propose a new criterion which is based on the Taylor expansion.
Minimum weight. Pruning by magnitude of kernel weights is perhaps the simplest possible criterion, and it does not require any additional computation during the fine-tuning process. In the case of pruning according to the norm of a set of weights, the criterion is evaluated as Θ_MW : R^{C_{ℓ−1} × p × p} → R, with Θ_MW(w) = (1/|w|) Σ_i w_i^2, where |w| is the dimensionality of the set of weights after vectorization. The motivation for this type of pruning is that a convolutional kernel with low ℓ2 norm detects less important features than those with a high norm. This can be aided during training by applying ℓ1 or ℓ2 regularization, which will push unimportant kernels to have smaller values.
Activation. One of the reasons for the popularity of the ReLU activation is the sparsity in activation that is induced, allowing convolutional layers to act as feature detectors. Therefore it is reasonable to assume that if an activation value (an output feature map) is small, then this feature detector is not important for the prediction task at hand. We may evaluate this by mean activation, $\Theta_{MA}: \mathbb{R}^{H_\ell \times W_\ell \times C_\ell} \to \mathbb{R}$, with $\Theta_{MA}(\mathbf{a}) = \frac{1}{|\mathbf{a}|}\sum_i a_i$ for activation $\mathbf{a} = \mathbf{z}_\ell^{(k)}$, or by the standard deviation of the activation, $\Theta_{MA\_std}(\mathbf{a}) = \sqrt{\frac{1}{|\mathbf{a}|}\sum_i (a_i - \mu_\mathbf{a})^2}$, where $\mu_\mathbf{a}$ is the mean activation.
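The weight and activation criteria above reduce to simple reductions over tensors. A sketch, assuming PyTorch tensors and illustrative function names:

```python
import torch

def theta_min_weight(w):
    """Minimum-weight criterion: mean squared kernel magnitude for one feature map."""
    w = w.flatten()
    return (w ** 2).sum() / w.numel()

def theta_mean_activation(a):
    """Mean activation of one output feature map z_l^{(k)}."""
    return a.mean()

def theta_std_activation(a):
    """Standard deviation of the activation (population form, matching the formula)."""
    return a.std(unbiased=False)
```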
2 While our notation is at times specific to 2D convolutions, the methods are applicable to 3D convolutions, as well as fully connected layers.
Mutual information. Mutual information (MI) is a measure of how much information is present in one variable about another variable. We apply MI as a criterion for pruning, $\Theta_{MI}: \mathbb{R}^{H_\ell \times W_\ell \times C_\ell} \to \mathbb{R}$, with $\Theta_{MI}(\mathbf{a}) = MI(\mathbf{a}, y)$, where $y$ is the target of the neural network. MI is defined for continuous variables, so to simplify computation, we exchange it with information gain (IG), which is defined for quantized variables: $IG(y|\mathbf{a}) = H(\mathbf{a}) + H(y) - H(\mathbf{a}, y)$, where $H(\mathbf{a})$ is the entropy of variable $\mathbf{a}$. We accumulate statistics on activations and ground truth for a number of updates, then quantize the values and compute IG.
Taylor expansion. We phrase pruning as an optimization problem, trying to find $\mathcal{W}'$ with a bounded number of non-zero elements that minimize $|\Delta C(h_i)| = |C(\mathcal{D}|\mathcal{W}') - C(\mathcal{D}|\mathcal{W})|$. With this approach based on the Taylor expansion, we directly approximate the change in the loss function from removing a particular parameter. Let $h_i$ be the output produced from parameter $i$. In the case of feature maps, $h = \{z_0^{(1)}, z_0^{(2)}, \ldots, z_L^{(C_L)}\}$. For notational convenience, we consider the cost function equally dependent on parameters and outputs computed from parameters: $C(\mathcal{D}|h_i) = C(\mathcal{D}|(\mathbf{w}, b)_i)$. Assuming independence of parameters, we have:

$$|\Delta C(h_i)| = |C(\mathcal{D}, h_i = 0) - C(\mathcal{D}, h_i)|, \qquad (3)$$
where C(D, hi = 0) is a cost value if output hi is pruned, while C(D, hi) is the cost if it is not pruned. While parameters are in reality inter-dependent, we already make an independence assumption at each gradient step during training.
To approximate $\Delta C(h_i)$, we use the first-degree Taylor polynomial. For a function $f(x)$, the Taylor expansion at point $x = a$ is
$$f(x) = \sum_{p=0}^{P} \frac{f^{(p)}(a)}{p!}(x - a)^p + R_P(x), \qquad (4)$$
where $f^{(p)}(a)$ is the $p$-th derivative of $f$ evaluated at point $a$, and $R_P(x)$ is the $P$-th order remainder. Approximating $C(\mathcal{D}, h_i = 0)$ with a first-order Taylor polynomial near $h_i = 0$, we have:
$$C(\mathcal{D}, h_i = 0) = C(\mathcal{D}, h_i) - \frac{\delta C}{\delta h_i} h_i + R_1(h_i = 0). \qquad (5)$$
The remainder $R_1(h_i = 0)$ can be calculated through the Lagrange form:
$$R_1(h_i = 0) = \frac{\delta^2 C}{\delta h_i^2}\bigg|_{h_i = \xi} \frac{h_i^2}{2}, \qquad (6)$$
where $\xi$ is a real number between 0 and $h_i$. However, we neglect this first-order remainder, largely due to the significant calculation required, but also in part because the widely-used ReLU activation function encourages a smaller second-order term.
Finally, by substituting Eq. (5) into Eq. (3) and ignoring the remainder, we have $\Theta_{TE}: \mathbb{R}^{H_l \times W_l \times C_l} \to \mathbb{R}^+$, with
$$\Theta_{TE}(h_i) = |\Delta C(h_i)| = \left|C(\mathcal{D}, h_i) - \frac{\delta C}{\delta h_i} h_i - C(\mathcal{D}, h_i)\right| = \left|\frac{\delta C}{\delta h_i} h_i\right|. \qquad (7)$$
Intuitively, this criterion prunes parameters that have an almost flat gradient of the cost function w.r.t. feature map $h_i$. This approach requires accumulation of the product of the activation and the gradient of the cost function w.r.t. the activation, which is easily computed from the same computations used for back-propagation. $\Theta_{TE}$ is computed for a multi-variate output, such as a feature map, by
$$\Theta_{TE}\left(z_l^{(k)}\right) = \left|\frac{1}{M} \sum_m \frac{\delta C}{\delta z_{l,m}^{(k)}} z_{l,m}^{(k)}\right|, \qquad (8)$$

where $M$ is the length of the vectorized feature map. For a minibatch with $T > 1$ examples, the criterion is computed for each example separately and averaged over $T$.
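In a framework with automatic differentiation, Eq. (8) can be accumulated with hooks during the ordinary backward pass. A sketch under that assumption (illustrative names, not our implementation):

```python
import torch

class TaylorCriterion:
    """Accumulate |mean_over_positions(activation * gradient)| per feature map."""
    def __init__(self, module):
        self.scores = None
        module.register_forward_hook(self._save_act)

    def _save_act(self, module, inputs, output):
        self.activation = output
        output.register_hook(self._accumulate)   # fires during backward

    def _accumulate(self, grad):
        # (N, C, H, W): average act*grad over spatial positions (the sum over m
        # in Eq. 8), take the absolute value per example, then average the batch.
        contrib = (self.activation * grad).mean(dim=(2, 3)).abs().mean(dim=0)
        self.scores = contrib if self.scores is None else self.scores + contrib
```

After a forward and backward pass over a minibatch, `scores` holds the running per-feature-map criterion, which can then be normalized per layer as described below.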
Independently of our work, Figurnov et al. (2016) came up with a similar metric based on the Taylor expansion, called impact, to evaluate the importance of spatial cells in a convolutional layer. This shows that the same metric can be applied to evaluate the importance of different groups of parameters.
Relation to Optimal Brain Damage. The Taylor criterion proposed above relies on approximating the change in loss caused by removing a feature map. The core idea is the same as in Optimal Brain Damage (OBD) (LeCun et al., 1990). Here we consider the differences more carefully.
The primary difference is the treatment of the first-order term of the Taylor expansion, in our notation $y = \frac{\delta C}{\delta h} h$ for cost function $C$ and hidden layer activation $h$. After sufficient training epochs, the gradient term tends to zero, $\frac{\delta C}{\delta h} \to 0$, and $E(y) = 0$. At face value $y$ offers little useful information, hence OBD regards the term as zero and focuses on the second-order term.
However, the variance of $y$ is non-zero and correlates with the stability of the local function w.r.t. activation $h$. By considering the absolute change in the cost3 induced by pruning (as in Eq. 3), we use the absolute value of the first-order term, $|y|$. Under the assumption that samples come from an independent and identical distribution, $E(|y|) = \sigma\sqrt{2/\pi}$, where $\sigma$ is the standard deviation of $y$, known as the expected value of the half-normal distribution. So, while $y$ tends to zero, the expectation of $|y|$ is proportional to the variance of $y$, a value which is empirically more informative as a pruning criterion.
As an additional benefit, we avoid the computation of the second-order Taylor expansion term, or its simplification, the diagonal of the Hessian, as required in OBD.
We found it important to compare the proposed Taylor criterion to OBD. As described in the original papers (LeCun et al., 1990; 1998), OBD can be implemented efficiently, similarly to the standard back-propagation algorithm, doubling backward propagation time and memory usage when used together with standard fine-tuning. An efficient implementation of the original OBD algorithm might require significant changes to frameworks based on automatic differentiation, like Theano, in order to compute only the diagonal of the Hessian instead of the full matrix. Several researchers have tried to tackle this problem with approximation techniques (Martens, 2010; Martens et al., 2012). In our implementation, we use an efficient way of computing the Hessian-vector product (Pearlmutter, 1994) and the matrix diagonal approximation proposed by Bekas et al. (2007); please refer to the appendix for more details. With our current implementation, OBD is 30 times slower than the Taylor technique for saliency estimation, and 3 times slower for iterative pruning; however, a different implementation might be only 50% slower, as mentioned in the original paper.
Average Percentage of Zeros (APoZ). Hu et al. (2016) proposed to explore sparsity in activations for network pruning. The ReLU activation function imposes sparsity during inference, and the average percentage of positive activations at the output can determine the importance of a neuron. Intuitively, this is a good criterion; however, feature maps at the first layers have similar APoZ regardless of the network's target, as they learn to be Gabor-like filters. We use APoZ to estimate the saliency of feature maps.
# 2.3 NORMALIZATION
Some criteria return "raw" values, whose scale varies with the depth of the parameter's layer in the network. A simple layer-wise $\ell_2$-normalization can achieve adequate rescaling across layers:
6(2")=
# 2.4 FLOPS REGULARIZED PRUNING
One of the main reasons to apply pruning is to reduce the number of operations in the network. Feature maps from different layers require different amounts of computation due to the number and sizes of input feature maps and convolution kernels. To take this into account we introduce FLOPs regularization:
$$\Theta\left(z_l^{(k)}\right) = \Theta\left(z_l^{(k)}\right) - \lambda \, \Theta_l^{flops}, \qquad (9)$$

where $\lambda$ controls the amount of regularization. For our experiments, we use $\lambda = 10^{-3}$. $\Theta_l^{flops}$ is computed under the assumption that convolution is implemented as a sliding window (see Appendix). Other regularization conditions may be applied, e.g. storage size, kernel sizes, or memory footprint.
3 OBD approximates the signed difference in loss, while our method approximates the absolute difference in loss. We find in our results that pruning based on absolute difference yields better accuracy.
Figure 2: Global statistics of oracle ranking, shown by layer for Birds-200 transfer learning.
Figure 3: Pruning without ï¬ne-tuning using oracle ranking for Birds-200 transfer learning.
# 3 RESULTS
We empirically study the pruning criteria and procedure detailed in the previous section for a variety of problems. We focus many experiments on transfer learning problems, a setting where pruning seems to excel. We also present results for pruning large networks on their original tasks for more direct comparison with the existing pruning literature. Experiments are performed within Theano (Theano Development Team, 2016). Training and pruning are performed on the respective training sets for each problem, while results are reported on appropriate holdout sets, unless otherwise indicated. For all experiments we prune a single feature map at every pruning iteration, allowing ï¬ne-tuning and re-evaluation of the criterion to account for dependency between parameters.
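To make the procedure concrete, the loop can be sketched as follows, assuming the gated layers above and hypothetical helpers `compute_scores` (returning, e.g., normalized Taylor scores keyed by (layer, map)) and `finetune`:

```python
def iterative_pruning(layers, compute_scores, finetune,
                      num_iterations, updates_per_iteration=30):
    """Score, prune the single least important feature map, fine-tune, repeat."""
    for _ in range(num_iterations):
        scores = compute_scores(layers)         # e.g. layer-normalized Taylor scores
        (li, k) = min(scores, key=scores.get)   # least important active map
        layers[li].prune(k)
        finetune(updates_per_iteration)         # brief SGD fine-tuning between prunes
```

Re-scoring after every removal is what lets the criterion account for the dependency between parameters mentioned above.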
# 3.1 CHARACTERIZING THE ORACLE RANKING
We begin by explicitly computing the oracle for a single pruning iteration of a visual transfer learning problem. We ï¬ne-tune the VGG-16 network (Simonyan & Zisserman, 2014) for classiï¬cation of bird species using the Caltech-UCSD Birds 200-2011 dataset (Wah et al., 2011). The dataset consists of nearly 6000 training images and 5700 test images, covering 200 species. We ï¬ne-tune VGG-16 for 60 epochs with learning rate 0.0001 to achieve a test accuracy of 72.2% using uncropped images.
To compute the oracle, we evaluate the change in loss caused by removing each individual feature map from the fine-tuned VGG-16 network. (See Appendix A.3 for additional analysis.) We rank feature maps by their contributions to the loss, where rank 1 indicates the most important feature map (removing it results in the highest increase in loss) and rank 4224 indicates the least important. Statistics of global ranks are shown in Fig. 2 grouped by convolutional layer. We observe: (1) Median global importance tends to decrease with depth. (2) Layers with max-pooling tend to be more important than those without. (VGG-16 has pooling after layers 2, 4, 7, 10, and 13.) However, (3) maximum and minimum ranks show that every layer has some feature maps that are globally important and others that are globally less important. Taken together with the results of subsequent experiments, we opt for encouraging a balanced pruning that distributes selection across all layers.
Next, we iteratively prune the network using the pre-computed oracle ranking. In this experiment, we do not update the parameters of the network or the oracle ranking between iterations. Training accuracy is illustrated in Fig. 3 over many pruning iterations. Surprisingly, pruning by smallest absolute change in loss (Oracle-abs) yields higher accuracy than pruning by the net effect on loss (Oracle-loss). Even though the oracle indicates that removing some feature maps individually may decrease loss, instability accumulates due to the large absolute changes that are induced. These results support pruning by absolute difference in cost, as constructed in Eq. 1.
# 3.2 EVALUATING PROPOSED CRITERIA VERSUS THE ORACLE
To evaluate computationally efï¬cient criteria as substitutes for the oracle, we compute Spearmanâs rank correlation, an estimate of how well two predictors provide monotonically related outputs,
**AlexNet / Flowers-102**

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor |
|---|---|---|---|---|---|---|
| Per layer | 0.17 | 0.65 | 0.67 | 0.54 | 0.64 | 0.77 |
| All layers | 0.28 | 0.51 | 0.53 | 0.41 | 0.68 | 0.37 |
| (w/ ℓ2-norm) | 0.13 | 0.63 | 0.61 | 0.60 | - | 0.75 |

**VGG-16 / Birds-200**

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor | Mutual Info. |
|---|---|---|---|---|---|---|---|
| Per layer | 0.27 | 0.56 | 0.57 | 0.35 | 0.59 | 0.73 | 0.28 |
| All layers | 0.34 | 0.35 | 0.30 | 0.43 | 0.65 | 0.14 | 0.35 |
| (w/ ℓ2-norm) | 0.33 | 0.64 | 0.66 | 0.51 | - | 0.73 | 0.47 |

**AlexNet / Birds-200**

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor |
|---|---|---|---|---|---|---|
| Per layer | 0.36 | 0.57 | 0.65 | 0.42 | 0.54 | 0.81 |
| All layers | 0.32 | 0.37 | 0.51 | 0.28 | 0.61 | 0.37 |
| (w/ ℓ2-norm) | 0.23 | 0.54 | 0.57 | 0.49 | - | 0.78 |

**VGG-16 / Flowers-102**

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor |
|---|---|---|---|---|---|---|
| Per layer | 0.19 | 0.51 | 0.47 | 0.36 | 0.21 | 0.60 |
| All layers | 0.35 | 0.53 | 0.45 | 0.61 | 0.28 | 0.02 |
| (w/ ℓ2-norm) | 0.28 | 0.66 | 0.65 | 0.61 | - | 0.70 |

**AlexNet / ImageNet**

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor |
|---|---|---|---|---|---|---|
| Per layer | 0.57 | 0.09 | 0.19 | 0.06 | 0.58 | 0.58 |
| All layers | 0.67 | 0.00 | 0.13 | -0.08 | 0.72 | 0.11 |
| (w/ ℓ2-norm) | 0.44 | 0.10 | 0.19 | 0.19 | - | 0.55 |

Table 1: Spearman's rank correlation of criteria vs. oracle for convolutional feature maps of VGG-16 and AlexNet fine-tuned on Birds-200 and Flowers-102 datasets, and AlexNet trained on ImageNet.
Figure 4: Pruning of feature maps in VGG-16 ï¬ne-tuned on the Birds-200 dataset.
even if their relationship is not linear. Given the difference between oracle4 and criterion ranks, $d_i = \text{rank}(\Theta_{oracle}(i)) - \text{rank}(\Theta_{criterion}(i))$ for each parameter $i$, the rank correlation is computed:
$$S = 1 - \frac{6}{N(N^2 - 1)} \sum_{i=1}^{N} d_i^2, \qquad (10)$$
where N is the number of parameters (and the highest rank). This correlation coefï¬cient takes values in [â1, 1], where â1 implies full negative correlation, 0 no correlation, and 1 full positive correlation.
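For reference, Eq. (10) in code. This sketch assumes no tied ranks; `scipy.stats.spearmanr` would handle the general case:

```python
import numpy as np

def spearman(theta_oracle, theta_criterion):
    """Spearman rank correlation between two saliency rankings (Eq. 10)."""
    r1 = np.argsort(np.argsort(theta_oracle))      # 0-based ranks per parameter
    r2 = np.argsort(np.argsort(theta_criterion))
    d = r1 - r2                                    # rank differences d_i
    n = len(d)
    return 1.0 - 6.0 * (d ** 2).sum() / (n * (n ** 2 - 1))
```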
We show Spearman's correlation in Table 1 to compare the oracle-abs ranking to rankings by different criteria on a set of networks/datasets, some of which are introduced later. Data-dependent criteria (all except weight magnitude) are computed on training data during the fine-tuning before or between pruning iterations. As a sanity check, we evaluate random ranking and observe 0.0 correlation across all layers. "Per layer" analysis shows ranking within each convolutional layer, while "All layers" describes ranking across layers. While several criteria do not scale well across layers with raw values, a layer-wise $\ell_2$-normalization significantly improves performance. The Taylor criterion has the highest correlation among the criteria, both within layers and across layers (with $\ell_2$ normalization). OBD shows the best correlation across layers when no normalization is used; it also shows the best results for correlation on the ImageNet dataset. (See Appendix for further analysis.)
# 3.3 PRUNING FINE-TUNED IMAGENET NETWORKS
We now evaluate the full iterative pruning procedure on two transfer learning problems. We focus on reducing the number of convolutional feature maps and the total estimated floating point operations (FLOPs). Fine-grained recognition is difficult for relatively small datasets without relying on transfer learning.
4 We use oracle-abs because of its better performance in the previous experiment.
Figure 5: Pruning of feature maps in AlexNet fine-tuned on Flowers-102.
Branson et al. (2014) show that training a CNN from scratch on the Birds-200 dataset achieves a test accuracy of only 10.9%. We compare results to training a randomly initialized CNN with half the number of parameters per layer, denoted "from scratch".
Fig. 4 shows pruning of VGG-16 after fine-tuning on the Birds-200 dataset (as described previously). At each pruning iteration, we remove a single feature map and then perform 30 minibatch SGD updates with batch-size 32, momentum 0.9, learning rate $10^{-4}$, and weight decay $10^{-4}$. The figure depicts accuracy relative to the pruning rate (left) and estimated GFLOPs (right). The Taylor criterion shows the highest accuracy for nearly the entire range of pruning ratios, and with FLOPs regularization demonstrates the best performance relative to the number of operations. OBD shows slightly worse pruning performance in terms of parameters, and significantly worse in terms of FLOPs.
In Fig. 5, we show pruning of the CaffeNet implementation of AlexNet (Krizhevsky et al., 2012) after adapting it to the Oxford Flowers 102 dataset (Nilsback & Zisserman, 2008), with 2040 training and 6129 test images from 102 species of flowers. Criteria correlation with oracle-abs is summarized in Table 1. We initially fine-tune the network for 20 epochs using a learning rate of 0.001, achieving a final test accuracy of 80.1%. Then pruning proceeds as previously described for Birds-200, except with only 10 mini-batch updates between pruning iterations. We observe the superior performance of the Taylor and OBD criteria in both number of parameters and GFLOPs.
We observed that the Taylor criterion shows the best performance, closely followed by OBD, which has a slightly lower Spearman's rank correlation coefficient. Implementing OBD takes more effort because it requires computing the diagonal of the Hessian, and it is 50% to 300% slower than the Taylor criterion, which relies on the first-order gradient only.
Fig. 6 shows pruning with the Taylor technique and a varying number of ï¬ne-tuning updates between pruning iterations. Increasing the number of updates results in higher accuracy, but at the cost of additional runtime of the pruning procedure.
During pruning we observe a small drop in accuracy. One of the reasons is fine-tuning between pruning iterations. Accuracy of the initial network can be improved with longer fine-tuning and a search for better optimization parameters. For example, accuracy of the unpruned VGG-16 network on Birds-200 goes up to 75% after an extra 128k updates, and AlexNet on Flowers-102 goes up to 82.9% after 130k updates. It should be noted that with further fine-tuning of the pruned networks we can achieve higher accuracy as well; therefore the one-to-one comparison of accuracies is only approximate.
# 3.4 PRUNING A RECURRENT 3D-CNN NETWORK FOR HAND GESTURE RECOGNITION
Molchanov et al. (2016) learn to recognize 25 dynamic hand gestures in streaming video with a large recurrent neural network. The network is constructed by adding recurrent connections to a 3D-CNN pretrained on the Sports-1M video dataset (Karpathy et al., 2014) and fine-tuning on a gesture dataset. The full network achieves an accuracy of 80.7% when trained on the depth modality, but a single inference requires an estimated 37.8 GFLOPs, too much for deployment on an embedded GPU. After several iterations of pruning with the Taylor criterion with learning rate 0.0003, momentum 0.9, and FLOPs regularization $10^{-3}$, we reduce inference to 3.0 GFLOPs, as shown in Fig. 7. While pruning
Figure 6: Varying the number of minibatch updates between pruning iterations with AlexNet/Flowers-102 and the Taylor criterion.
Figure 7: Pruning of a recurrent 3D-CNN for dynamic hand gesture recognition (Molchanov et al., 2016).
Figure 8: Pruning of AlexNet on ImageNet with a varying number of updates between pruning iterations.
increases classification error by nearly 6%, additional fine-tuning restores much of the lost accuracy, yielding a final pruned network with a 12.6× reduction in GFLOPs and only a 2.5% loss in accuracy.
# 3.5 PRUNING NETWORKS FOR IMAGENET
We also test our pruning scheme on the large-scale ImageNet classification task. In the first experiment, we begin with a trained CaffeNet implementation of AlexNet with 79.2% top-5 validation accuracy. Between pruning iterations, we fine-tune with learning rate $10^{-4}$, momentum 0.9, weight decay $10^{-4}$, batch size 32, and drop-out 50%. Using a subset of 5000 training images, we compute oracle-abs and Spearman's rank correlation with the criteria, as shown in Table 1. Pruning traces are illustrated in Fig. 8.
We observe: 1) Taylor performs better than random or minimum weight pruning when 100 updates are used between pruning iterations. When results are displayed w.r.t. FLOPs, the difference with random pruning is only 0%-4%, but the difference is higher, 1%-10%, when plotted with the number of feature maps pruned. 2) Increasing the number of updates from 100 to 1000 improves performance of pruning significantly for both the Taylor criterion and random pruning.
Figure 9: Pruning of the VGG-16 network on ImageNet, with additional fine-tuning at 11.5 and 8 GFLOPs.
**AlexNet / Flowers-102, 1.46 GFLOPs** (unpruned: 80.1% accuracy; 41% feature maps / 0.4 GFLOPs: 79.8% (-0.3%); 19.5% feature maps / 0.2 GFLOPs: 74.1% (-6.0%))

| Hardware | Batch | Time, ms (full) | Time (41% maps) | Time (19.5% maps) |
|---|---|---|---|---|
| CPU: Intel Core i7-5930K | 16 | 226.4 | 121.4 (1.9x) | 87.0 (2.6x) |
| GPU: GeForce GTX TITAN X (Pascal) | 16 | 4.8 | 2.4 (2.0x) | 1.9 (2.5x) |
| GPU: GeForce GTX TITAN X (Pascal) | 512 | 88.3 | 36.6 (2.4x) | 27.4 (3.2x) |
| GPU: NVIDIA Jetson TX1 | 32 | 169.2 | 73.6 (2.3x) | 58.6 (2.9x) |

**VGG-16 / ImageNet, 30.96 GFLOPs** (unpruned: 89.3%; 66% feature maps / 11.5 GFLOPs: 87.0% (-2.3%); 52% feature maps / 8.0 GFLOPs: 84.5% (-4.8%))

| Hardware | Batch | Time, ms (full) | Time (66% maps) | Time (52% maps) |
|---|---|---|---|---|
| CPU: Intel Core i7-5930K | 16 | 2564.7 | 1483.3 (1.7x) | 1218.4 (2.1x) |
| GPU: GeForce GTX TITAN X (Pascal) | 16 | 68.3 | 31.0 (2.2x) | 20.2 (3.4x) |
| GPU: NVIDIA Jetson TX1 | 4 | 456.6 | 182.5 (2.5x) | 138.2 (3.3x) |

**R3DCNN / nvGesture, 37.8 GFLOPs** (unpruned: 80.7%; 25% feature maps / 3 GFLOPs: 78.2% (-2.5%))

| Hardware | Batch | Time, ms (full) | Time (25% maps) |
|---|---|---|---|
| GPU: GeForce GT 730M | 1 | 438.0 | 85.0 (5.2x) |

Table 2: Actual speed up of networks pruned by the Taylor criterion for various hardware setups. All measurements were performed with PyTorch with cuDNN v5.1.0, except R3DCNN, which was implemented in C++ with cuDNN v4.0.4. Results for the ImageNet dataset are reported as top-5 accuracy on the validation set. Results on AlexNet / Flowers-102 are reported for pruning with 1000 updates between iterations and no fine-tuning after pruning.
For a second experiment, we prune a trained VGG-16 network with the same parameters as before, except enabling FLOPs regularization. We stop pruning at two points, 11.5 and 8.0 GFLOPs, and fine-tune both models for an additional five epochs with learning rate $10^{-4}$. Fine-tuning after pruning significantly improves results: the network pruned to 11.5 GFLOPs improves from 83% to 87% top-5 validation accuracy, and the network pruned to 8.0 GFLOPs improves from 77.8% to 84.5%.
# 3.6 SPEED UP MEASUREMENTS
During pruning we measured the reduction in computation by FLOPs, which is a common practice (Han et al., 2015; Lavin, 2015a;b). Improvements in FLOPs result in monotonically decreasing inference time because entire feature maps are removed from each layer. However, the time consumed by inference depends on the particular implementation of the convolution operator, the parallelization algorithm, hardware, scheduling, memory transfer rate, etc. Therefore, we measure the improvement in inference time for selected networks to see the real speed up compared to the unpruned networks in Table 2. We observe significant speed ups with the proposed pruning scheme.
# 4 CONCLUSIONS
We propose a new scheme for iteratively pruning deep convolutional neural networks. We find: 1) CNNs may be successfully pruned by iteratively removing the least important parameters (feature maps in this case) according to heuristic selection criteria; 2) a Taylor expansion-based criterion demonstrates significant improvement over other criteria; 3) per-layer normalization of the criterion is important to obtain global scaling.
# REFERENCES
Jose M Alvarez and Mathieu Salzmann. Learning the number of neurons in deep networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 2262-2270. Curran Associates, Inc., 2016.

Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured pruning of deep convolutional neural networks. arXiv preprint arXiv:1512.08571, 2015. URL http://arxiv.org/abs/1512.08571.

Costas Bekas, Effrosyni Kokiopoulou, and Yousef Saad. An estimator for the diagonal of a matrix. Applied Numerical Mathematics, 57(11):1214-1229, 2007.

Steve Branson, Grant Van Horn, Serge Belongie, and Pietro Perona. Bird species categorization using pose normalized deep convolutional nets. arXiv preprint arXiv:1406.2952, 2014.

Yann Dauphin, Harm de Vries, and Yoshua Bengio. Equilibrated adaptive learning rates for non-convex optimization. In Advances in Neural Information Processing Systems, pp. 1504-1512, 2015.
Mikhail Figurnov, Aizhan Ibraimova, Dmitry P Vetrov, and Pushmeet Kohli. PerforatedCNNs: Acceleration through elimination of redundant convolutions. In Advances in Neural Information Processing Systems, pp. 947-955, 2016.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 2015. URL http://arxiv.org/abs/1502.02551.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient inference engine on compressed deep neural network. In Proceedings of the 43rd International Symposium on Computer Architecture, ISCA '16, pp. 243-254, Piscataway, NJ, USA, 2016. IEEE Press.

Babak Hassibi and David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems (NIPS), pp. 164-171, 1993.

Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016.

Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.

Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Andrew Lavin. maxDNN: An efficient convolution kernel for deep learning with Maxwell GPUs. CoRR, abs/1501.06633, 2015a. URL http://arxiv.org/abs/1501.06633.

Andrew Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015b.

Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554-2564, 2016.

Yann LeCun, J. S. Denker, S. Solla, R. E. Howard, and L. D. Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems (NIPS), 1990.

Yann LeCun, Leon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient BackProp, pp. 9-50. Springer Berlin Heidelberg, Berlin, Heidelberg, 1998.

James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 735-742, 2010.

James Martens, Ilya Sutskever, and Kevin Swersky. Estimating the Hessian by back-propagating curvature. arXiv preprint arXiv:1206.6464, 2012.

Pavlo Molchanov, Xiaodong Yang, Shalini Gupta, Kihwan Kim, Stephen Tyree, and Jan Kautz. Online detection and classification of dynamic hand gestures with recurrent 3D convolutional neural network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

M-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008.
Barak A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6:147-160, 1994.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016. URL http://arxiv.org/abs/1603.05279.

Russell Reed. Pruning algorithms - a survey. IEEE Transactions on Neural Networks, 4(5):740-747, 1993.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

Suraj Srinivas and R. Venkatesh Babu. Data-free parameter pruning for deep neural networks. In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam (eds.), Proceedings of the British Machine Vision Conference (BMVC), pp. 31.1-31.12. BMVA Press, September 2015.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.

Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074-2082, 2016.

Hao Zhou, Jose M. Alvarez, and Fatih Porikli. Less is more: Towards compact CNNs. In European Conference on Computer Vision, pp. 662-677, Amsterdam, the Netherlands, October 2016.
A APPENDIX
# A.1 FLOPS COMPUTATION
To compute the number of ï¬oating-point operations (FLOPs), we assume convolution is implemented as a sliding window and that the nonlinearity function is computed for free. For convolutional kernels we have:
$$\text{FLOPs} = 2HW(C_{in}K^2 + 1)C_{out}, \qquad (11)$$
where H, W and Cin are height, width and number of channels of the input feature map, K is the kernel width (assumed to be symmetric), and Cout is the number of output channels.
For fully connected layers we compute FLOPs as:
$$\text{FLOPs} = (2I - 1)O, \qquad (12)$$
where I is the input dimensionality and O is the output dimensionality.
We apply FLOPs regularization during pruning to prune neurons with higher FLOPs ï¬rst. FLOPs per convolutional neuron in every layer:
VGG16: $\Theta^{flops}$ = [3.1, 57.8, 14.1, 28.9, 7.0, 14.5, 14.5, 3.5, 7.2, 7.2, 1.8, 1.8, 1.8, 1.8]
AlexNet: $\Theta^{flops}$ = [2.3, 1.7, 0.8, 0.6, 0.6]
R3DCNN: $\Theta^{flops}$ = [5.6, 86.9, 21.7, 43.4, 5.4, 10.8, 1.4, 1.4]
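A sketch of Eqs. (11)-(12), from which per-layer costs like those listed above can be derived (up to the units used there):

```python
def conv_flops(h, w, c_in, k, c_out):
    """FLOPs of a conv layer under the sliding-window assumption (Eq. 11)."""
    return 2 * h * w * (c_in * k ** 2 + 1) * c_out

def fc_flops(i, o):
    """FLOPs of a fully connected layer (Eq. 12)."""
    return (2 * i - 1) * o
```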
# A.2 NORMALIZATION ACROSS LAYERS
Scaling a criterion across layers is very important for pruning. If the criterion is not properly scaled, then a hand-tuned multiplier would need to be selected for each layer. Statistics of feature map ranking by different criteria are shown in Fig. 10. Without normalization (Fig. 10a-10c), the weight magnitude criterion tends to rank feature maps from the first layers as more important than those from the last layers; the activation criterion ranks middle layers as more important; and Taylor ranks first layers higher. After $\ell_2$ normalization (Fig. 10d-10f), all criteria have a shape more similar to the oracle, where each layer has some feature maps which are highly important and others which are unimportant.
(a) Weight (b) Activation (mean) (c) Taylor (d) Weight + ℓ2 (e) Activation (mean) + ℓ2 (f) Taylor + ℓ2
Figure 10: Statistics of feature map ranking by raw criteria values (top) and by criteria values after $\ell_2$ normalization (bottom).
| | MI | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor |
|---|---|---|---|---|---|---|---|
| Per layer: | | | | | | | |
| Layer 1 | 0.41 | 0.40 | 0.65 | 0.78 | 0.36 | 0.54 | 0.95 |
| Layer 2 | 0.23 | 0.57 | 0.56 | 0.59 | 0.33 | 0.78 | 0.90 |
| Layer 3 | 0.14 | 0.55 | 0.48 | 0.45 | 0.51 | 0.66 | 0.74 |
| Layer 4 | 0.26 | 0.23 | 0.58 | 0.42 | 0.10 | 0.36 | 0.80 |
| Layer 5 | 0.17 | 0.28 | 0.49 | 0.52 | 0.15 | 0.54 | 0.69 |
| Layer 6 | 0.21 | 0.18 | 0.41 | 0.48 | 0.16 | 0.49 | 0.63 |
| Layer 7 | 0.12 | 0.19 | 0.54 | 0.49 | 0.38 | 0.55 | 0.71 |
| Layer 8 | 0.18 | 0.23 | 0.43 | 0.42 | 0.30 | 0.50 | 0.54 |
| Layer 9 | 0.21 | 0.18 | 0.50 | 0.55 | 0.35 | 0.53 | 0.61 |
| Layer 10 | 0.26 | 0.15 | 0.59 | 0.60 | 0.45 | 0.61 | 0.66 |
| Layer 11 | 0.41 | 0.12 | 0.61 | 0.65 | 0.45 | 0.64 | 0.72 |
| Layer 12 | 0.47 | 0.15 | 0.60 | 0.66 | 0.39 | 0.66 | 0.72 |
| Layer 13 | 0.61 | 0.21 | 0.77 | 0.76 | 0.65 | 0.76 | 0.77 |
| Mean | 0.28 | 0.27 | 0.56 | 0.57 | 0.35 | 0.59 | 0.73 |
| All layers: | | | | | | | |
| No normalization | 0.35 | 0.34 | 0.35 | 0.30 | 0.43 | 0.65 | 0.14 |
| ℓ1 normalization | 0.47 | 0.37 | 0.63 | 0.63 | 0.52 | 0.65 | 0.71 |
| ℓ2 normalization | 0.47 | 0.33 | 0.64 | 0.66 | 0.51 | 0.60 | 0.73 |
| Min-max normalization | 0.27 | 0.17 | 0.52 | 0.57 | 0.42 | 0.54 | 0.67 |
Table 3: Spearmanâs rank correlation of criteria vs oracle-abs in VGG-16 ï¬ne-tuned on Birds 200.
# A.3 ORACLE COMPUTATION FOR VGG-16 ON BIRDS-200
We compute the change in the loss caused by removing individual feature maps from the VGG-16 network, after ï¬ne-tuning on the Birds-200 dataset. Results are illustrated in Fig. 11a-11b for each feature map in layers 1 and 13, respectively. To compute the oracle estimate for a feature map, we remove the feature map and compute the network prediction for each image in the training set using the central crop with no data augmentation or dropout. We draw the following conclusions:
⢠The contribution of feature maps range from positive (above the red line) to slightly negative (below the red line), implying the existence of some feature maps which decrease the training cost when removed.
⢠There are many feature maps with little contribution to the network output, indicated by almost zero change in loss when removed.
⢠Both layers contain a small number of feature maps which induce a signiï¬cant increase in the loss when removed.
(a) Layer 1 (b) Layer 13
Figure 11: Change in training loss as a function of the removal of a single feature map from the VGG-16 network after ï¬ne-tuning on Birds-200. Results are plotted for two convolutional layers w.r.t. the index of the removed feature map index. The loss with all feature maps, 0.00461, is indicated with a red horizontal line.
Figure 12: Comparison of our iterative pruning with pruning by regularization
Table 3 contains a layer-by-layer listing of Spearman's rank correlation of several criteria with the ranking of oracle-abs. In this more detailed comparison, we see the Taylor criterion shows higher correlation for all individual layers. For several methods including Taylor, the worst correlations are observed for the middle of the network, layers 5-10. We also evaluate several techniques for normalization of the raw criteria values for comparison across layers. The table shows the best performance is obtained by $\ell_2$ normalization, hence we select it for our method.
# A.4 COMPARISON WITH WEIGHT REGULARIZATION
Han et al. (2015) find that fine-tuning with high $\ell_1$ or $\ell_2$ regularization causes unimportant connections to be suppressed. Connections with energy lower than some threshold can be removed on the assumption that they do not contribute much to subsequent layers. The same work also finds that thresholds must be set separately for each layer depending on its sensitivity to pruning. The procedure to evaluate sensitivity is time-consuming as it requires pruning layers independently during evaluation.
The idea of pruning with high regularization can be extended to removing the kernels for an entire feature map if the $\ell_2$ norm of those kernels is below a predefined threshold. We compare our approach with this regularization-based pruning for the task of pruning the last convolutional layer of VGG-16 fine-tuned for Birds-200. By considering only a single layer, we avoid the need to compute layerwise sensitivity. Parameters for optimization during fine-tuning are the same as in other experiments with the Birds-200 dataset. For the regularization technique, the pruning threshold is set to $\sigma = 10^{-5}$ while we vary the regularization coefficient $\gamma$ of the $\ell_2$ norm on each feature map kernel.5 We prune only kernel weights, while keeping the bias to maintain the same expected output.
A comparison between pruning based on regularization and our greedy scheme is illustrated in Fig. 12. We observe that our approach has higher test accuracy for the same number of remaining unpruned feature maps, when pruning 85% or more of the feature maps. We observe that with high regularization all weights tend to zero, not only unimportant weights as Han et al. (2015) observe in the case of ImageNet networks. The intuition here is that with regularization we push all weights down and potentially can affect important connections for transfer learning, whereas in our iterative procedure we only remove unimportant parameters leaving others untouched.
# A.5 COMBINATION OF CRITERIA
One of the possibilities to improve saliency estimation is to combine several criteria together. One straightforward combination is Taylor and mean activation of the neuron. We compute the joint criterion as $\Theta_{joint}(z_l^{(k)}) = (1-\lambda)\hat{\Theta}_{TE}(z_l^{(k)}) + \lambda\hat{\Theta}_{MA}(z_l^{(k)})$ and perform a grid search over the parameter $\lambda$ in Fig. 13. The highest correlation value for each dataset is marked with a vertical bar, together with $\lambda$ and the gain. We observe that the gain of linearly combining criteria is negligibly small (see the $\Delta$'s in the figure).
5 In our implementation, the regularization coefficient is multiplied by the learning rate, equal to $10^{-4}$.
[Figure 13 plot: Spearman correlation vs. $\lambda$ for criterion $= (1 - \lambda)\cdot$Taylor $+ \lambda\cdot$Activation; curves for VGG-16/Birds-200, AlexNet/Flowers-102, VGG-16/Flowers-102, AlexNet/ImageNet, and AlexNet/Birds-200]
Figure 13: Spearman rank correlation for linear combination of criteria. The per layer metric is used. Each $\Delta$ indicates the gain in correlation for one experiment.
# A.6 OPTIMAL BRAIN DAMAGE IMPLEMENTATION
OBD computes the saliency of a parameter by computing a product of the squared magnitude of the parameter and the corresponding element on the diagonal of the Hessian. For many deep learning frameworks, an efficient implementation of the diagonal evaluation is not straightforward and approximation techniques must be applied. Our implementation of the Hessian diagonal computation was inspired by the work of Dauphin et al. (2015), where the technique proposed by Bekas et al. (2007) was used to evaluate SGD preconditioned with the Jacobi preconditioner. It was shown that the diagonal of the Hessian can be approximated as:
$$\text{diag}(\mathcal{H}) = \mathbb{E}[\mathbf{v} \odot \mathcal{H}\mathbf{v}] = \mathbb{E}[\mathbf{v} \odot \nabla(\nabla C \cdot \mathbf{v})], \qquad (13)$$
where $\odot$ is the element-wise product, $\mathbf{v}$ are random vectors with entries $\pm 1$, and $\nabla$ is the gradient operator. To compute saliency with OBD, we randomly draw $\mathbf{v}$ and compute the diagonal over 10 iterations for a single minibatch, for 1000 minibatches. We found that this number of minibatches is required to compute a close approximation of the Hessian's diagonal (which we verified). Computing saliency this way is computationally expensive for iterative pruning, so we use a slightly different but more efficient procedure. Before the first pruning iteration, saliency is initialized from values computed off-line with 1000 minibatches and 10 iterations, as described above. Then, at every minibatch we compute the OBD criterion with only one iteration and apply an exponential moving average with a coefficient of 0.99. We verified that this computes a close approximation to the Hessian's diagonal.
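A sketch of this stochastic estimator using reverse-mode automatic differentiation, shown in PyTorch for illustration (not the paper's Theano implementation):

```python
import torch

def hessian_diag(loss, params, num_samples=10):
    """Estimate diag(H) as the average of v * grad(grad(C) . v) over Rademacher v."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_g = torch.cat([g.reshape(-1) for g in grads])
    est = torch.zeros_like(flat_g)
    for _ in range(num_samples):
        # Random vector with entries +/-1.
        v = (torch.randint(0, 2, flat_g.shape) * 2 - 1).to(flat_g.dtype)
        # Hessian-vector product via a second backward pass (Pearlmutter-style).
        hv = torch.autograd.grad(flat_g @ v, params, retain_graph=True)
        est += v * torch.cat([h.reshape(-1) for h in hv])
    return est / num_samples
```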
# A.7 CORRELATION OF TAYLOR CRITERION WITH GRADIENT AND ACTIVATION
The Taylor criterion is composed of both an activation term and a gradient term. In Figure 14, we depict the correlation between the Taylor criterion and each constituent part. We consider expected absolute value of the gradient instead of the mean, because otherwise it tends to zero. The plots are computed from pruning criteria for an unpruned VGG network ï¬ne-tuned for the Birds-200 dataset. (Values are shown after layer-wise normalization). Figure 14(a-b) depict the Taylor criterion in the y-axis for all neurons w.r.t. the gradient and activation components, respectively. The bottom 10% of neurons (lowest Taylor criterion, most likely to be pruned) are depicted in red, while the top 10% are shown in green. Considering all neurons, both gradient and activation components demonstrate a linear trend with the Taylor criterion. However, for the bottom 10% of neurons, as shown in Figure 14(c-d), the activation criterion shows much stronger correlation, with lower activations indicating lower Taylor scores.
Figure 14: Correlation of Taylor criterion with gradient and activation (after layer-wise $\ell_2$ normalization) for all neurons (a-b) and bottom 10% of neurons (c-d) for unpruned VGG after fine-tuning on Birds-200.
| {
"id": "1512.08571"
} |
1611.06216 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. An important challenge is to develop models that can effectively incorporate dialogue context and generate meaningful and diverse responses. In support of this goal, we review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models have better ability to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods for Dialogue | null | cs.CL | 20161118 | 20161118 |
# Generative Deep Neural Networks for Dialogue: A Short Review
Iulian Vlad Serban, Department of Computer Science and Operations Research, University of Montreal
Ryan Lowe, School of Computer Science, McGill University
Laurent Charlin, School of Computer Science, McGill University
Joelle Pineau, School of Computer Science, McGill University
# Abstract
Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. An important challenge is to develop models that can effectively incorporate dialogue context and generate meaningful and diverse responses. In support of this goal, we review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models have better ability to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure.
# 1 Introduction
Researchers have recently started investigating sequence-to-sequence (Seq2Seq) models for dialogue applications. These models typically use neural networks to both represent dialogue histories and to generate or select appropriate responses. Such models are able to leverage large amounts of data in order to learn meaningful natural language representations and generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. Although the Seq2Seq framework is different from the well-established goal-oriented setting [Gorin et al., 1997, Young, 2000, Singh et al., 2002], these models have already been applied to several real-world applications, with Microsoft's system Xiaoice [Markoff and Mozur, 2015] and Google's Smart Reply system [Kannan et al., 2016] as two prominent examples.
Researchers have mainly explored two types of Seq2Seq models. The ï¬rst are generative models, which are usually trained with cross-entropy to generate responses word-by-word conditioned on a dialogue context [Ritter et al., 2011, Vinyals and Le, 2015, Sordoni et al., 2015, Shang et al., 2015, Li et al., 2016a, Serban et al., 2016b]. The second are discriminative models, which are trained to select an appropriate response from a set of candidate responses [Lowe et al., 2015, Bordes and Weston, 2016, Inaba and Takahashi, 2016, Yu et al., 2016]. In a related strand of work, researchers have also investigated applying neural networks to the different components of a standard dialogue system, including natural language understanding, natural language generation, dialogue state tracking and
evaluation [Wen et al., 2016, 2015, Henderson et al., 2013, Mrkšić et al., 2015, Su et al., 2015]. In this paper, we focus on generative models trained with cross-entropy.
One weakness of current generative models is their limited ability to incorporate rich dialogue context and to generate meaningful and diverse responses [Serban et al., 2016b, Li et al., 2016a]. To overcome this challenge, we propose new generative models that are better able to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure. Our experiments demonstrate the importance of the model architecture and the related inductive biases in achieving this improved performance.
(A) Classic LSTM (B) VHRED (C) MrRNN
Figure 1: Probabilistic graphical models for dialogue response generation. Variables w represent natural language utterances. Variables z represent discrete or continuous stochastic latent variables. (A): Classic LSTM model, which uses a shallow generation process. This is problematic because it has no mechanism for incorporating uncertainty and ambiguity and because it forces the model to generate compositional and long-term structure incrementally on a word-by-word basis. (B): VHRED expands the generation process by adding one latent variable for each utterance, which helps incorporate uncertainty and ambiguity in the representations and generate meaningful, diverse responses. (C): MrRNN expands the generation process by adding a sequence of discrete stochastic variables for each utterance, which helps generate responses with high-level compositional structure.
# 2 Models
HRED: The Hierarchical Recurrent Encoder-Decoder model (HRED) [Serban et al., 2016b] is a type of Seq2Seq model that decomposes a dialogue into a two-level hierarchy: a sequence of utterances, each of which is a sequence of words. HRED consists of three recurrent neural networks (RNNs): an encoder RNN, a context RNN and a decoder RNN. Each utterance is encoded into a real-valued vector representation by the encoder RNN. These utterance representations are given as input to the context RNN, which computes a real-valued vector representation summarizing the dialogue at every turn. This summary is given as input to the decoder RNN, which generates a response word-by-word. Unlike the RNN encoders in previous Seq2Seq models, the context RNN is only updated once every dialogue turn and uses the same parameters for each update. This gives HRED an inductive bias that helps incorporate long-term context and learn invariant representations.
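As a rough structural illustration, the following is a hedged sketch of HRED in PyTorch; the class layout, GRU cells, and names are assumptions for exposition, not the authors' implementation:

```python
import torch
import torch.nn as nn

class HRED(nn.Module):
    """Utterance encoder, turn-level context RNN, and context-conditioned decoder."""
    def __init__(self, vocab_size, emb=256, hid=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)   # word-level encoder
        self.context = nn.GRUCell(hid, hid)                 # updated once per turn
        self.decoder = nn.GRU(emb + hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, turns, response):
        # turns: list of (batch, T_i) token tensors; response: (batch, T_r) tokens
        ctx = torch.zeros(turns[0].size(0), self.context.hidden_size)
        for utt in turns:
            _, h = self.encoder(self.embed(utt))   # summarize one utterance
            ctx = self.context(h[-1], ctx)         # context update, shared parameters
        dec_in = torch.cat([self.embed(response),
                            ctx.unsqueeze(1).expand(-1, response.size(1), -1)], -1)
        out, _ = self.decoder(dec_in)
        return self.out(out)                       # word logits at each step
```

The single per-turn context update is the inductive bias highlighted above: the dialogue summary changes only at utterance boundaries, with the same parameters reused at every turn.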
VHRED: The Latent Variable Hierarchical Recurrent Encoder-Decoder model (VHRED) [Serban et al., 2016c] is an HRED model with an additional component: a high-dimensional stochastic latent variable at every dialogue turn. As in HRED, the dialogue context is encoded into a vector representation using encoder and context RNNs. Conditioned on the summary vector at each dialogue turn, VHRED samples a multivariate Gaussian variable, which is given along with the summary vector as input to the decoder RNN. The multivariate Gaussian latent variable allows modelling ambiguity and uncertainty in the dialogue through the latent variable distribution parameters (mean and variance parameters). This provides a useful inductive bias, which helps VHRED encode the dialogue context into a real-valued embedding space even when the dialogue context is ambiguous or uncertain, and it helps VHRED generate more diverse responses.
MrRNN: The Multiresolution RNN (MrRNN) [Serban et al., 2016a] models dialogue as two parallel stochastic sequences: a sequence of high-level coarse tokens (coarse sequences), and a sequence of low-level natural language words (utterances). The coarse sequences follow a latent stochastic process (analogous to hidden Markov models) which conditions the utterances through a hierarchical generation process. The hierarchical generation process first generates the coarse sequence, and conditioned on this generates the natural language utterance.
2
Table 1: Model response examples. The arrows indicate change of turn.
Dialogue Context: Hello I tried to install ubuntu studio but I get W : Failed to fetch <url> 404 Not Found when apt-get update → the ppa you added is not online
Model Responses: MrRNN Act.-Ent.: try a different mirror; MrRNN Noun: something; VHRED: but I dont know how to add it; HRED: how do I do that ?

Dialogue Context: hi community.. is there any difference between " /.bashrc " " /.bashrc. local " → yes, they're different files :-) the default /.bashrc sources /.bashrc. local → sorry I could undersatnd... If I write on terminal " gedit /.bashrc " it opens a file but how I open " /.bashrc. local " → " gedit /.bashrc. local "... → but it just open blank file nothing inside it
Model Responses: MrRNN Act.-Ent.: open up gedit /.bashrc, there is a file called " something "; MrRNN Noun: the file is empty..; VHRED: it's in /.bashrc or /.bashrc; HRED: what are you trying to do ?

Dialogue Context: Hey guys what do you in general use for irc something ubuntu xchat or xchat-gnome ? → without -gnome. that is just cut down → you mean drop xchat-gnome and go with xchat ?
Model Responses: MrRNN Act.-Ent.: im using xchat right now; MrRNN Noun: what is xchat-gnome ?; VHRED: correct; HRED: yes

Dialogue Context: when setting up rules with iptables command only writes changes this file " etciptables. rules "? i ask this so i can backup before messing anything → sudo iptables-save something . dat to backup your rules restore with sudo iptables-restore < something . dat
Model Responses: MrRNN Act.-Ent.: I don't see any reason why you need iptables to do that; MrRNN Noun: are you using ubuntu ?; VHRED: thx; HRED: thanks
In our experiments, the coarse sequences are defined as either noun sequences or activity-entity pairs (predicate-argument pairs) extracted from the natural language utterances. The coarse sequences and utterances are modelled by two separate HRED models. The hierarchical generation provides an important inductive bias, because it helps MrRNN model high-level, compositional structure and generate meaningful and on-topic responses.
# 3 Experiments
We apply our generative models to dialogue response generation on the Ubuntu Dialogue Cor- pus [Lowe et al., 2015]. For each example, given a dialogue context, the model must generate an appropriate response. We also present results on Twitter in the Appendix. This task has been studied extensively in the recent literature [Ritter et al., 2011, Sordoni et al., 2015, Li et al., 2016a].
Corpus: The Ubuntu Dialogue Corpus consists of about half a million dialogues extracted from the #Ubuntu Internet Relayed Chat (IRC) channel. Users entering this chat channel usually have a speciï¬c technical problem. Typically, users ï¬rst describe their problem, and other users try to help them resolve it. The technical problems range from software-related and hardware-related issues (e.g. installing packages, ï¬xing broken drivers) to informational needs (e.g. ï¬nding software).
Evaluation: We carry out an in-lab human study to evaluate the model responses. We recruit 5 human evaluators. We show each evaluator between 30 and 40 dialogue contexts with the ground truth response, and 4 candidate model responses. For each example, we ask the evaluators to compare the candidate responses to the ground truth response and dialogue context, and rate them for ï¬uency and relevancy on a scale 0â4, where 0 means incomprehensible or no relevancy and 4 means ï¬awless English or all relevant. In addition to the human evaluation, we also evaluate dialogue responses w.r.t. the activity-entity metrics proposed by Serban et al. [2016a]. These metrics measure whether the model response contains the same activities (e.g. download, install) and entities (e.g. ubuntu, ï¬refox) as the ground truth responses. Models that generate responses with the same activities and entities as the ground truth responsesâincluding expert responses, which often lead to solving the userâs problemâare given higher scores. Sample responses from each model are shown in Table 1.
Table 2: Ubuntu evaluation using F1 metrics w.r.t. activities and entities (mean scores ± 90% confidence intervals), and human fluency and human relevancy scores given on a scale 0-4 (∗ indicates scores significantly different from baseline models at 90% confidence)
| Model | F1 Activity | F1 Entity | Human Fluency | Human Relevancy |
|---|---|---|---|---|
| LSTM | 1.18 ±0.18 | 0.87 ±0.15 | - | - |
| HRED | 4.34 ±0.34 | 2.22 ±0.25 | 2.98 | 1.01 |
| VHRED | 4.63 ±0.34 | 2.53 ±0.26 | - | - |
| MrRNN Noun | 4.04 ±0.33 | 6.31 ±0.42 | 3.48∗ | 1.32∗ |
| MrRNN Act.-Ent. | 11.43 ±0.54 | 3.72 ±0.33 | 3.42∗ | 1.04 |
Results: The results are given in Table 2. The MrRNNs perform substantially better than the other models w.r.t. both the human evaluation study and the evaluation metrics based on activities and
entities. MrRNN with noun representations obtains an F1 entity score of 6.31, while all other models obtain less than half that (F1 entity scores between 0.87 and 2.53), and human evaluators consistently rate its fluency and relevancy significantly higher than those of all the baseline models. MrRNN with activity representations obtains an F1 activity score of 11.43, while all other models obtain less than half that (F1 activity scores between 1.18 and 4.63), and it performs substantially better than the baseline models w.r.t. the F1 entity score. This indicates that the MrRNNs have learned to model high-level, goal-oriented sequential structure in the Ubuntu domain. After the MrRNNs, VHRED performs better than the HRED and LSTM models w.r.t. both activities and entities. This shows that VHRED generates more appropriate responses, which suggests that the latent variables are useful for modeling uncertainty and ambiguity. Finally, HRED performs better than the LSTM baseline w.r.t. both activities and entities, which underlines the importance of representing longer-term context. These conclusions are confirmed by additional experiments on response generation for the Twitter domain (see Appendix).
# 4 Discussion
We have presented generative models for dialogue response generation. We have proposed architectural modifications with inductive biases towards 1) incorporating longer-term context, 2) handling uncertainty and ambiguity, and 3) generating diverse and on-topic responses with high-level compositional structure. Our experiments show the advantage of the architectural modifications quantitatively through human experiments and qualitatively through manual inspections. These experiments demonstrate the need for further research into generative model architectures. Although we have focused on three generative models, other model architectures such as memory-based models [Bordes and Weston, 2016, Weston et al., 2015] and attention-based models [Shang et al., 2015] have also demonstrated promising results and therefore deserve the attention of future research.
In another line of work, researchers have started proposing alternative training and response selection criteria [Weston, 2016]. Li et al. [2016a] propose ranking candidate responses according to a mutual information criterion, in order to incorporate dialogue context efficiently and retrieve on-topic responses. Li et al. [2016b] further propose a model trained using reinforcement learning to optimize a hand-crafted reward function. Both these models are motivated by the lack of diversity observed in the generative model responses. Similarly, Yu et al. [2016] propose a hybrid model, combining retrieval models, neural networks and hand-crafted rules, trained using reinforcement learning to optimize a hand-crafted reward function. In contrast to these approaches, without combining several models or having to modify the training or response selection criterion, VHRED generates more diverse responses than previous models. Similarly, by optimizing the joint log-likelihood over sequences, MrRNNs generate more appropriate and on-topic responses with compositional structure. Thus, improving generative model architectures has the potential to compensate for, or even remove the need for, hand-crafted reward functions.
At the same time, the models we propose are not necessarily better language models in the sense of compressing dialogue data more efficiently, as measured by word perplexity. Although these models produce responses that are preferred by humans, they often result in higher test set perplexity than traditional LSTM language models. This suggests that maximizing log-likelihood (i.e. minimizing perplexity) is not a sufficient training objective for these models. An important line of future work therefore lies in improving the objective functions for training and response selection, as well as learning directly from interactions with real users.
# References
A. Bordes and J. Weston. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683, 2016.
A. L. Gorin, G. Riccardi, and J. H. Wright. How may I help you? Speech Communication, 23(1):113-127, 1997.
M. Henderson, B. Thomson, and S. Young. Deep neural network approach for the dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 467-471, 2013.
M. Inaba and K. Takahashi. Neural utterance ranking model for conversational dialogue systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 393, 2016.
A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukács, M. Ganea, P. Young, et al. Smart reply: Automated response suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), volume 36, pages 495â503, 2016.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. In NAACL, 2016a.
J. Li, W. Monroe, A. Ritter, and D. Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016b.
R. Lowe, N. Pow, I. Serban, and J. Pineau. The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. In Proc. of SIGDIAL-2015, 2015.
J. Markoff and P. Mozur. For sympathetic ear, more Chinese turn to smartphone program. NY Times, 2015.
N. Mrkšić, D. Ó Séaghdha, B. Thomson, M. Gašić, P.-H. Su, D. Vandyke, T.-H. Wen, and S. Young. Multi-domain dialog state tracking using recurrent neural networks. In HLT-NAACL, pages 120-129, 2015.
A. Ritter, C. Cherry, and W. B. Dolan. Data-driven response generation in social media. In EMNLP, 2011.
I. V. Serban, T. Klinger, G. Tesauro, K. Talamadupula, B. Zhou, Y. Bengio, and A. Courville. Multiresolution recurrent neural networks: An application to dialogue response generation. arXiv preprint arXiv:1606.00776, 2016a.
I. V. Serban, A. Sordoni, Y. Bengio, A. C. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776â3784, 2016b.
I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069, 2016c.
L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. In ACL-IJCNLP, pages 1577â1586, 2015.
S. Singh, D. Litman, M. Kearns, and M. Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. JAIR, 16:105-133, 2002.
A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2015), 2015.
P.-H. Su, D. Vandyke, M. Gasic, D. Kim, N. Mrksic, T.-H. Wen, and S. Young. Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. In SIGDIAL, 2015.
O. Vinyals and Q. Le. A neural conversational model. ICML Workshop, 2015.
T.-H. Wen, M. Gasic, N. Mrksic, P.-H. Su, D. Vandyke, and S. Young. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721, Lisbon, Portugal, September 2015. Association for Computational Linguistics. URL http://aclweb.org/anthology/D15-1199.
T.-H. Wen, M. Gasic, N. Mrksic, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, D. Vandyke, and S. Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv:1604.04562, 2016.
J. Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.
J. Weston, S. Chopra, and A. Bordes. Memory networks. ICLR, 2015.
S. Young. Probabilistic methods in spoken-dialogue systems. Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 358(1769), 2000.
Z. Yu, Z. Xu, A. W. Black, and A. I. Rudnicky. Strategy and policy learning for non-task-oriented conversational systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 404, 2016.
# Appendix
# Twitter Results
Corpus: We experiment on a Twitter Dialogue Corpus [Ritter et al., 2011] containing about one million dialogues. The task is to generate utterances to append to existing Twitter conversations. This task is typically categorized as a non-goal-driven task, because any fluent and on-topic response may be adequate.
Evaluation: We carry out a human study on Amazon Mechanical Turk (AMT). We show human evaluators a dialogue context along with two potential responses: one response generated from each model conditioned on the dialogue context. We ask evaluators to choose the response most appropriate to the dialogue context. If the evaluators are indifferent, they can choose neither response. For each pair of models we conduct two experiments: one where the example contexts contain at least 80 unique tokens (long context), and one where they contain at least 20 (not necessarily unique) tokens (short context). We experiment with the LSTM, HRED and VHRED models, as well as a TF-IDF retrieval-based baseline model. We do not experiment with the MrRNN models, because we do not have appropriate coarse representations for this domain.
Results: The results given in Table 3 show that VHRED is strongly preferred in the majority of the experiments. In particular, VHRED is strongly preferred over the HRED and TF-IDF baseline models for both short and long context settings. VHRED is also strongly preferred over the LSTM baseline model for long contexts, although the LSTM model is preferred over VHRED for short contexts. For short contexts, the LSTM model is often preferred over VHRED because the LSTM model tends to generate very generic responses. Such generic or safe responses are reasonable for a wide range of contexts, but are not useful when applied throughout a dialogue, because the user would lose interest in the conversation.
In conclusion, VHRED performs substantially better overall than competing models, which suggests that the high-dimensional latent variables help model uncertainty and ambiguity in the dialogue context and help generate meaningful responses.
Table 3: Wins, losses and ties (in %) of VHRED against baselines based on the human study (mean preferences ± 90% confidence intervals, where ∗ indicates significant differences at 90% confidence)
| Opponent | Wins | Losses | Ties |
| --- | --- | --- | --- |
| Short contexts: | | | |
| VHRED vs LSTM | 32.3 ±2.4 | 42.5 ±2.6∗ | 25.2 ±2.3 |
| VHRED vs HRED | 42.0 ±2.8∗ | 31.9 ±2.6 | 26.2 ±2.5 |
| VHRED vs TF-IDF | 51.6 ±3.3∗ | 17.9 ±2.5 | 30.4 ±3.0 |
| Long contexts: | | | |
| VHRED vs LSTM | 41.9 ±2.2∗ | 36.8 ±2.2 | 21.3 ±1.9 |
| VHRED vs HRED | 41.5 ±2.8∗ | 29.4 ±2.6 | 29.1 ±2.6 |
| VHRED vs TF-IDF | 47.9 ±3.4∗ | 11.7 ±2.2 | 40.3 ±3.4 |
| {
"id": "1605.06069"
} |
1611.05763 | Learning to reinforcement learn | In recent years deep reinforcement learning (RL) systems have attained
superhuman performance in a number of challenging task domains. However, a
major limitation of such applications is their demand for massive amounts of
training data. A critical present objective is thus to develop deep RL methods
that can adapt rapidly to new tasks. In the present work we introduce a novel
approach to this challenge, which we refer to as deep meta-reinforcement
learning. Previous work has shown that recurrent networks can support
meta-learning in a fully supervised context. We extend this approach to the RL
setting. What emerges is a system that is trained using one RL algorithm, but
whose recurrent dynamics implement a second, quite separate RL procedure. This
second, learned RL algorithm can differ from the original one in arbitrary
ways. Importantly, because it is learned, it is configured to exploit structure
in the training domain. We unpack these points in a series of seven
proof-of-concept experiments, each of which examines a key aspect of deep
meta-RL. We consider prospects for extending and scaling up the approach, and
also point out some potentially important implications for neuroscience. | http://arxiv.org/pdf/1611.05763 | Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick | cs.LG, cs.AI, stat.ML | 17 pages, 7 figures, 1 table | null | cs.LG | 20161117 | 20170123 |
# LEARNING TO REINFORCEMENT LEARN
JX Wang1, Z Kurth-Nelson1, D Tirumala1, H Soyer1, JZ Leibo1, R Munos1, C Blundell1, D Kumaran1,3, M Botvinick1,2
1DeepMind, London, UK
2Gatsby Computational Neuroscience Unit, UCL, London, UK
3Institute of Cognitive Neuroscience, UCL, London, UK
{wangjane, zebk, dhruvat, soyer, jzl, munos, cblundell, dkumaran, botvinick} @google.com
# ABSTRACT
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
# 1 INTRODUCTION
Recent advances have allowed long-standing methods for reinforcement learning (RL) to be newly extended to such complex and large-scale task environments as Atari (Mnih et al., 2015) and Go (Silver et al., 2016). The key enabling breakthrough has been the development of techniques allowing the stable integration of RL with non-linear function approximation through deep learning (LeCun et al., 2015; Mnih et al., 2015). The resulting deep RL methods are attaining human- and often superhuman-level performance in an expanding list of domains (Jaderberg et al., 2016; Mnih et al., 2015; Silver et al., 2016). However, there are at least two aspects of human performance that they starkly lack. First, deep RL typically requires a massive volume of training data, whereas human learners can attain reasonable performance on any of a wide range of tasks with comparatively little experience. Second, deep RL systems typically specialize on one restricted task domain, whereas human learners can flexibly adapt to changing task conditions. Recent critiques (e.g., Lake et al., 2016) have invoked these differences as posing a direct challenge to current deep RL research.
In the present work, we outline a framework for meeting these challenges, which we refer to as deep meta-reinforcement learning, a label that is intended to both link it with and distinguish it from previous work employing the term "meta-reinforcement learning" (e.g. Schmidhuber et al., 1996; Schweighofer and Doya, 2003, discussed later). The key concept is to use standard deep RL techniques to train a recurrent neural network in such a way that the recurrent network comes to implement its own, free-standing RL procedure. As we shall illustrate, under the right circumstances, the secondary learned RL procedure can display an adaptiveness and sample efficiency that the original RL procedure lacks.
The following sections review previous work employing recurrent neural networks in the context of meta-learning and describe the general approach for extending such methods to the RL setting. We
then present seven proof-of-concept experiments, each of which highlights an important ramification of the deep meta-RL setup by characterizing agent performance in light of this framework. We close with a discussion of key challenges for next-step research, as well as some potential implications for neuroscience.
# 2 METHODS
2.1 BACKGROUND: META-LEARNING IN RECURRENT NEURAL NETWORKS
Flexible, data-efficient learning naturally requires the operation of prior biases. In general terms, such biases can derive from two sources; they can either be engineered into the learning system (as, for example, in convolutional networks), or they can themselves be acquired through learning. The second case has been explored in the machine learning literature under the rubric of meta-learning (Schmidhuber et al., 1996; Thrun and Pratt, 1998).

In one standard setup, the learning agent is confronted with a series of tasks that differ from one another but also share some underlying set of regularities. Meta-learning is then defined as an effect whereby the agent improves its performance in each new task more rapidly, on average, than in past tasks (Thrun and Pratt, 1998). At an architectural level, meta-learning has generally been conceptualized as involving two learning systems: one lower-level system that learns relatively quickly, and which is primarily responsible for adapting to each new task; and a slower higher-level system that works across tasks to tune and improve the lower-level system.
A variety of methods have been pursued to implement this basic meta-learning setup, both within the deep learning community and beyond (Thrun and Pratt, 1998). Of particular relevance here is an approach introduced by Hochreiter and colleagues (Hochreiter et al., 2001), in which a recurrent neural network is trained on a series of interrelated tasks using standard backpropagation. A critical aspect of their setup is that the network receives, on each step within a task, an auxiliary input indicating the target output for the preceding step. For example, in a regression task, on each step the network receives as input an x value for which it is desired to output the corresponding y, but the network also receives an input disclosing the target y value for the preceding step (see Hochreiter et al., 2001; Santoro et al., 2016). In this scenario, a different function is used to generate the data in each training episode, but if the functions are all drawn from a single parametric family, then the system gradually tunes into this consistent structure, converging on accurate outputs more and more rapidly across episodes.
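As a concrete illustration, here is a minimal sketch (assuming a hypothetical family of random linear functions as the task distribution) of the episode construction just described: each episode draws a fresh function from the family, and the network's input at step t includes the target from step t − 1.

```python
import numpy as np

def make_episode(n_steps, rng):
    """One supervised meta-learning episode: y = w*x + b with fresh (w, b)."""
    w, b = rng.normal(), rng.normal()            # episode-specific function
    xs = rng.uniform(-1.0, 1.0, size=n_steps)
    ys = w * xs + b                              # target at each step
    prev_y = np.concatenate([[0.0], ys[:-1]])    # previous target as aux input
    inputs = np.stack([xs, prev_y], axis=1)      # what the RNN sees at step t
    return inputs, ys

episode_inputs, episode_targets = make_episode(20, np.random.default_rng(0))
```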
One interesting aspect of Hochreiter's method is that the process that underlies learning within each new task inheres entirely in the dynamics of the recurrent network, rather than in the backpropagation procedure used to tune that network's weights. Indeed, after an initial training period, the network can improve its performance on new tasks even if the weights are held constant (see also Cotter and Conwell, 1990; Prokhorov et al., 2002; Younger et al., 1999). A second important aspect of the approach is that the learning procedure implemented in the recurrent network is fit to the structure that spans the family of tasks on which the network is trained, embedding biases that allow it to learn efficiently when dealing with tasks from that family.
2.2 DEEP META-RL: DEFINITION AND KEY FEATURES
Importantly, Hochreiter's original work (Hochreiter et al., 2001), as well as its subsequent extensions (Cotter and Conwell, 1990; Prokhorov et al., 2002; Santoro et al., 2016; Younger et al., 1999) only addressed supervised learning (i.e. the auxiliary input provided on each step explicitly indicated the target output on the previous step, and the network was trained using explicit targets). In the present work we consider the implications of applying the same approach in the context of reinforcement learning. Here, the tasks that make up the training series are interrelated RL problems, for example, a series of bandit problems varying only in their parameterization. Rather than presenting target outputs as auxiliary inputs, the agent receives inputs indicating the action output on the previous step and, critically, the quantity of reward resulting from that action. The same reward information is fed in parallel to a deep RL procedure, which tunes the weights of the recurrent network.
It is this setup, as well as its result, that we refer to as deep meta-RL (although from here on, for brevity, we will often simply call it meta-RL, with apologies to authors who have used that term
2
previously). As in the supervised case, when the approach is successful, the dynamics of the recurrent network come to implement a learning algorithm entirely separate from the one used to train the network weights. Once again, after sufficient training, learning can occur within each task even if the weights are held constant. However, here the procedure the recurrent network implements is itself a full-fledged reinforcement learning algorithm, which negotiates the exploration-exploitation tradeoff and improves the agent's policy based on reward outcomes. A key point, which we will emphasize in what follows, is that this learned RL procedure can differ starkly from the algorithm used to train the network's weights. In particular, its policy update procedure (including features such as the effective learning rate of that procedure) can differ dramatically from those involved in tuning the network weights, and the learned RL procedure can implement its own approach to exploration. Critically, as in the supervised case, the learned RL procedure will be fit to the statistics spanning the multi-task environment, allowing it to adapt rapidly to new task instances.
2.3 FORMALISM
Let us write as D a distribution (the prior) over Markov Decision Processes (MDPs). We want to demonstrate that meta-RL is able to learn a prior-dependent RL algorithm, in the sense that it will perform well on average on MDPs drawn from D or slight modifications of D. An appropriately structured agent, embedding a recurrent neural network, is trained by interacting with a sequence of MDP environments (also called tasks) through episodes. At the start of a new episode, a new MDP task m ∼ D and an initial state for this task are sampled, and the internal state of the agent (i.e., the pattern of activation over its recurrent units) is reset. The agent then executes its action-selection strategy in this environment for a certain number of discrete time-steps. At each step t an action a_t ∈ A is executed as a function of the whole history H_t = {x_0, a_0, r_0, . . . , x_{t−1}, a_{t−1}, r_{t−1}, x_t} of the agent interacting in the MDP m during the current episode (the set of states {x_s}_{0≤s≤t}, actions {a_s}_{0≤s<t}, and rewards {r_s}_{0≤s<t} observed since the beginning of the episode, when the recurrent unit was reset). The network weights are trained to maximize the sum of observed rewards over all steps and episodes.
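The episode structure just described can be summarized in a short sketch. The `agent` interface below is a hypothetical stand-in (exposing `reset_state` and `step`), and a two-armed Bernoulli bandit stands in for the sampled task m ∼ D; the outer algorithm (here, A2C) would train the network weights on the returned history.

```python
import numpy as np

def run_episode(agent, T, rng):
    p = rng.uniform(size=2)        # sample a new task m ~ D (bandit example)
    agent.reset_state()            # reset the recurrent state, not the weights
    prev_a, prev_r = 0, 0.0
    history = []
    for t in range(T):
        # the policy conditions on the whole history H_t via the hidden state
        a = agent.step(obs=None, prev_action=prev_a, prev_reward=prev_r, t=t)
        r = float(rng.random() < p[a])          # Bernoulli reward
        history.append((a, r))
        prev_a, prev_r = a, r
    return history                 # consumed by the outer RL update (A2C)
```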
After training, the agent's policy is fixed (i.e. the weights are frozen, but the activations are changing due to input from the environment and the hidden state of the recurrent layer), and it is evaluated on a set of MDPs that are drawn either from the same distribution D or slight modifications of that distribution (to test the generalization capacity of the agent). The internal state is reset at the beginning of the evaluation of any new episode. Since the policy learned by the agent is history-dependent (as it makes use of a recurrent network), when exposed to any new MDP environment, it is able to adapt and deploy a strategy that optimizes rewards for that task.
# 3 EXPERIMENTS
In order to evaluate the approach to learning that we have just described, we conducted a series of six proof-of-concept experiments, which we present here along with a seventh experiment originally reported in a related paper (Mirowski et al., 2016). One particular point of interest in these experiments was to see whether meta-RL could be used to learn an adaptive balance between exploration and exploitation, as demanded of any fully-fledged RL procedure. A second and still more important focus was on the question of whether meta-RL can give rise to learning that gains efficiency by capitalizing on task structure.
In order to examine these questions, we performed four experiments focusing on bandit tasks and two additional experiments focusing on Markov decision problems. All of our experiments (as well as the additional experiment we report) employ a common set of methods, with minor implementational variations. In all experiments, the agent architecture centers on a recurrent neural network (LSTM; Hochreiter and Schmidhuber, 1997) feeding into a soft-max output representing discrete actions. As detailed below, the parameters of this network core, as well as some other architectural details, varied across experiments (see Figure 1 and Table 1). However, it is important to emphasize that comparisons between specific architectures are outside the scope of this paper. Our main aim is to illustrate and validate the meta-RL framework in a more general way. To this end, all experiments used the high-level task setup previously described: Both training and testing were organized into fixed-length episodes, each involving a task randomly sampled from a predetermined task distribution, with the LSTM hidden state initialized at the beginning of each episode. Task-specific inputs and
| Parameter | Exps. 1 & 2 | Exp. 3 | Exp. 4 | Exp. 5 | Exp. 6 |
| --- | --- | --- | --- | --- | --- |
| No. threads | 1 | 1 | 1 | 1 | 32 |
| No. LSTMs | 1 | 1 | 1 | 1 | 2 |
| No. hiddens | 48 | 48 | 48 | 48 | 256/64 |
| Steps unrolled | 100 | 5 | 150 | 20 | 100 |
| βe | annealed | annealed | annealed | 0.05 | 0.001 |
| βv | 0.05 | 0.05 | 0.05 | 0.05 | 0.4 |
| Learning rate | tuned | 0.001 | 0.001 | tuned | tuned |
| Discount factor | tuned | 0.8 | 0.8 | tuned | tuned |
| Input | a, r, t | a, r, t | a, r, t | a, r, t, x | a, r, x |
| Observation | - | - | - | 1-hot | RGB (84x84) |
| No. trials/episode | 100 | 5 | 150 | 10 | 10 |
| Episode length | 100 | 5 | 150 | 20 | <3600 |
Table 1: List of hyperparameters. βe = coefficient of entropy regularization loss; in Exps. 1-4, βe is annealed from 1.0 to 0.0 over the course of training. βv = coefficient of value function loss (Mirowski et al., 2016). r = reward, a = last action, t = current time step, x = current observation. Exp. 1: Bandits with independent arms (Section 3.1.1); Exp. 2: Bandits with dependent arms I (Section 3.1.2); Exp. 3: Bandits with dependent arms II (Section 3.1.3); Exp. 4: Restless bandits (Section 3.1.4); Exp. 5: The "Two-Step Task" (Section 3.2.1); Exp. 6: Learning abstract task structure (Section 3.2.2).
action outputs are described in conjunction with individual experiments. In all experiments except where specified, the input included a scalar indicating the reward received on the preceding time-step as well as a one-hot representation of the action sampled on that time-step.
All reinforcement learning was conducted using the Advantage Actor-Critic algorithm, as detailed in Mnih et al. (2016) and Mirowski et al. (2016) (see also Figure 1). Details of training, including the use of entropy regularization and a combined policy and value estimate loss, closely follow the methods detailed in Mirowski et al. (2016), with the exception that our experiments used a single thread unless otherwise noted. For a full listing of parameters refer to Table 1.
[Figure 1: (a) LSTM A2C; (b) LSTM A3C; (c) Stacked-LSTM A3C]
Figure 1: Advantage actor-critic with recurrence. In all architectures, reward and last action are additional inputs to the LSTM. For non-bandit environments, observation is also fed into the LSTM, either as a one-hot or passed through an encoder model [3-layer encoder: two convolutional layers (first layer: 16 8x8 filters applied with stride 4; second layer: 32 4x4 filters with stride 2) followed by a fully connected layer with 256 units and then a ReLU non-linearity; see Mirowski et al. (2016) for details]. For bandit experiments, the current time step is also fed in as input. π = policy; v = value function. A3C is the distributed multi-threaded asynchronous version of the advantage actor-critic algorithm (Mnih et al., 2016); A2C is single threaded. (a) Architecture used in experiments 1-5. (b) Convolutional-LSTM architecture used in experiment 6. (c) Stacked-LSTM architecture with convolutional encoder used in experiments 6 and 7.
3.1 BANDIT PROBLEMS
As an initial setting for evaluating meta-RL, we studied a series of bandit problems. Except for a very limited set of bandit environments, it is intractable to compute the (prior-dependent) Bayesian-optimal strategy. Here we demonstrate that a recurrent system trained on a set of bandit environments drawn i.i.d. from a given distribution of environments produces a bandit algorithm which performs well on problems drawn from that distribution, and to a certain extent generalizes to related distributions. Thus, meta-RL learns a prior-dependent bandit algorithm.
The specific bandit instantiation of the general meta-RL procedure described in Section 2.3 is defined as follows. Let D be a training distribution over bandit environments. The meta-RL system is trained on a sequence of bandit environments through episodes. At the start of a new episode, its LSTM state is reset and a bandit task b ∼ D is sampled. A bandit task is defined as a set of distributions, one for each arm, from which rewards are sampled. The agent plays in this bandit environment for a certain number of trials and is trained to maximize observed rewards. After training, the agent's policy is evaluated on a set of bandit tasks that are drawn from a test distribution D′, which can either be the same as D or a slight modification of it.
We evaluate the resulting performance of the learned bandit algorithm by the cumulative regret, a measure of the loss (in expected rewards) suffered when playing sub-optimal arms. Writing μ_a(b) for the expected reward of arm a in bandit environment b, and μ*(b) = max_a μ_a(b) = μ_{a*(b)}(b) (where a*(b) is one optimal arm) for the optimal expected reward, we define the cumulative regret (in environment b) as R_T(b) = Σ_{t=1}^{T} [μ*(b) − μ_{a_t}(b)], where a_t is the arm (action) chosen at time t. In experiment 4 (Restless bandits; Section 3.1.4), μ* also depends on t. We report the performance (averaged over bandit environments drawn from the test distribution) either in terms of the cumulative regret, E_{b∼D′}[R_T(b)], or in terms of the number of sub-optimal pulls, E_{b∼D′}[Σ_{t=1}^{T} 1{a_t ≠ a*(b)}].
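As a minimal sketch (assuming the true per-arm expected rewards of the evaluated environment are known), the two performance measures can be computed as follows:

```python
import numpy as np

def cumulative_regret(mu, chosen_arms):
    """R_t for t = 1..T, given per-arm expected rewards mu and chosen arms."""
    mu = np.asarray(mu)
    return np.cumsum(mu.max() - mu[np.asarray(chosen_arms)])

def suboptimal_pulls(mu, chosen_arms):
    """Number of steps on which a sub-optimal arm was pulled."""
    mu = np.asarray(mu)
    return int(np.sum(mu[np.asarray(chosen_arms)] < mu.max()))
```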
# 3.1.1 BANDITS WITH INDEPENDENT ARMS
We first consider a simple two-armed bandit task to examine the behavior of meta-RL under conditions where theoretical guarantees exist and general purpose algorithms apply. The arm distributions are independent Bernoulli distributions (rewards are 1 with probability p and 0 with probability 1 − p), where the parameters of each arm (p1 and p2) are sampled independently and uniformly over [0, 1]. We denote by Di the corresponding distribution over these independent bandit environments (where the subscript i stands for independent arms).
At the beginning of each episode, a new bandit task is sampled and held constant for 100 trials. Training lasted for 20,000 episodes. The network is given as input the last reward, last action taken, and the trial number t, subsequently producing the action for the next trial t + 1 (Figure 1). After training, we evaluated on 300 new episodes with the learning rate set to zero (the learned policy is fixed).
Across model instances, we randomly sampled learning rate and discount, following Mnih et al. (2016). For all figures, we plotted the average of the top 5 runs of 100 randomly sampled hyperparameter settings, where the top agents were selected from the first half of the 300 evaluation episodes and performance was plotted for the second half. We measured the cumulative expected regret across the episode, comparing with several algorithms tailored for this independent bandit setting: Gittins indices (Gittins, 1979) (which is Bayesian optimal in the finite-horizon case), UCB (Auer et al., 2002) (which comes with theoretical finite-time regret guarantees), and Thompson sampling (Thompson, 1933) (which is asymptotically optimal in this setting: see Kaufmann et al., 2012b). Model simulations were conducted with the PymaBandits toolbox from (Kaufmann et al., 2012a) and custom Matlab scripts.
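For reference, a minimal sketch of one of these baselines, Thompson sampling for Bernoulli bandits with Beta(1, 1) priors (the other baselines follow the cited implementations):

```python
import numpy as np

def thompson_sampling(p_true, T, rng=None):
    """Play a Bernoulli bandit for T trials; returns the chosen arms."""
    rng = rng or np.random.default_rng(0)
    k = len(p_true)
    alpha, beta = np.ones(k), np.ones(k)           # Beta posterior per arm
    choices = []
    for _ in range(T):
        a = int(np.argmax(rng.beta(alpha, beta)))  # sample, pull the argmax
        r = float(rng.random() < p_true[a])        # Bernoulli reward
        alpha[a] += r
        beta[a] += 1.0 - r
        choices.append(a)
    return choices
```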
As shown in Figure 2a (green line; "Independent"), meta-RL outperforms both Thompson sampling (gray dashed line) and UCB (light gray dashed line), although it performs less well compared to Gittins (black dashed line). To verify the critical importance of providing reward information to the LSTM, we removed this input, leaving all other inputs as before. As expected, performance was at chance levels on all bandit tasks.
[Figure 2 panels: (a) Testing: Independent; (b) sub-optimal arm pulls over trials; (c) Testing: Dependent Uniform; (d) Testing: Easy; (e) Testing: Hard; (f) cumulative regret by training and testing condition]
Figure 2: Performance on independent- and correlated-arm bandits. We report performance as the cumulative expected regret RT for 150 test episodes, averaged over the top 5 hyperparameters for each agent-task configuration, where the top 5 was determined based on performance on a separate set of 150 test episodes. (a) LSTM A2C trained and evaluated on bandits with independent arms (distribution Di; see text), and compared with theoretically optimal models. (b) A single agent playing the medium difficulty task with distribution Dm. Suboptimal arm pulls over trials are depicted for 300 episodes. (c) LSTM A2C trained and evaluated on bandits with dependent uniform arms (distribution Du), (d) trained on medium bandit tasks (Dm) and tested on easy (De), and (e) trained on medium (Dm) and tested on hard task (Dh). (f) Cumulative regret for all possible combinations of training and testing environments (Di, Du, De, Dm, Dh).
3.1.2 BANDITS WITH DEPENDENT ARMS (I)
As we have emphasized, a key property of meta-RL is that it gives rise to a learned RL algorithm that exploits consistent structure in the training distribution. In order to garner empirical evidence for this point, we tested the agent from our first experiment in a more structured bandit task. Specifically, we trained the system on two-arm bandits in which arm reward distributions are correlated. In this setting, unlike the one studied in the previous section, experience with either arm provides information about the other. Standard bandit algorithms, including UCB and Thompson sampling, perform suboptimally in this setting, as they are not designed to exploit such correlations. In some cases it is possible to tailor algorithms for specific arm structures (see for example Lattimore and Munos, 2014), but extensive problem-specific analysis is typically required. Our approach aims to learn a structure-dependent bandit algorithm directly from experience with the target bandit domain.
We consider Bernoulli distributions where the parameters (p1, p2) of the two arms are correlated in the sense that p1 = 1 − p2. We consider several training and test distributions. The uniform means that p1 ∼ U([0, 1]) (uniform distribution over the unit interval). The easy means that p1 ∼ U({0.1, 0.9}) (uniform distribution over those two possible values), and similarly we call medium when p1 ∼ U({0.25, 0.75}) and hard when p1 ∼ U({0.4, 0.6}). We denote by Du, De, Dm, and Dh the corresponding induced distributions over bandit environments. In addition
we also considered the independent uniform distribution (as in the previous section, Di) where p1, p2 ∼ U([0, 1]) independently. Agents were both trained and tested on those five distributions over bandit environments (among which four correspond to correlated distributions: Du, De, Dm and Dh; and one to the independent case: Di). As a validation of the names given to the task distributions (De, Dm, Dh), results show that the easy task is easier to learn than the medium one, which itself is easier than the hard one (Figure 2f). This is compatible with the general notion that the hardness of a bandit problem is inversely proportional to the difference between the expected reward of the optimal and sub-optimal arms. We again note that withholding the reward input to the LSTM resulted in chance performance on even the easiest bandit task, as should be expected.
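A minimal sketch of samplers for these five task distributions (each returning the Bernoulli parameters (p1, p2) of a two-armed bandit):

```python
import numpy as np

def sample_bandit(name, rng):
    """Sample (p1, p2) from D_i, D_u, D_e, D_m or D_h."""
    if name == "independent":                 # D_i: arms drawn independently
        return rng.uniform(), rng.uniform()
    if name == "uniform":                     # D_u
        p1 = rng.uniform()
    elif name == "easy":                      # D_e
        p1 = rng.choice([0.1, 0.9])
    elif name == "medium":                    # D_m
        p1 = rng.choice([0.25, 0.75])
    else:                                     # D_h ("hard")
        p1 = rng.choice([0.4, 0.6])
    return p1, 1.0 - p1                       # dependent arms: p2 = 1 - p1

p1, p2 = sample_bandit("medium", np.random.default_rng(0))
```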
Figure 2f reports the results of all possible training-testing regimes. From observing the cumulative expected regrets, we make the following observations: i) agents trained in structured environments (Du, De, Dm, and Dh) develop prior knowledge that can be used effectively when tested on structured distributions, performing comparably to Gittins (Figure 2c-f), and superiorly compared to agents trained on independent arms (Di) in all structured tasks at test (Figure 2f). This is because an agent trained on independent rewards (Di) has not learned to exploit the reward correlations that are useful in those structured tasks. ii) Conversely, previous training on any structured distribution (Du, De, Dm, or Dh) hurts performance when agents are tested on an independent distribution (Di; Figure 2f). This makes sense, as training on correlated arms may produce a policy that relies on specific reward structure, thereby impacting performance in problems where no such structure exists. iii) Whilst the previous results emphasize the point that meta-RL gives rise to a separate learnt RL algorithm that implements prior-dependent bandit strategies, results also provide evidence that there is some generalization beyond the exact training distribution encountered (Figure 2f). For example, agents trained on the distributions De and Dm perform well when tested over a much wider structured distribution (i.e. Du). Further, our evidence suggests that there is generalization from training on the easier tasks (De, Dm) to testing on the hardest task (Dh; Figure 2e), with similar or even marginally superior performance as compared to training on the hard distribution Dh itself (Figure 2f). In contrast, training on the hard distribution Dh results in relatively poor generalization to other structured distributions (Du, De, Dm), suggesting that training purely on hard instances may result in a learned RL algorithm that is more constrained by prior knowledge, perhaps due to the difficulty of solving the original problem.
# 3.1.3 BANDITS WITH DEPENDENT ARMS (II)
In the previous experiment, the agent could outperform standard bandit algorithms by making use of learned dependencies between arms. However, it could do this while always choosing what it believes to be the highest-paying arm. We next examine a problem where information can be gained by paying a short-term reward cost. Similar problems have been examined before as providing a challenge to standard bandit algorithms (see e.g. Russo and Van Roy, 2014). In contrast, humans and animals make decisions that sacrifice immediate reward for information gain (e.g. Bromberg-Martin and Hikosaka, 2009).
In this experiment, the agent was trained on 11-armed bandits with strong dependencies between arms. All arms had deterministic payouts. Nine "non-target" arms had reward = 1, and one "target" arm had reward = 5. Meanwhile, arm a11 was always "informative", in that the target arm was indexed by 10 times a11's reward (e.g. a reward of 0.2 on a11 indicated that a2 was the target arm). Thus, a11's payouts ranged from 0.1 to 1. In each episode, the index of the target arm was randomly assigned. On the first trial of each episode, the agent could not know which arm was the target, so the informative arm returned expected reward 0.55 and every target arm returned expected reward 1.4. Choosing the informative arm thus meant foregoing immediate reward, but with the compensation of valuable information. Episodes were five steps long. Again, the reward on the previous trial was provided as an additional observation to the agent. To facilitate learning, this was encoded in 1-hot format.
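A minimal sketch of this environment's reward function (0-indexed: arms 0-9 are the candidate targets, arm 10 is the informative arm):

```python
import numpy as np

class InformativeBandit:
    """11-armed bandit: arm 10's payout encodes the target arm's index."""
    def __init__(self, rng):
        self.target = int(rng.integers(10))      # re-drawn every episode
    def pull(self, arm):
        if arm == 10:                            # informative arm
            return (self.target + 1) / 10.0      # e.g. 0.2 -> target is a2
        return 5.0 if arm == self.target else 1.0

env = InformativeBandit(np.random.default_rng(0))
hint = env.pull(10)                              # pay the short-term cost
best = int(round(hint * 10)) - 1                 # decode, then exploit
```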
Results are shown in Figure 3. The agent learned the optimal long-run strategy of sampling the informative arm once, despite the short-term cost, and then using the resulting information to exploit the high-value target arm. Thompson sampling, if supplied the true prior, searched potential target arms and exploited the target if found. UCB performed worse because it sampled every arm once even if the target arm was found early.
[Figure 3: cumulative regret over trials for LSTM A2C, Optimal, Thompson sampling, and UCB]
Figure 3: Learned RL procedure pays immediate cost to gain information to improve long-run returns. In this task, one arm is lower-paying but provides perfect information about which of the other ten arms is highest-paying. The remaining nine arms are intermediate in reward. The index of the informative arm is fixed between episodes, but the index of the highest-paying arm is randomized between episodes. On the first trial, the trained agent samples the informative arm. On subsequent trials, the agent uses the information it gained to deterministically exploit the highest-paying arm. Thompson sampling and UCB are not able to take advantage of the dependencies between arms.
# 3.1.4 RESTLESS BANDITS
In previous experiments we considered stationary problems where the agent's actions yielded information about task parameters that remained fixed throughout each episode. Next, we consider a bandit problem in which reward probabilities change over the course of an episode, with different rates of change (volatilities) in different episodes. To perform well, the agent must not only track the best arm, but also infer the volatility of the episode and adjust its own learning rate accordingly. In such an environment, learning rates should be higher when the environment is changing rapidly, because past information becomes irrelevant more quickly (Behrens et al., 2007; Sutton and Barto, 1998).
We tested whether meta-RL would learn such a flexible RL policy using a two-armed Bernoulli bandit task with reward probabilities p1 and 1 − p1. The value of p1 changed slowly in "low vol" episodes and quickly in "high vol" episodes. The agent had no way of knowing which type of episode it was in, except for its reward history within the episode. Figure 4a shows example "low vol" and "high vol" episodes. Reward magnitude was fixed at 1, and episodes were 100 steps long. UCB and Thompson sampling were again implemented for comparison. The confidence bound term √(χ log n / n_i) in UCB had parameter χ, which was set to 1, selected empirically for good performance on our data set. Thompson sampling's posterior update included knowledge of the Gaussian random walk, but with a fixed volatility for all episodes.
As in the previous experiment, meta-RL achieved lower regret in test than Thompson sampling, UCB, or the Rescorla-Wagner (R-W) learning rule (Figure 4b; Rescorla et al., 1972) with fixed learning rate (α = 0.5). To test whether the agent adjusted its effective learning rate to match environments with different volatility levels, we fit R-W models to the agent's behavior, concatenating episodes into blocks of 10, where each block consisted of only "low vol" or only "high vol" episodes. We considered four different models encompassing different combinations of three parameters: learning rate α, softmax inverse temperature β, and a lapse rate ε to account for unexplained choice variance not related to estimated value (Economides et al., 2015). Model "b" included only β, "ab" included α and β, "be" included β and ε, and "abe" included all three. All parameters were estimated separately on each block of 10 episodes. In models where ε and α were not free, they were fixed to 0 and 0.5, respectively. Model comparison by Bayesian Information Criterion (BIC) indicated that meta-RL's behavior was better described by a model with different learning rates for each block than a model with a fixed learning rate across blocks. As a control, we performed the same model comparison on the behavior produced by the best R-W agent, finding no benefit of allowing different learning rates across episodes (models "abe" and "ab" vs "be" and "b"; Figure 4c-d). In these models, the parameter estimates for meta-RL's behavior were strongly related to the volatility of the episodes, indicating that meta-RL adjusted its learning rate to the volatility of the episode, whereas model fits to the R-W behavior simply recovered the fixed parameters (Figure 4e-f).
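A minimal sketch of the fitted choice model (the full "abe" variant; the likelihood-maximization and BIC machinery around it is omitted):

```python
import numpy as np

def rw_choice_probs(choices, rewards, alpha, beta, eps):
    """Per-trial probability the Rescorla-Wagner model assigns to each choice."""
    q = np.zeros(2)                    # value estimates for the two arms
    probs = []
    for a, r in zip(choices, rewards):
        soft = np.exp(beta * q)
        p = soft / soft.sum()          # softmax over arm values
        p = (1.0 - eps) * p + eps / 2  # lapse rate mixes in random choice
        probs.append(p[a])
        q[a] += alpha * (r - q[a])     # delta-rule value update
    return np.array(probs)             # log-sum gives the data log-likelihood
```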
[Figure 4 panels: (a) example "low vol" and "high vol" episodes for LSTM and R-W, showing true p1, actions and feedback; (b) cumulative regret for LSTM A2C, best R-W, Thompson and UCB; (c, d) model-comparison (BIC) scores for R-W and LSTM; (e, f) estimated learning rates in low- and high-volatility episodes]
Figure 4: Learned RL procedure adapts its own learning rate to the environment. (a) Agents were trained on two-armed bandits with perfectly anti-correlated Bernoulli reward probabilities, p1 and 1 − p1. Two example episodes are shown. p1 changed within an episode (solid black line), with a fast Poisson jump rate in "high vol" episodes and a slow rate in "low vol" episodes. (b) The trained LSTM agent outperformed UCB, Thompson sampling, and a Rescorla-Wagner (R-W) learner with fixed learning rate α=0.5 (selected for being optimal on average in this distribution of environments). (c,d) We fit R-W models by maximum likelihood both to the behavior of R-W (as a control) and to the behavior of LSTM. Models including a learning rate that could vary between episodes ("ab" and "abe") outperformed models without these free parameters on LSTM's data, but not on R-W's data. Addition of a lapse parameter further improved model fits on LSTM's data ("be" and "abe"), suggesting that the algorithm implemented by LSTM is not exactly Rescorla-Wagner. (e,f) The LSTM's, but not R-W's, estimated learning rate was higher in volatile episodes. Small jitter added to visualize overlapping points.
3.2 MARKOV DECISION PROBLEMS
The foregoing experiments focused on bandit tasks in which actions do not affect the task's underlying state. We turn now to MDPs where actions do influence state. We begin with a task derived from the neuroscience literature and then turn to a task, originally studied in the context of animal learning, which requires learning of abstract task structure. As in the previous experiments, our focus is on examining how meta-RL adapts to invariances in task structure. We wrap up by reviewing an experiment recently reported in a related paper (Mirowski et al., 2016), which demonstrates how meta-RL can scale to large-scale navigation tasks with rich visual inputs.
3.2.1 THE "TWO-STEP TASK"
Here we examine meta-RL in a setting that has been widely used in the neuroscience literature to distinguish the contribution of different systems viewed to support decision making (Daw et al., 2005). Specifically, this paradigm, known as the "two-step task" (Daw et al., 2011), was developed to dissociate a model-free system that caches values of actions in states (e.g. TD(1) Q-learning; see Sutton and Barto, 1998) from a model-based system which learns an internal model of the environment and evaluates the value of actions at the time of decision-making through look-ahead planning (Daw et al., 2005). Our interest was in whether meta-RL would give rise to behavior emulating a model-based strategy, despite the use of a model-free algorithm (in this case A2C) to train the system weights.
We used a modified version of the two-step task, designed to bolster the utility of model-based over model-free control (see Kool et al., 2016). The task's structure is diagrammed in Figure 5a. From the first-stage state S1, action a1 leads to second-stage states S2 and S3 with probability 0.75 and 0.25, respectively, while action a2 leads to S2 and S3 with probabilities 0.25 and 0.75. One second-stage state yielded a reward of 1.0 with probability 0.9 (and otherwise zero); the other yielded the same reward with probability 0.1. The identity of the higher-valued state was assigned randomly for each episode. Thus, the expected values for the two first-stage actions were either ra = 0.9 and rb = 0.1, or ra = 0.1 and rb = 0.9. All three states were represented by one-hot vectors, with the transition model held constant across episodes: i.e. only the expected value of the second-stage states changed from episode to episode.
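A minimal sketch of one trial of this task's dynamics (states encoded 0 = S1, 1 = S2, 2 = S3):

```python
import numpy as np

def two_step_trial(action, rewarded_state, rng):
    """One trial: first-stage action -> second-stage state -> Bernoulli reward."""
    common = rng.random() < 0.75              # common vs. rare transition
    if action == 0:                           # a1: commonly reaches S2
        state = 1 if common else 2
    else:                                     # a2: commonly reaches S3
        state = 2 if common else 1
    p_reward = 0.9 if state == rewarded_state else 0.1
    reward = float(rng.random() < p_reward)
    return state, common, reward
```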
We applied the conventional analysis used in the neuroscience literature to dissociate model-free from model-based control (Daw et al., 2011). This focuses on the "stay probability", that is, the probability with which the first-stage action taken at trial t is repeated at trial t + 1 following a second-stage reward at trial t, as a function of whether trial t involved a common transition (e.g. action a1 at state S1 led to S2) or a rare transition (e.g. action a1 at state S1 led to S3). Under the standard interpretation (see Daw et al., 2011), model-free control, à la TD(1), predicts that there should be a main effect of reward: first-stage actions will tend to be repeated if followed by reward, regardless of transition type, and such actions will tend not to be repeated (choice switch) if followed by non-reward (Figure 5b). In contrast, model-based control predicts an interaction between the reward and transition type, reflecting a more goal-directed strategy, which takes the transition structure into account. Intuitively, if you receive a second-stage reward (e.g. at S2) following a rare transition (i.e. having taken action a2 at state S1), then to maximize your chances of getting to this reward on the next trial based on your knowledge of the transition structure, the optimal first-stage action is a1 (i.e. switch).
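A minimal sketch of this stay-probability analysis over a sequence of trials (using the outputs of the trial simulator above):

```python
import numpy as np

def stay_probabilities(actions, rewards, commons):
    """Stay probability split by previous reward x previous transition type."""
    actions = np.asarray(actions)
    rewards = np.asarray(rewards)
    commons = np.asarray(commons)
    stay = actions[1:] == actions[:-1]         # repeated first-stage action?
    table = {}
    for rew in (1.0, 0.0):
        for com in (True, False):
            mask = (rewards[:-1] == rew) & (commons[:-1] == com)
            key = ("rewarded" if rew else "unrewarded",
                   "common" if com else "rare")
            table[key] = float(stay[mask].mean()) if mask.any() else float("nan")
    return table
```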
The results of the stay-probability analysis performed on the agent's choices show a pattern conventionally interpreted as implying the operation of model-based control (Figure 5c). As in previous experiments, when reward information was withheld at the level of network input, performance was at chance levels.
If interpreted following standard practice in neuroscience, the behavior of the model in this experiment reflects a surprising effect: training with model-free RL gives rise to behavior reflecting model-based control. We hasten to note that different interpretations of the observed pattern of behavior are available (Akam et al., 2015), a point to which we will return below. However, notwithstanding this caveat, the results of the present experiment provide a further illustration of the point that the learning procedure that emerges from meta-RL can differ starkly from the original RL algorithm used to train the network weights, and takes a form that exploits consistent task structure.
# 3.2.2 LEARNING ABSTRACT TASK STRUCTURE
In the final experiment we conducted, we took a step towards examining the scalability of meta-RL, by studying a task that involves rich visual inputs, longer time horizons and sparse rewards. Additionally, in this experiment we studied a meta-learning task that requires the system to tune into an abstract task structure, in which a series of objects play defined roles which the system must infer.
The task was adapted from a classic study of animal behavior, conducted by Harlow (1949). On each trial in the original task, Harlow presented a monkey with two visually contrasting objects. One of these covered a small well containing a morsel of food; the other covered an empty well. The animal chose freely between the two objects and could retrieve the food reward if present. The stage was then hidden and the left-right positions of the objects were randomly reset. A new trial then began, with the animal again choosing freely. This process continued for a set number of trials using the same two objects. At completion of this set of trials, two entirely new and unfamiliar objects were substituted for the original two, and the process began again. Importantly, within each block of trials, one object was chosen to be consistently rewarded (regardless of its left-right position), with the other being consistently unrewarded. What Harlow (1949) observed was that, after substantial practice, monkeys displayed behavior that reflected an understanding of the task's rules. When two new objects were presented, the monkey's first choice between them was necessarily arbitrary. But after observing the outcome of this first choice, the monkey was at ceiling thereafter, always choosing the rewarded object.
[Figure 5: (a) Two-step task; (b) Model predictions (model-based vs. model-free stay probabilities by last-trial reward and transition type); (c) LSTM A2C with reward input]
Figure 5: Three-state MDP modeled after the "two-step task" from Daw et al. (2011). (a) MDP with 3 states and 2 actions. All trials start in state S1, with transition probabilities after taking actions a1 or a2 depicted in the graph. S2 and S3 result in expected rewards ra and rb (see text). (b) Predictions of choice probabilities given either a model-based strategy or a model-free strategy (Daw et al., 2011). Specifically, model-based strategies take into account transition probabilities and would predict an interaction between the amount of reward received on the last trial and the transition (common or uncommon) observed. (c) Agent displays a perfectly model-based profile when given the reward as input.
We anticipated that meta-RL should give rise to the same pattern of abstract one-shot learning. In order to test this, we adapted Harlow's paradigm into a visual fixation task, as follows. An 84x84 pixel input represented a simulated computer screen (see Figure 6a-c). At the beginning of each trial, this display was blank except for a small central fixation cross (red crosshairs). The agent selected discrete left-right actions which shifted its view approximately 4.4 degrees in the corresponding direction, with a small momentum effect (alternatively, a no-op action could be selected). The completion of a trial required performing two tasks: saccading to the central fixation cross, followed by saccading to the correct image. If the agent held the fixation cross in the center of the field of view (within a tolerance of 3.5 degrees visual angle) for a minimum of four time steps, it received a reward of 0.2. The fixation cross then disappeared and two images, drawn randomly from the ImageNet dataset (Deng et al., 2009) and resized to 34x34, appeared on the left and right side of the display (Figure 6b). The agent's task was then to "select" one of the images by rotating until the center of the image aligned with the center of the visual field of view (within a tolerance of 7 degrees visual angle). Once one of the images was selected, both images disappeared and, after an intertrial interval of 10 time-steps, the fixation cross reappeared, initiating the next trial. Each episode contained a maximum of 10 trials or 3600 steps. Following Mirowski et al. (2016), we implemented an action repeat of 4, meaning that selecting an image took a minimum of three independent decisions (twelve primitive actions) after having completed the fixation. It should be noted, however, that the rotational position of the agent was not limited; that is, 360 degree rotations could occur, while the simulated computer screen only subtended 65 degrees.
Although new ImageNet images were chosen at the beginning of each episode (sampled with replacement from a set of 1000 images), the same images were re-used across all trials within an episode, though in randomly varying left-right placement, similar to the objects in Harlow's experiment. And as in that experiment, one image was arbitrarily chosen to be the "rewarded" image throughout the episode. Selection of this image yielded a reward of 1.0, while the other image yielded a reward of -1.0. During test, the A3C learning rate was set to zero and ImageNet images were drawn from a separate held-out set of 1000, never presented during training.
A grid search was conducted for optimal hyperparameters. At perfect performance, agents can complete one trial per 20-30 steps and achieve a maximum expected reward of 9 per 10 trials. Given
(a) Fixation (b) Image display (c) Right saccade and selection (d) Training performance (e) Robustness over random seeds (f) One-shot learning
Figure 6: Learning abstract task structure in a visually rich 3D environment. a-c) Example of a single trial, beginning with a central fixation, followed by two images with random left-right placement. d) Average performance (measured in average reward per trial) of top 40 out of 100 seeds during training. Maximum expected performance is indicated with black dashed line. e) Performance at episode 100,000 for 100 random seeds, in decreasing order of performance. f) Probability of selecting the rewarded image, as a function of trial number for a single A3C stacked LSTM agent for a range of training durations (episodes per thread, 32 threads).
the nature of the task, which requires one-shot image-reward memory together with maintenance of this information over a relatively long timescale (i.e. over fixation-cross selections and across trials), we assessed the performance of not only a convolutional-LSTM architecture which receives reward and action as additional input (see Figure 1b and Table 1), but also a convolutional-stacked LSTM architecture used in a navigation task discussed below (see Figure 1c).
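The defining architectural feature is that the recurrent core sees the previous action and reward alongside the visual features. A minimal sketch of such a convolutional-LSTM agent follows; the layer sizes and PyTorch framing are our own illustrative choices, not the paper's exact configuration (see Figure 1b and Table 1 for that).

```python
import torch
import torch.nn as nn

class MetaRLCore(nn.Module):
    """Sketch of a convolutional-LSTM agent whose LSTM receives the encoded
    observation together with the previous action (one-hot) and previous
    reward. All sizes are illustrative placeholders."""
    def __init__(self, n_actions, feat_dim=256, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTMCell(feat_dim + n_actions + 1, hidden)
        self.policy = nn.Linear(hidden, n_actions)
        self.value = nn.Linear(hidden, 1)

    def forward(self, obs, prev_action, prev_reward, state):
        x = self.encoder(obs)                                   # [B, feat_dim]
        a = nn.functional.one_hot(prev_action, self.policy.out_features).float()
        x = torch.cat([x, a, prev_reward.unsqueeze(-1)], dim=-1)
        h, c = self.lstm(x, state)
        return self.policy(h), self.value(h).squeeze(-1), (h, c)
```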
Agent performance is illustrated in Figure 6d-f. Whilst the single LSTM agent was relatively successful at solving the task, the stacked-LSTM variant exhibited much better robustness. That is, 43% of random seeds of the best hyperparameter set performed at ceiling (Figure 6e), compared to 26% of the single LSTM.
Like the monkeys in Harlow's experiment (Harlow, 1949), the networks converge on an optimal policy: not only does the agent successfully fixate to begin each trial, but starting on the second trial of each episode it invariably selects the rewarded image, regardless of which image it selected on the first trial (Figure 6f). This is an impressive form of one-shot learning, reflecting an implicit understanding of the task structure: after observing one trial outcome, the agent binds a complex, unfamiliar image to a specific task role.
Further experiments, reported elsewhere (Wang et al., 2017), confirmed that the same recurrent A3C system is also able to solve a substantially more difficult version of the task. In this task, only one image, randomly designated to be either the rewarding item to be selected or the unrewarding item to be avoided, was presented on every trial during an episode, with the other image being novel on every trial.
3.2.3 ONE-SHOT NAVIGATION
The experiments using the Harlow task demonstrate the capacity of meta-RL to operate effectively within a visually rich environment, with relatively long time horizons. Here we consider related experiments recently reported within the navigation domain (Mirowski et al., 2016) (see also Jaderberg et al., 2016), and discuss how these can be recast as examples of meta-RL, attesting to the scalability of this principle to more typical MDP settings that pose challenging RL problems due to dynamically changing sparse rewards.
(a) Labyrinth I-maze (b) Illustrative episode (c) Performance (d) Value function
Figure 7: a) View of the I-maze showing the goal object in one of the 4 alcoves. b) Following initial exploration (light trajectories), the agent repeatedly goes to the goal (blue trajectories). c) Performance of stacked LSTM (termed "Nav A3C") and feedforward ("FF A3C") architectures, per episode (goal = 10 points) averaged across top 5 hyperparameters. d) Following initial goal discovery (goal hits marked in red), the value function rises well in advance of the agent seeing the goal, which is hidden in an alcove. Figure used with permission from Mirowski et al. (2016).
Specifically, we consider a setting where the environment layout is fixed but the goal changes location randomly each episode (Figure 7; Mirowski et al., 2016). Although the layout is relatively simple, the Labyrinth environment (see Mirowski et al., 2016 for details) is richer and more finely discretized (cf. VizDoom), resulting in long time horizons; a trained agent takes approximately 100 steps (10 seconds) to reach the goal for the first time in a given episode. Results show that a stacked LSTM architecture (Figure 1c), which receives reward and action as additional inputs, equivalent to that used in our Harlow experiment, achieves near-optimal behavior, showing one-shot memory for the goal location after an initial exploratory period, followed by repeated exploitation (see Figure 7c). This is evidenced by a substantial decrease in latency to reach the goal for the first time (~100 timesteps) compared to subsequent visits (~30 timesteps). Notably, a feedforward network (see Figure 7c), which receives only a single image as observation, is unable to solve the task (i.e. shows no decrease in latency between successive goal rewards). Whilst not interpreted as such in Mirowski et al. (2016), this provides a clear demonstration of the effectiveness of meta-RL: a separate RL algorithm with the capability of one-shot learning emerges through training with a fixed and more incremental RL algorithm (i.e. policy gradient). Meta-RL can be viewed as allowing the agent to infer the optimal value function following initial exploration (see Figure 7d), with the additional LSTM providing information about the currently relevant goal location to the LSTM that outputs the policy over the extended timeframe of the episode. Taken together, meta-RL allows a base model-free RL algorithm to solve a challenging RL problem that might otherwise require fundamentally different approaches (e.g. based on successor representations or fully model-based RL).
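For concreteness, a sketch of a stacked recurrent core in this spirit is given below; the precise wiring used by Mirowski et al. (2016) differs in detail, so treat this as one plausible arrangement in which the second LSTM receives the previous reward and action signals.

```python
import torch
import torch.nn as nn

class StackedLSTMCore(nn.Module):
    """Sketch of a two-LSTM ('stacked') core in the spirit of Figure 1c /
    Nav A3C: the second LSTM sees the first LSTM's output together with the
    previous action (one-hot) and reward, and drives policy and value."""
    def __init__(self, n_actions, feat_dim=256, hidden=256):
        super().__init__()
        self.n_actions = n_actions
        self.lstm1 = nn.LSTMCell(feat_dim, hidden)
        self.lstm2 = nn.LSTMCell(hidden + n_actions + 1, hidden)
        self.policy = nn.Linear(hidden, n_actions)
        self.value = nn.Linear(hidden, 1)

    def forward(self, feat, prev_action, prev_reward, states):
        (h1, c1), (h2, c2) = states
        h1, c1 = self.lstm1(feat, (h1, c1))
        a = nn.functional.one_hot(prev_action, self.n_actions).float()
        x = torch.cat([h1, a, prev_reward.unsqueeze(-1)], dim=-1)
        h2, c2 = self.lstm2(x, (h2, c2))
        return self.policy(h2), self.value(h2).squeeze(-1), ((h1, c1), (h2, c2))
```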
# 4 RELATED WORK
We have already touched on the relationship between deep meta-RL and pioneering work by Hochreiter et al. (2001) using recurrent networks to perform meta-learning in the setting of full supervision
(see also Cotter and Conwell, 1990; Prokhorov et al., 2002; Younger et al., 1999). That approach was recently extended in Santoro et al. (2016), which demonstrated the utility of leveraging an external memory structure. The idea of crossing meta-learning with reinforcement learning has been previously discussed by Schmidhuber et al. (1996). That work, which appears to have introduced the term "meta-RL," differs from ours in that it did not involve a neural network implementation. More recently, however, there has been a surge of interest in using neural networks to learn optimization procedures, using a range of innovative meta-learning techniques (Andrychowicz et al., 2016; Chen et al., 2016; Li and Malik, 2016; Zoph and Le, 2016). Recent work by Chen et al. (2016) is particularly close in spirit to the work we have presented here, and can be viewed as treating the case of "infinite bandits" using a meta-learning strategy broadly analogous to the one we have pursued.
The present research also bears a close relationship with a different body of recent work that has not been framed in terms of meta-learning. A number of studies have used deep RL to train recurrent neural networks on navigation tasks, where the structure of the task (e.g., goal location or maze configuration) varies across episodes (Jaderberg et al., 2016; Mirowski et al., 2016). The final experiment that we presented above, drawn from Mirowski et al. (2016), is one example. To the extent that such experiments involve the key ingredients of deep meta-RL (a neural network with memory, trained through RL on a series of interrelated tasks), they are almost certain to involve the kind of meta-learning we have described in the present work. This related work provides an indication that meta-RL can be fruitfully applied to larger scale problems than the ones we have studied in our own experiments. Importantly, it indicates that a key ingredient in scaling the approach may be to incorporate memory mechanisms beyond those inherent in unstructured recurrent neural networks (see Graves et al., 2016; Mirowski et al., 2016; Santoro et al., 2016; Weston et al., 2014). Our work, for its part, suggests that there is untapped potential in deep recurrent RL agents to meta-learn quite abstract aspects of task structure, and to discover strategies that exploit such structure toward rapid, flexible adaptation.
During completion of the present research, closely related work was reported by Duan et al. (2016). Like us, Duan and colleagues use deep RL to train a recurrent network on a series of interrelated tasks, with the result that the network dynamics learn a second RL procedure which operates on a faster time-scale than the original algorithm. They compare the performance of these learned procedures against conventional RL algorithms in a number of domains, including bandits and navigation. An important difference between this parallel work and our own is the former's primary focus on relatively unstructured task distributions (e.g., uniformly distributed bandit problems and random MDPs); our main interest, in contrast, has been in structured task distributions (e.g., dependent bandits and the task introduced by Harlow, 1949), because it is in this setting where the system can learn a biased, and therefore efficient, RL procedure that exploits regular task structure. The two perspectives are, in this regard, nicely complementary.
# 5 CONCLUSION
A current challenge in artificial intelligence is to design agents that can adapt rapidly to new tasks by leveraging knowledge acquired through previous experience with related activities. In the present work we have reported initial explorations of what we believe is one promising avenue toward this goal. Deep meta-RL involves a combination of three ingredients: (1) use of a deep RL algorithm to train a recurrent neural network, (2) a training set that includes a series of interrelated tasks, (3) network input that includes the action selected and reward received in the previous time interval. The key result, which emerges naturally from the setup rather than being specially engineered, is that the recurrent network dynamics learn to implement a second RL procedure, independent from and potentially very different from the algorithm used to train the network weights. Critically, this learned RL algorithm is tuned to the shared structure of the training tasks. In this sense, the learned algorithm builds in domain-appropriate biases, which can allow it to operate with greater efficiency than a general-purpose algorithm. This bias effect was particularly evident in the results of our experiments involving dependent bandits (sections 3.1.2 and 3.1.3), where the system learned to take advantage of the task's covariance structure; and in our study of Harlow's animal learning task (section 3.2.2), where the recurrent network learned to exploit the task's structure in order to display one-shot learning with complex novel stimuli.
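A minimal sketch of this training recipe, with all function and method names hypothetical, makes the division of labour explicit: the outer loop adjusts weights slowly, while within an episode only the recurrent activations change.

```python
def train_meta_rl(task_distribution, agent, rl_update, num_episodes):
    """Sketch of the deep meta-RL recipe summarised above (hypothetical API):
    (1) an outer RL algorithm trains the weights, (2) tasks are drawn from a
    distribution of interrelated tasks, (3) the recurrent agent receives its
    last action and reward as input, so its activations can implement a fast,
    task-adapted inner RL procedure."""
    for _ in range(num_episodes):
        task = task_distribution.sample()      # e.g. a new bandit instance
        state = agent.initial_state()          # reset recurrent activations
        obs, prev_a, prev_r, traj = task.reset(), 0, 0.0, []
        done = False
        while not done:
            a, state = agent.act(obs, prev_a, prev_r, state)
            obs, r, done = task.step(a)
            traj.append((obs, a, r))
            prev_a, prev_r = a, r
        rl_update(agent, traj)                 # slow, outer-loop learning only
    return agent
```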
One of our experiments (section 3.2.1) illustrated the point that a system trained using a model-free RL algorithm can develop behavior that emulates model-based control. A few further comments on this result are warranted. As noted in our presentation of the simulation results, the pattern of choice behavior displayed by the network has been considered in the cognitive and neuroscience literatures as reflecting model-based control or tree search. However, as has been remarked in very recent work, the same pattern can arise from a model-free system with an appropriate state representation (Akam et al., 2015). Indeed, we suspect this may be how our network in fact operates. However, other findings suggest that a more explicitly model-based control mechanism can emerge when a similar system is trained on a more diverse set of tasks. In particular, Ilin et al. (2007) showed that recurrent networks trained on random mazes can approximate dynamic programming procedures (see also Silver et al., 2017; Tamar et al., 2016). At the same time, as we have stressed, we consider it an important aspect of deep meta-RL that it yields a learned RL algorithm that capitalizes on invariances in task structure. As a result, when faced with widely varying but still structured environments, deep meta-RL seems likely to generate RL procedures that occupy a grey area between model-free and model-based RL.
The two-step decision problem studied in Section 3.2.1 was derived from neuroscience, and we believe deep meta-RL may have important implications in that arena (Wang et al., 2017). The notion of meta-RL has been discussed previously in neuroscience but only in a narrow sense, according to which meta-learning adjusts scalar hyperparameters such as the learning rate or softmax inverse temperature (Khamassi et al., 2011; 2013; Kobayashi et al., 2009; Lee and Wang, 2009; Schweighofer and Doya, 2003; Soltani et al., 2006). In recent work (Wang et al., 2017) we have shown that deep meta-RL can account for a wider range of experimental observations, providing an integrative framework for understanding the respective roles of dopamine and the prefrontal cortex in biological reinforcement learning.
ACKNOWLEDGEMENTS
We would like to thank the following colleagues for useful discussion and feedback: Nando de Freitas, David Silver, Koray Kavukcuoglu, Daan Wierstra, Demis Hassabis, Matt Hoffman, Piotr Mirowski, Andrea Banino, Sam Ritter, Neil Rabinowitz, Peter Dayan, Peter Battaglia, Alex Lerchner, Tim Lillicrap and Greg Wayne.
# REFERENCES
Thomas Akam, Rui Costa, and Peter Dayan. Simple plans or sophisticated habits? State, transition and learning interactions in the two-step task. PLoS Computational Biology, 11(12):e1004648, 2015.

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.

Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.

Timothy EJ Behrens, Mark W Woolrich, Mark E Walton, and Matthew FS Rushworth. Learning the value of information in an uncertain world. Nature Neuroscience, 10(9):1214–1221, 2007.

Ethan S Bromberg-Martin and Okihide Hikosaka. Midbrain dopamine neurons signal preference for advance information about upcoming rewards. Neuron, 63(1):119–126, 2009.

Yutian Chen, Matthew W Hoffman, Sergio Gomez, Misha Denil, Timothy P Lillicrap, and Nando de Freitas. Learning to learn for global optimization of black box functions. arXiv preprint arXiv:1611.03824, 2016.

NE Cotter and PR Conwell. Fixed-weight networks can learn. In 1990 IJCNN International Joint Conference on Neural Networks, pages 553–559, 1990.

Nathaniel D Daw, Yael Niv, and Peter Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8(12):1704–1711, 2005.

Nathaniel D Daw, Samuel J Gershman, Ben Seymour, Peter Dayan, and Raymond J Dolan. Model-based influences on humans' choices and striatal prediction errors. Neuron, 69(6):1204–1215, 2011.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.

Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016. URL http://arxiv.org/abs/1611.02779.
Marcos Economides, Zeb Kurth-Nelson, Annika Lübbert, Marc Guitart-Masip, and Raymond Dolan. Model-based reasoning in humans becomes automatic with training. PLoS Computational Biology, 11(9):e1004463, 2015.

John C Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B (Methodological), pages 148–177, 1979.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.

Harry F Harlow. The formation of learning sets. Psychological Review, 56(1):51, 1949.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.

Roman Ilin, Robert Kozma, and Paul J Werbos. Efficient learning in cellular simultaneous recurrent neural networks: the case of maze navigation problem. In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages 324–329. IEEE, 2007.

Max Jaderberg, Volodymyr Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. URL http://arxiv.org/abs/1611.05397.

Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On Bayesian upper confidence bounds for bandit problems. In Proc. of Int'l Conf. on Artificial Intelligence and Statistics, AISTATS, 2012a.

Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In Algorithmic Learning Theory, 23rd International Conference, pages 199–213, 2012b.

Mehdi Khamassi, Stéphane Lallée, Pierre Enel, Emmanuel Procyk, and Peter F Dominey. Robot cognitive control with a neurophysiologically inspired reinforcement learning model. Frontiers in Neurorobotics, 5:1, 2011.

Mehdi Khamassi, Pierre Enel, Peter Ford Dominey, and Emmanuel Procyk. Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters. Prog Brain Res, 202:441–464, 2013.

Kunikazu Kobayashi, Hiroyuki Mizoue, Takashi Kuremoto, and Masanao Obayashi. A meta-learning method based on temporal difference error. In International Conference on Neural Information Processing, pages 530–537. Springer, 2009.

Wouter Kool, Fiery A Cushman, and Samuel J Gershman. When does model-based control pay off? PLoS Computational Biology, 12(8):e1005090, 2016.

Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.

Tor Lattimore and Rémi Munos. Bounded regret for finite-armed structured bandits. In Advances in Neural Information Processing Systems 27, pages 550–558, 2014.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

Daeyeol Lee and Xiao-Jing Wang. Mechanisms for stochastic decision making in the primate frontal cortex: Single-neuron recording and circuit modeling. Neuroeconomics: Decision Making and the Brain, pages 481–501, 2009.

Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.

Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016. URL http://arxiv.org/abs/1611.03673.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, et al. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015.

Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proc. of Int'l Conf. on Machine Learning, ICML, 2016.
Danil V Prokhorov, Lee A Feldkamp, and Ivan Yu Tyukin. Adaptive behavior with fixed weights in RNN: an overview. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pages 2018–2023, 2002.

Robert A Rescorla, Allan R Wagner, et al. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. Classical Conditioning II: Current Research and Theory, 2:64–99, 1972.

Dan Russo and Benjamin Van Roy. Learning to optimize via information-directed sampling. In Advances in Neural Information Processing Systems 27, pages 1583–1591, 2014.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1842–1850, 2016.

Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Simple principles of metalearning. Technical report, IDSIA, 1996.

Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5–9, 2003.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, and Thomas Degris. The predictron: End-to-end learning and planning. Submitted to Int'l Conference on Learning Representations, ICLR, 2017.

Alireza Soltani, Daeyeol Lee, and Xiao-Jing Wang. Neural mechanism for stochastic behaviour during a competitive game. Neural Networks, 19(8):1075–1090, 2006.

Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998.

Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. arXiv preprint arXiv:1602.02867v2, 2016.

William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933.

Sebastian Thrun and Lorien Pratt. Learning to learn: Introduction and overview. In Learning to Learn, pages 3–17. Springer, 1998.

Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Joel Leibo, Hubert Soyer, Dharshan Kumaran, and Matthew Botvinick. Meta-reinforcement learning: a bridge between prefrontal and dopaminergic function. In Cosyne Abstracts, 2017.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.

A Steven Younger, Peter R Conwell, and Neil E Cotter. Fixed-weight on-line learning. IEEE Transactions on Neural Networks, 10(2):272–283, 1999.

Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
| { "id": "1611.01578" } |
1611.05397 | Reinforcement Learning with Unsupervised Auxiliary Tasks | Deep reinforcement learning agents have achieved state-of-the-art results by
directly maximising cumulative reward. However, environments contain a much
wider variety of possible training signals. In this paper, we introduce an
agent that also maximises many other pseudo-reward functions simultaneously by
reinforcement learning. All of these tasks share a common representation that,
like unsupervised learning, continues to develop in the absence of extrinsic
rewards. We also introduce a novel mechanism for focusing this representation
upon extrinsic rewards, so that learning can rapidly adapt to the most relevant
aspects of the actual task. Our agent significantly outperforms the previous
state-of-the-art on Atari, averaging 880\% expert human performance, and a
challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks
leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert
human performance on Labyrinth. | http://arxiv.org/pdf/1611.05397 | Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu | cs.LG, cs.NE | null | null | cs.LG | 20161116 | 20161116 | 6 1 0 2
v o N 6 1 ] G L . s c [
1 v 7 9 3 5 0 . 1 1 6 1 : v i X r a
# REINFORCEMENT LEARNING WITH UNSUPERVISED AUXILIARY TASKS
Max Jaderberg*, Volodymyr Mnih*, Wojciech Marian Czarnecki*, Tom Schaul, Joel Z Leibo, David Silver & Koray Kavukcuoglu
DeepMind, London, UK
{jaderberg,vmnih,lejlot,schaul,jzl,davidsilver,korayk}@google.com
# ABSTRACT
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10× and averaging 87% expert human performance on Labyrinth.
Natural and artificial agents live in a stream of sensorimotor data. At each time step $t$, the agent receives observations $o_t$ and executes actions $a_t$. These actions influence the future course of the sensorimotor stream. In this paper we develop agents that learn to predict and control this stream, by solving a host of reinforcement learning problems, each focusing on a distinct feature of the sensorimotor stream. Our hypothesis is that an agent that can flexibly control its future experiences will also be able to achieve any goal with which it is presented, such as maximising its future rewards.
The classic reinforcement learning paradigm focuses on the maximisation of extrinsic reward. However, in many interesting domains, extrinsic rewards are only rarely observed. This raises questions of what and how to learn in their absence. Even if extrinsic rewards are frequent, the sensorimotor stream contains an abundance of other possible learning targets. Traditionally, unsupervised learning attempts to reconstruct these targets, such as the pixels in the current or subsequent frame. It is typically used to accelerate the acquisition of a useful representation. In contrast, our learning objective is to predict and control features of the sensorimotor stream, by treating them as pseudo-rewards for reinforcement learning. Intuitively, this set of tasks is more closely matched with the agent's long-term goals, potentially leading to more useful representations.
Consider a baby that learns to maximise the cumulative amount of red that it observes. To correctly predict the optimal value, the baby must understand how to increase "redness" by various means, including manipulation (bringing a red object closer to the eyes); locomotion (moving in front of a red object); and communication (crying until the parents bring a red object). These behaviours are likely to recur for many other goals that the baby may subsequently encounter. No understanding of these behaviours is required to simply reconstruct the redness of current or subsequent images.
Our architecture uses reinforcement learning to approximate both the optimal policy and optimal value function for many different pseudo-rewards. It also makes other auxiliary predictions that serve to focus the agent on important aspects of the task. These include the long-term goal of predicting cumulative extrinsic reward as well as short-term predictions of extrinsic reward. To learn more efficiently, our agents use an experience replay mechanism to provide additional updates
*Joint first authors. Ordered alphabetically by first name.
(a) Base A3C Agent (b) Pixel Control (c) Reward Prediction (d) Value Function Replay
Figure 1: Overview of the UNREAL agent. (a) The base agent is a CNN-LSTM agent trained on-policy with the A3C loss (Mnih et al., 2016). Observations, rewards, and actions are stored in a small replay buffer which encapsulates a short history of agent experience. This experience is used by auxiliary learning tasks. (b) Pixel Control: auxiliary policies Qaux are trained to maximise change in pixel intensity of different regions of the input. The agent CNN and LSTM are used for this task along with an auxiliary deconvolution network. This auxiliary control task requires the agent to learn how to control the environment. (c) Reward Prediction: given three recent frames, the network must predict the reward that will be obtained in the next unobserved timestep. This task network uses instances of the agent CNN, and is trained on reward-biased sequences to remove the perceptual sparsity of rewards. (d) Value Function Replay: further training of the value function using the agent network is performed to promote faster value iteration. Further visualisation of the agent can be found in https://youtu.be/Uz-zGYrYEjA
to the critics. Just as animals dream about positively or negatively rewarding events more frequently (Schacter et al., 2012), our agents preferentially replay sequences containing rewarding events.
Importantly, both the auxiliary control and auxiliary prediction tasks share the convolutional neural network and LSTM that the base agent uses to act. By using this jointly learned representation, the base agent learns to optimise extrinsic reward much faster and, in many cases, achieves better policies at the end of training.
This paper brings together the state-of-the-art Asynchronous Advantage Actor-Critic (A3C) framework (Mnih et al., 2016), outlined in Section 2, with auxiliary control tasks and auxiliary reward tasks, defined in Section 3.1 and Section 3.2 respectively. These auxiliary tasks do not require any extra supervision or signals from the environment beyond those available to the vanilla A3C agent. The result is our UNsupervised REinforcement and Auxiliary Learning (UNREAL) agent (Section 3.4).
In Section 4 we apply our UNREAL agent to a challenging set of 3D-vision based domains known as the Labyrinth (Mnih et al., 2016), learning solely from the raw RGB pixels of a first-person view. Our agent significantly outperforms the baseline agent using vanilla A3C, even when the baseline was augmented with an unsupervised reconstruction loss, in terms of speed of learning, robustness to hyperparameters, and final performance. The result is an agent which on average achieves 87% of expert human-normalised score, compared to 54% with A3C, and which learns on average 10× faster than A3C. Our UNREAL agent also significantly outperforms the previous state-of-the-art in the Atari domain.
# 1 RELATED WORK
A variety of reinforcement learning architectures have focused on learning temporal abstractions, such as options (Sutton et al., 1999b), with policies that may maximise pseudo-rewards (Konidaris & Barreto, 2009; Silver & Ciosek, 2012). The emphasis here has typically been on the development of temporal abstractions that facilitate high-level learning and planning. In contrast, our agents do not make any direct use of the pseudo-reward maximising policies that they learn (although this is
an interesting direction for future research). Instead, they are used solely as auxiliary objectives for developing a more effective representation.
The Horde architecture (Sutton et al., 2011) also applied reinforcement learning to identify value functions for a multitude of distinct pseudo-rewards. However, this architecture was not used for representation learning; instead each value function was trained separately using distinct weights.
The UVFA architecture (Schaul et al., 2015a) is a factored representation of a continuous set of optimal value functions, combining features of the state with an embedding of the pseudo-reward function. Initial work on UVFAs focused primarily on architectural choices and learning rules for these continuous embeddings. A pre-trained UVFA representation was successfully transferred to novel pseudo-rewards in a simple task.
Similarly, the successor representation (Dayan, 1993; Barreto et al., 2016; Kulkarni et al., 2016) factors a continuous set of expected value functions for a fixed policy, by combining an expectation over features of the state with an embedding of the pseudo-reward function. Successor representations have been used to transfer representations from one pseudo-reward to another (Barreto et al., 2016) or to different scales of reward (Kulkarni et al., 2016).
Another, related line of work involves learning models of the environment (Schmidhuber, 2010; Xie et al., 2015; Oh et al., 2015). Although learning environment models as auxiliary tasks could improve RL agents (e.g. Lin & Mitchell (1992); Li et al. (2015)), this has not yet been shown to work in rich visual environments.
More recently, auxiliary prediction tasks have been studied in 3D reinforcement learning environments. Lample & Chaplot (2016) showed that predicting internal features of the emulator, such as the presence of an enemy on the screen, is beneficial. Mirowski et al. (2016) study auxiliary prediction of depth in the context of navigation.
# 2 BACKGROUND
We assume the standard reinforcement learning setting where an agent interacts with an environment over a number of discrete time steps. At time $t$ the agent receives an observation $o_t$ along with a reward $r_t$ and produces an action $a_t$. The agent's state $s_t$ is a function of its experience up until time $t$, $s_t = f(o_1, r_1, a_1, \ldots, o_t, r_t)$. The $n$-step return $R_{t:t+n}$ at time $t$ is defined as the discounted sum of rewards, $R_{t:t+n} = \sum_{i=1}^{n} \gamma^i r_{t+i}$. The value function is the expected return from state $s$, $V^\pi(s) = \mathbb{E}\left[R_{t:\infty} \mid s_t = s, \pi\right]$, when actions are selected according to a policy $\pi(a|s)$. The action-value function $Q^\pi(s,a) = \mathbb{E}\left[R_{t:\infty} \mid s_t = s, a_t = a, \pi\right]$ is the expected return following action $a$ from state $s$.

Value-based reinforcement learning algorithms, such as Q-learning (Watkins, 1989), or its deep learning instantiations DQN (Mnih et al., 2015) and asynchronous Q-learning (Mnih et al., 2016), approximate the action-value function $Q(s,a;\theta)$ using parameters $\theta$, and then update parameters to minimise the mean-squared error, for example by optimising an $n$-step lookahead loss $\mathcal{L}_Q = \mathbb{E}\left[\left(R_{t:t+n} + \gamma^n \max_{a'} Q(s',a';\theta^-) - Q(s,a;\theta)\right)^2\right]$, where $\theta^-$ are previous parameters and the optimisation is with respect to $\theta$.

Policy gradient algorithms adjust the policy to maximise the expected reward, $\mathcal{L}_\pi = -\mathbb{E}_{s \sim \pi}\left[R_{1:\infty}\right]$, using the gradient $\frac{\partial \mathcal{L}_\pi}{\partial \theta} = -\mathbb{E}_{s \sim \pi}\left[\frac{\partial}{\partial \theta} \log \pi(a|s)\left(Q^\pi(s,a) - V^\pi(s)\right)\right]$ (Sutton et al., 1999a); in practice the true value functions $Q^\pi$ and $V^\pi$ are substituted with approximations. The Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016) constructs an approximation to both the policy $\pi(a|s,\theta)$ and the value function $V(s,\theta)$ using parameters $\theta$. Both policy and value are adjusted towards an $n$-step lookahead value, $R_{t:t+n} + \gamma^n V(s_{t+n+1},\theta)$, using an entropy regularisation penalty, $\mathcal{L}_{A3C} \approx \mathcal{L}_{VR} + \mathcal{L}_\pi - \mathbb{E}_{s \sim \pi}\left[\alpha H(\pi(s,\cdot,\theta))\right]$, where $\mathcal{L}_{VR} = \mathbb{E}_{s \sim \pi}\left[\left(R_{t:t+n} + \gamma^n V(s_{t+n+1},\theta^-) - V(s_t,\theta)\right)^2\right]$. In A3C many instances of the agent interact in parallel with many instances of the environment, which both accelerates and stabilises learning. The A3C agent architecture we build on uses an LSTM to jointly approximate both policy $\pi$ and value function $V$, given the entire history of experience as inputs (see Figure 1 (a)).
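As an illustration of the objective just described, the sketch below computes the A3C loss for one unrolled trajectory; the 0.5 value-loss weight and entropy coefficient are conventional placeholder choices rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def a3c_loss(logits, values, actions, returns, beta=0.01):
    """Sketch of the A3C objective: policy gradient with an advantage
    baseline, n-step value regression, and entropy regularisation.
    `returns` are the n-step bootstrapped returns; shapes are [T, ...]
    over an unrolled trajectory."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    adv = (returns - values).detach()            # no gradient through the baseline
    chosen = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    pi_loss = -(chosen * adv).mean()
    v_loss = F.mse_loss(values, returns.detach())  # the L_VR term
    entropy = -(probs * log_probs).sum(-1).mean()
    return pi_loss + 0.5 * v_loss - beta * entropy
```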
# 3 AUXILIARY TASKS FOR REINFORCEMENT LEARNING
In this section we incorporate auxiliary tasks into the reinforcement learning framework in order to promote faster training, more robust learning, and ultimately higher performance for our agents. Section 3.1 introduces the use of auxiliary control tasks, Section 3.2 describes the addition of reward focussed auxiliary tasks, and Section 3.4 describes the complete UNREAL agent combining these auxiliary tasks.
3.1 AUXILIARY CONTROL TASKS
The auxiliary control tasks we consider are defined as additional pseudo-reward functions in the environment the agent is interacting with. We formally define an auxiliary control task $c$ by a reward function $r^{(c)}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, where $\mathcal{S}$ is the space of possible states and $\mathcal{A}$ is the space of available actions. The underlying state space $\mathcal{S}$ includes both the history of observations and rewards as well as the state of the agent itself, i.e. the activations of the hidden units of the network.

Given a set of auxiliary control tasks $\mathcal{C}$, let $\pi^{(c)}$ be the agent's policy for each auxiliary task $c \in \mathcal{C}$ and let $\pi$ be the agent's policy on the base task. The overall objective is to maximise total performance across all these auxiliary tasks,

$$\arg\max_\theta \; \mathbb{E}_\pi[R_{1:\infty}] + \lambda_c \sum_{c \in \mathcal{C}} \mathbb{E}_{\pi_c}\big[R^{(c)}_{1:\infty}\big], \qquad (1)$$

where $R^{(c)}_{t:t+n} = \sum_{k=1}^{n} \gamma^k r^{(c)}_{t+k}$ is the discounted return for auxiliary reward $r^{(c)}$, and $\theta$ is the set of parameters of $\pi$ and all the $\pi^{(c)}$'s. By sharing some of the parameters of $\pi$ and the $\pi^{(c)}$'s the agent must balance improving its performance with respect to the global reward $r_t$ with improving performance on the auxiliary tasks.

In principle, any reinforcement learning method could be applied to maximise these objectives. However, to efficiently learn to maximise many different pseudo-rewards simultaneously in parallel from a single stream of experience, it is necessary to use off-policy reinforcement learning. We focus on value-based RL methods that approximate the optimal action-values by Q-learning. Specifically, for each control task $c$ we optimise an $n$-step Q-learning loss $\mathcal{L}^{(c)}_Q = \mathbb{E}\left[\left(R_{t:t+n} + \gamma^n \max_{a'} Q^{(c)}(s',a',\theta^-) - Q^{(c)}(s,a,\theta)\right)^2\right]$, as described in Mnih et al. (2016).
While many types of auxiliary reward functions can be defined from these quantities, we focus on two specific types:
- Pixel changes: Changes in the perceptual stream often correspond to important events in an environment. We train agents that learn a separate policy for maximally changing the pixels in each cell of an $n \times n$ non-overlapping grid placed over the input image. We refer to these auxiliary tasks as pixel control (see Section 4 for a complete description, and the sketch after this list for the pseudo-reward computation).
- Network features: Since the policy or value networks of an agent learn to extract task-relevant high-level features of the environment (Mnih et al., 2015; Zahavy et al., 2016; Silver et al., 2016) they can be useful quantities for the agent to learn to control. Hence, the activation of any hidden unit of the agent's neural network can itself be an auxiliary reward. We train agents that learn a separate policy for maximally activating each of the units in a specific hidden layer. We refer to these tasks as feature control.
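As referenced in the first item above, the following sketch shows one way to derive the pixel-control pseudo-rewards from a sequence of frames; the cell size and averaging choices are illustrative assumptions, with the paper's exact scheme given in Section 4.

```python
import numpy as np

def pixel_control_rewards(frames, cell=4):
    """Sketch of pixel-control pseudo-rewards: the average absolute change in
    intensity within each cell of a non-overlapping grid over the observation.
    `frames` is assumed to be a [T+1, H, W, C] uint8 array."""
    prev = frames[:-1].astype(np.float32)
    curr = frames[1:].astype(np.float32)
    diff = np.abs(curr - prev).mean(axis=-1)       # [T, H, W], mean over channels
    t, h, w = diff.shape
    diff = diff[:, :h - h % cell, :w - w % cell]   # crop to a multiple of the cell
    diff = diff.reshape(t, h // cell, cell, w // cell, cell)
    return diff.mean(axis=(2, 4))                  # one pseudo-reward per grid cell
```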
Figure 1 (b) shows an A3C agent architecture augmented with a set of auxiliary pixel control tasks. In this case, the base policy $\pi$ shares both the convolutional visual stream and the LSTM with the auxiliary policies. The output of the auxiliary network head is an $N_{act} \times n \times n$ tensor $Q^{aux}$, where $Q^{aux}(a, i, j)$ represents the network's current estimate of the optimal discounted expected change in cell $(i, j)$ of the input after taking action $a$. We exploit the spatial nature of the auxiliary tasks by using a deconvolutional neural network to produce the auxiliary values $Q^{aux}$.
3.2 AUXILIARY REWARD TASKS
In addition to learning generally about the dynamics of the environment, an agent must learn to maximise the global reward stream. To learn a policy to maximise rewards, an agent requires features
Agent Input nav_maze_all_random_02 samples
Figure 2: The raw RGB frame from the environment is the observation that is given as input to the agent, along with the last action and reward. This observation is shown for a sample of a maze from the nav maze all random 02 level in Labyrinth. The agent must navigate this unseen maze and pick up apples giving +1 reward and reach the goal giving +10 reward, after which it will respawn. Top down views of samples from this maze generator show the variety of mazes procedurally created. A video showing the agent playing Labyrinth levels can be viewed at https://youtu.be/Uz-zGYrYEjA
that recognise states that lead to high reward and value. An agent with a good representation of rewarding states will allow the learning of good value functions, and in turn should allow the easy learning of a policy.
However, in many interesting environments reward is encountered very sparsely, meaning that it can take a long time to train feature extractors adept at recognising states which signify the onset of reward. We want to remove the perceptual sparsity of rewards and rewarding states to aid the training of an agent, but to do so in a way which does not introduce bias to the agent's policy.
To do this, we introduce the auxiliary task of reward prediction: that of predicting the onset of immediate reward given some historical context. This task consists of processing a sequence of consecutive observations, and requiring the agent to predict the reward picked up in the subsequent unseen frame. This is similar to value learning focused on immediate reward (γ = 0).
Unlike learning a value function, which is used to estimate returns and as a baseline while learning a policy, the reward predictor is not used for anything other than shaping the features of the agent. This keeps us free to bias the data distribution, therefore biasing the reward predictor and feature shaping, without biasing the value function or policy.
We train the reward prediction task on sequences $S_\tau = (s_{\tau-k}, s_{\tau-k+1}, \ldots, s_{\tau-1})$ to predict the reward $r_\tau$, and sample $S_\tau$ from the experience of our policy $\pi$ in a skewed manner so as to over-represent rewarding events (presuming rewards are sparse within the environment). Specifically, we sample such that zero rewards and non-zero rewards are equally represented, i.e. the predicted probability of a non-zero reward is $P(r_\tau \neq 0) = 0.5$. The reward prediction is trained to minimise a loss $\mathcal{L}_{RP}$. In our experiments we use a multiclass cross-entropy classification loss across three classes (zero, positive, or negative reward), although a mean-squared error loss is also feasible.
The auxiliary reward predictions may use a different architecture to the agent's main policy. Rather than simply "hanging" the auxiliary predictions off the LSTM, we use a simpler feedforward network that concatenates a stack of states $S_\tau$ after being encoded by the agent's CNN, see Figure 1 (c). The idea is to simplify the temporal aspects of the prediction task in both the future direction (focusing only on immediate reward prediction rather than long-term returns) and past direction (focusing only on immediate predecessor states rather than the complete history); the features discovered in this manner are shared with the primary LSTM (via shared weights in the convolutional encoder) to enable the policy to be learned more efficiently.
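A sketch of such a reward-prediction head is given below, assuming a shared CNN encoder and a three-way classification into negative, zero, and positive reward; layer widths are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardPredictor(nn.Module):
    """Sketch of the reward-prediction auxiliary head (cf. Figure 1c): k
    CNN-encoded frames are concatenated and classified into {negative, zero,
    positive} immediate reward for the next, unseen frame."""
    def __init__(self, encoder, feat_dim=256, k=3):
        super().__init__()
        self.encoder = encoder  # assumed shared with the main agent's CNN
        self.head = nn.Sequential(nn.Linear(k * feat_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 3))

    def loss(self, frames, next_reward):
        # frames: [B, k, C, H, W]; next_reward: [B] reward of the unseen frame
        b, k = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, -1)
        target = torch.sign(next_reward).long() + 1  # -1/0/+1 -> classes 0/1/2
        return F.cross_entropy(self.head(feats), target)
```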
3.3 EXPERIENCE REPLAY
Experience replay has proven to be an effective mechanism for improving both the data efficiency and stability of deep reinforcement learning algorithms (Mnih et al., 2015). The main idea is to store transitions in a replay buffer, and then apply learning updates to sampled transitions from this buffer.
Experience replay provides a natural mechanism for skewing the distribution of reward prediction samples towards rewarding events: we simply split the replay buffer into rewarding and non-rewarding subsets, and replay equally from both subsets. The skewed sampling of transitions from
a replay buffer means that rare rewarding states will be oversampled, and learnt from far more frequently than if we sampled sequences directly from the behaviour policy. This approach can be viewed as a simple form of prioritised replay (Schaul et al., 2015b).
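A sketch of this skewed sampling, with hypothetical class names and no claims about the paper's exact buffer implementation:

```python
import random
from collections import deque

class SkewedReplay:
    """Sketch of skewed replay: sequences ending in zero and non-zero reward
    are kept in separate buffers and sampled with equal probability, so that
    P(r != 0) = 0.5 for the reward predictor."""
    def __init__(self, capacity=2000):
        self.rewarding = deque(maxlen=capacity)
        self.non_rewarding = deque(maxlen=capacity)

    def add(self, sequence, reward):
        (self.rewarding if reward != 0 else self.non_rewarding).append(sequence)

    def sample(self):
        # fall back to the non-empty buffer; assumes at least one stored sequence
        if not self.rewarding:
            return random.choice(self.non_rewarding)
        if not self.non_rewarding:
            return random.choice(self.rewarding)
        buf = self.rewarding if random.random() < 0.5 else self.non_rewarding
        return random.choice(buf)
```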
In addition to reward prediction, we also use the replay buffer to perform value function replay. This amounts to resampling recent historical sequences from the behaviour policy distribution and performing extra value function regression in addition to the on-policy value function regression in A3C. By resampling previous experience, and randomly varying the temporal position of the truncation window over which the n-step return is computed, value function replay performs value iteration and exploits newly discovered features shaped by reward prediction. We do not skew the distribution for this case.
Experience replay is also used to increase the efficiency and stability of the auxiliary control tasks. Q-learning updates are applied to sampled experiences that are drawn from the replay buffer, allowing features to be developed extremely efficiently.
3.4 UNREAL AGENT
The UNREAL algorithm combines the benefits of two separate, state-of-the-art approaches to deep reinforcement learning. The primary policy is trained with A3C (Mnih et al., 2016): it learns from parallel streams of experience to gain efficiency and stability; it is updated online using policy gradient methods; and it uses a recurrent neural network to encode the complete history of experience. This allows the agent to learn effectively in partially observed environments.
The auxiliary tasks are trained on very recent sequences of experience that are stored and randomly sampled; these sequences may be prioritised (in our case according to immediate rewards) (Schaul et al., 2015b); these targets are trained off-policy by Q-learning; and they may use simpler feedforward architectures. This allows the representation to be trained with maximum efficiency.
The UNREAL algorithm optimises a single combined loss function with respect to the joint parameters of the agent, $\theta$, that combines the A3C loss $\mathcal{L}_{A3C}$ together with the auxiliary control loss $\mathcal{L}_{PC}$, auxiliary reward prediction loss $\mathcal{L}_{RP}$ and replayed value loss $\mathcal{L}_{VR}$:

$$\mathcal{L}_{UNREAL}(\theta) = \mathcal{L}_{A3C} + \lambda_{VR} \mathcal{L}_{VR} + \lambda_{PC} \sum_c \mathcal{L}^{(c)}_{PC} + \lambda_{RP} \mathcal{L}_{RP}, \qquad (2)$$

where $\lambda_{VR}$, $\lambda_{PC}$, $\lambda_{RP}$ are weighting terms on the individual loss components.
In practice, the loss is broken down into separate components that are applied either on-policy, directly from experience, or off-policy, on replayed transitions. Specifically, the A3C loss $\mathcal{L}_{A3C}$ is minimised on-policy; while the value function loss $\mathcal{L}_{VR}$ is optimised from replayed data, in addition to the A3C loss (of which it is one component, see Section 2). The auxiliary control loss $\mathcal{L}_{PC}$ is optimised off-policy from replayed data, by $n$-step Q-learning. Finally, the reward loss $\mathcal{L}_{RP}$ is optimised from rebalanced replay data.
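Putting the pieces together, the combined objective of Equation (2) reduces to a weighted sum; the sketch below assumes the individual losses have already been computed and uses placeholder weights.

```python
def unreal_loss(l_a3c, l_vr, l_rp, pixel_control_losses,
                lambda_vr=1.0, lambda_pc=1.0, lambda_rp=1.0):
    """Sketch of Equation (2). The lambda weights are hyperparameters;
    the values here are placeholders, not the paper's settings."""
    return (l_a3c
            + lambda_vr * l_vr
            + lambda_pc * sum(pixel_control_losses)  # one L_PC^(c) per task c
            + lambda_rp * l_rp)
```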
# 4 EXPERIMENTS
In this section we give the results of experiments performed on the 3D environment Labyrinth in Section 4.1 and Atari in Section 4.2.
In all our experiments we used an A3C CNN-LSTM agent as our baseline; the UNREAL agent and its ablated variants added auxiliary outputs and losses to this base agent. The agent is trained on-policy with 20-step returns and the auxiliary tasks are performed every 20 environment steps, corresponding to every update of the base A3C agent. The replay buffer stores the most recent 2k observations, actions, and rewards taken by the base agent. In Labyrinth we use the same set of 17 discrete actions for all games and on Atari the action set is game dependent (between 3 and 18 discrete actions). The full implementation details can be found in Section B.
4.1 LABYRINTH RESULTS
Labyrinth is a first-person 3D game platform extended from OpenArena (contributors, 2005), which is itself based on Quake3 (id software, 1999). Labyrinth is comparable to other first-person 3D game
(Top) Labyrinth Performance, Labyrinth Robustness; (Bottom) Atari Performance, Atari Robustness
Figure 3: An overview of performance averaged across all levels on Labyrinth (Top) and Atari (Bottom). In the ablated versions RP is reward prediction, VR is value function replay, and PC is pixel control, with the UNREAL agent being the combination of all. Left: The mean human-normalised performance over last 100 episodes of the top-3 jobs at every point in training. We achieve an average of 87% human-normalised score, with every element of the agent improving upon the 54% human-normalised score of vanilla A3C. Right: The final human-normalised score of every job in our hyperparameter sweep, sorted by score. On both Labyrinth and Atari, the UNREAL agent increases the robustness to the hyperparameters (namely learning rate and entropy cost).
platforms for AI research like VizDoom (Kempka et al., 2016) or Minecraft (Tessler et al., 2016). However, in comparison, Labyrinth has considerably richer visuals and more realistic physics. Textures in Labyrinth are often dynamic (animated) so as to convey a game world where walls and floors shimmer and pulse, adding significant complexity to the perceptual task. The action space allows for fine-grained pointing in a fully 3D world. Unlike in VizDoom, agents can look up to the sky or down to the ground. Labyrinth also supports continuous motion, unlike the Minecraft platform of Oh et al. (2016), which is a 3D grid world.
We evaluated agent performance on 13 Labyrinth levels that tested a range of different agent abilities. A top-down visualization showing the layout of each level can be found in Figure 7 of the Appendix. A gallery of example images from the first-person perspective of the agent is in Figure 8 of the Appendix. The levels can be divided into four categories:
1. Simple fruit gathering levels with a static map (seekavoid arena 01 and stairway to melon 01). The goal of these levels is to collect apples (small positive reward) and melons (large positive reward) while avoiding lemons (small negative reward).

2. Navigation levels with a static map layout (nav maze static 0{1, 2, 3} and nav maze random goal 0{1, 2, 3}). These levels test the agent's ability to find their way to a goal in a fixed maze that remains the same across episodes. The starting location is random. In this case, agents could encode the structure of the maze in network weights. In the random goal variant, the location of the goal changes in every episode. The optimal policy is to find the goal's location at the start of each episode and then use long-term knowledge of the maze layout to return to it as quickly as possible from any location. The static variant is simpler in that the goal location is always fixed for all episodes and only the agent's starting location changes, so the optimal policy does not require the first step of exploring to find the current goal location.

3. Procedurally-generated navigation levels requiring effective exploration of a new maze generated on-the-fly at the start of each episode (nav maze all random 0{1, 2, 3}). These levels test the agent's ability to effectively explore a totally new environment. The optimal
policy would begin by exploring the maze to rapidly learn its layout and then exploit that knowledge to repeatedly return to the goal as many times as possible before the end of the episode (between 60 and 300 seconds).
4. Laser-tag levels requiring agents to wield laser-like science fiction gadgets to tag bots controlled by the game's in-built AI (lt horse shoe color and lt hallway slope). A reward of 1 is delivered whenever the agent tags a bot by reducing its shield to 0. These levels approximate the default OpenArena/Quake3 gameplay mode. In lt hallway slope there is a sloped arena, requiring the agent to look up and down. In lt horse shoe color, the colors and textures of the bots are randomly generated at the start of each episode. This prevents agents from relying on color for bot detection. These levels test aspects of fine-control (for aiming), planning (to anticipate where bots are likely to move), strategy (to control key areas of the map such as gadget spawn points), and robustness to the substantial visual complexity arising from the large numbers of independently moving objects (gadget projectiles and bots).
4.1.1 RESULTS
We compared the full UNREAL agent to a basic A3C LSTM agent along with several ablated versions of UNREAL with different components turned off. A video of the final agent performance, as well as visualisations of the activations and auxiliary task outputs, can be viewed at https://youtu.be/Uz-zGYrYEjA.
Figure 3 (top left) shows curves of mean human-normalised scores over the 13 Labyrinth levels. Adding each of our proposed auxiliary tasks to an A3C agent substantially improves the performance. Combining different auxiliary tasks leads to further improvements over the individual auxiliary tasks. The UNREAL agent, which combines all three auxiliary tasks, achieves more than twice the final human-normalised mean performance of A3C, increasing from 54% to 87% (45% to 92% for median performance). This includes a human-normalised score of 116% on lt hallway slope and 100% on nav maze random goal 02.
Perhaps of equal importance, aside from final performance on the games, UNREAL is significantly faster at learning and therefore more data efficient, achieving a mean speedup of 10× in the number of steps to reach A3C best performance on nav maze random goal 02. This translates into a drastic improvement in the data efficiency of UNREAL over A3C, requiring less than 10% of the data to reach the final performance of A3C. We can also measure the robustness of our learning algorithms to hyperparameters by measuring the performance over all hyperparameters (namely learning rate and entropy cost). This is shown in Figure 3 (top right): every auxiliary task in our agent improves robustness. A breakdown of the performance of A3C, UNREAL and UNREAL without pixel control on the individual Labyrinth levels is shown in Figure 4.
Unsupervised Reinforcement Learning. In order to better understand the benefits of auxiliary control tasks we compared them to two simple baselines on three Labyrinth levels. The first baseline was A3C augmented with a pixel reconstruction loss, which has been shown to improve performance on 3D environments (Kulkarni et al., 2016). The second baseline was A3C augmented with an input change prediction loss, which can be seen as simply predicting the immediate auxiliary reward instead of learning to control. Finally, we include preliminary results for A3C augmented with the feature control auxiliary task on one of the levels. We retuned the hyperparameters of all methods (including learning rate and the weight placed on the auxiliary loss) for each of the three Labyrinth levels. Figure 5 shows the learning curves for the top 5 hyperparameter settings on three Labyrinth navigation levels. The results show that learning to control pixel changes is indeed better than simply predicting immediate pixel changes, which in turn is better than simply learning to reconstruct the input. In fact, learning to reconstruct only led to faster initial learning and actually made the final scores worse when compared to vanilla A3C. Our hypothesis is that input reconstruction hurts final performance because it puts too much focus on reconstructing irrelevant parts of the visual input instead of on visual cues for rewards, since rewarding objects are only rarely visible. Encouragingly, we saw an improvement from including the feature control auxiliary task. Combining feature control with other auxiliary tasks is a promising future direction.
Per-level breakdown (AUC Performance, Data Efficiency, Top5 Speedup) for UNREAL and A3C+RP+VR across the 13 Labyrinth levels.
Figure 4: A breakdown of the improvement over A3C due to our auxiliary tasks for each level on Labyrinth. The values for A3C+RP+VR (reward prediction and value function replay) and UNREAL (reward prediction, value function replay and pixel control) are normalised by the A3C value. AUC Performance gives the robustness to hyperparameters (area under the robustness curve, Figure 3 Right). Data Efficiency is the area under the mean learning curve for the top-5 jobs, and Top5 Speedup is the speedup for the mean of the top-5 jobs to reach the maximum top-5 mean score set by A3C. Speedup is not defined for stairway to melon as A3C did not learn throughout training.
[Figure 5: learning curves on nav_maze_random_goal_01 and nav_maze_all_random_01 for A3C, A3C + input reconstruction, A3C + input change prediction, A3C + pixel control, and A3C + feature control.]
Figure 5: Comparison of various forms of self-supervised learning on random maze navigation. Adding an input reconstruction loss to the objective leads to faster learning compared to an A3C baseline. Predicting changes in the inputs works better than simple image reconstruction. Learning to control changes leads to the best results.
4.2 ATARI
We applied the UNREAL agent, as well as UNREAL without pixel control, to 57 Atari games from the Arcade Learning Environment (Bellemare et al., 2012). We use the same evaluation protocol as for our Labyrinth experiments, where we evaluate 50 different random hyperparameter settings (learning rate and entropy cost) on each game. The results are shown in the bottom row of Figure 3. The left side shows the average performance curves of the top 3 agents for all three methods; the right half shows sorted average human-normalised scores for each hyperparameter setting. More detailed learning curves for individual games can be found in Figure 6. We see that UNREAL surpasses the current state-of-the-art agents, i.e. A3C and Prioritized Dueling DQN (Wang et al., 2016), across all levels, attaining 880% mean and 250% median performance. Notably, UNREAL is also substantially more robust to hyperparameter settings than A3C.
# 5 CONCLUSION
We have shown how augmenting a deep reinforcement learning agent with auxiliary control and re- ward prediction tasks can drastically improve both data efï¬ciency and robustness to hyperparameter settings. Most notably, our proposed UNREAL architecture more than doubled the previous state- of-the-art results on the challenging set of 3D Labyrinth levels, bringing the average scores to over 87% of human scores. The same UNREAL architecture also signiï¬cantly improved both the learning speed and the robustness of A3C over 57 Atari games.
# ACKNOWLEDGEMENTS
We thank Charles Beattie, Julian Schrittwieser, Marcus Wainwright, and Stig Petersen for environ- ment design and development, and Amir Sadik and Sarah York for expert human game testing. We also thank Joseph Modayil, Andrea Banino, Hubert Soyer, Razvan Pascanu, and Raia Hadsell for many helpful discussions.
# REFERENCES
André Barreto, Rémi Munos, Tom Schaul, and David Silver. Successor features for transfer in reinforcement learning. arXiv preprint arXiv:1606.05312, 2016.

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

OpenArena contributors. The OpenArena manual. 2005. URL http://openarena.wikia.com/wiki/Manual.

Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613-624, 1993.

Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451-2471, 2000.

id Software. Quake3. 1999. URL https://github.com/id-Software/Quake-III-Arena.

Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.

George Konidaris and Andre S Barreto. Skill discovery in continuous reinforcement learning domains using skill chaining. In Advances in Neural Information Processing Systems, pp. 1015-1023, 2009.

Tejas D Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016.

Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. CoRR, abs/1609.05521, 2016.

Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and Ji He. Recurrent reinforcement learning: A hybrid approach. arXiv preprint arXiv:1509.03044, 2015.

Long-Ji Lin and Tom M Mitchell. Memory approaches to reinforcement learning in non-markovian domains. Technical report, Carnegie Mellon University, School of Computer Science, 1992.

Piotr Mirowski, Razvan Pascanu, Fabio Viola, Andrea Banino, Hubert Soyer, Andy Ballard, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. 2016.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 02 2015. URL http://dx.doi.org/10.1038/nature14236.

Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1928-1937, 2016.

Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pp. 2863-2871, 2015.

Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. arXiv preprint arXiv:1605.09128, 2016.

Jing Peng and Ronald J Williams. Incremental multi-step Q-learning. Machine Learning, 22(1-3):283-290, 1996.

Daniel L Schacter, Donna Rose Addis, Demis Hassabis, Victoria C Martin, R Nathan Spreng, and Karl K Szpunar. The future of memory: remembering, imagining, and the brain. Neuron, 76(4):677-694, 2012.

Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1312-1320, 2015a.

Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015b.

Jürgen Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010.

David Silver and Kamil Ciosek. Compositional planning using optimal option models. arXiv preprint arXiv:1206.6473, 2012.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057-1063, 1999a.

Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 1999b.

Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761-768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.

Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in Minecraft. arXiv preprint arXiv:1604.07255, 2016.

Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.

Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.

Christopher Xie, Sachin Patil, Teodor Mihai Moldovan, Sergey Levine, and Pieter Abbeel. Model-based reinforcement learning with parametrized physical models and optimism-driven exploration. CoRR, abs/1509.06824, 2015.

Tom Zahavy, Nir Ben Zrihem, and Shie Mannor. Graying the black box: Understanding DQNs. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
# A ATARI GAMES
Figure 6: Learning curves for three example Atari games. Semi-transparent lines are agents with different seeds and hyperparameters; the bold line is the mean over the population, and the dotted line is the best agent (in terms of final performance).
# B IMPLEMENTATION DETAILS
The input to the agent at each timestep was an 84 × 84 RGB image. All agents processed the input with the convolutional neural network (CNN) originally used for Atari by Mnih et al. (2013). The network consists of two convolutional layers. The first one has 16 8 × 8 filters applied with stride 4, while the second one has 32 4 × 4 filters with stride 2. This is followed by a fully connected layer with 256 units. All three layers are followed by a ReLU non-linearity. All agents used an LSTM with forget gates (Gers et al., 2000) with 256 cells, which take in the CNN-encoded observation concatenated with the previous action taken and current reward. The policy and value function are linear projections of the LSTM output. The agent is trained with 20-step unrolls. The action space of the agent in the environment is game dependent for Atari (between 3 and 18 discrete actions), and 17 discrete actions for Labyrinth. Labyrinth runs at 60 frames-per-second. We use an action repeat of four, meaning that each action is repeated four times, with the agent receiving the final fourth frame as input to the next processing step.
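For concreteness, the following is a minimal PyTorch sketch of the base network described above; the layer sizes come from the text, while the module names and interface are our own assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseAgentNet(nn.Module):
    """CNN-LSTM base network with the layer sizes quoted above (a sketch)."""
    def __init__(self, num_actions):
        super().__init__()
        self.num_actions = num_actions
        # Two convolutional layers: 16 8x8 filters stride 4, then 32 4x4 filters stride 2.
        self.conv1 = nn.Conv2d(3, 16, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2)
        # For an 84x84 RGB input the second conv layer yields a 32x9x9 feature map.
        self.fc = nn.Linear(32 * 9 * 9, 256)
        # LSTM input: CNN features, one-hot previous action, and the reward scalar.
        self.lstm = nn.LSTMCell(256 + num_actions + 1, 256)
        self.policy = nn.Linear(256, num_actions)  # softmax policy logits
        self.value = nn.Linear(256, 1)             # value function head

    def forward(self, obs, prev_action, reward, hidden):
        x = F.relu(self.conv1(obs))                # obs: (B, 3, 84, 84)
        x = F.relu(self.conv2(x))
        x = F.relu(self.fc(x.flatten(1)))
        a = F.one_hot(prev_action, self.num_actions).float()
        inp = torch.cat([x, a, reward.unsqueeze(1)], dim=1)
        h, c = self.lstm(inp, hidden)
        return self.policy(h), self.value(h), (h, c)
```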
For the pixel control auxiliary tasks we trained policies to control the central 80 × 80 crop of the inputs. The cropped region was subdivided into a 20 × 20 grid of non-overlapping 4 × 4 cells. The instantaneous reward in each cell was defined as the average absolute difference from the previous frame, where the average is taken over both pixels and channels in the cell. The output tensor of auxiliary values, Qaux, is produced from the LSTM outputs by a deconvolutional network. The LSTM outputs are first mapped to a 32 × 7 × 7 spatial feature map with a linear layer followed by a ReLU. Deconvolution layers with 1 and Nact filters decode the 7 × 7 map into a value tensor and an advantage tensor respectively. The spatial map is then decoded into Q-values using the dueling parametrization (Wang et al., 2016), producing the Nact × 20 × 20 tensor Qaux. The architecture for feature control was similar. We learned to control the second hidden layer, which is a spatial feature map with size 32 × 9 × 9. Similarly to pixel control, we exploit the spatial structure in the data and use a deconvolutional network to produce Qaux from the LSTM outputs. Further details are included in the supplementary materials.
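A sketch of the per-cell pixel-change reward and the dueling deconvolutional head follows; the deconvolution kernel size and stride are assumptions chosen so that the 7 × 7 map decodes to the 20 × 20 output, and all names are ours:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pixel_control_rewards(frames):
    """Mean absolute pixel change per 4x4 cell of the central 80x80 crop.
    frames: (T, 3, 84, 84); returns (T-1, 20, 20) auxiliary rewards."""
    crop = frames[:, :, 2:82, 2:82]
    diff = (crop[1:] - crop[:-1]).abs().mean(dim=1, keepdim=True)
    return F.avg_pool2d(diff, kernel_size=4).squeeze(1)

class PixelControlHead(nn.Module):
    """Dueling deconvolutional head producing an N_act x 20 x 20 Q_aux tensor.
    Kernel size and stride here are assumptions picked so 7x7 decodes to 20x20."""
    def __init__(self, num_actions):
        super().__init__()
        self.fc = nn.Linear(256, 32 * 7 * 7)  # LSTM output -> 32x7x7 spatial map
        self.deconv_value = nn.ConvTranspose2d(32, 1, kernel_size=8, stride=2)
        self.deconv_advantage = nn.ConvTranspose2d(32, num_actions, kernel_size=8, stride=2)

    def forward(self, lstm_out):
        x = F.relu(self.fc(lstm_out)).view(-1, 32, 7, 7)
        v = self.deconv_value(x)                        # (B, 1, 20, 20)
        adv = self.deconv_advantage(x)                  # (B, N_act, 20, 20)
        return v + adv - adv.mean(dim=1, keepdim=True)  # dueling combination
```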
The reward prediction task is performed on a sequence of three observations, which are fed through three instances of the agent's CNN. The three encoded CNN outputs are concatenated and fed through a fully connected layer of 128 units with ReLU activations, followed by a final linear three-class classifier and softmax. The reward is predicted as one of three classes: positive, negative, or zero, and trained with a task weight λRP = 1. The value function replay is performed on a sequence of length 20 with a task weight λVR = 1.
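A minimal sketch of the reward prediction head, assuming the concatenated CNN features have the dimensionality of the flattened convolutional output above:

```python
import torch.nn as nn
import torch.nn.functional as F

class RewardPredictionHead(nn.Module):
    """Three-class reward-sign classifier over three stacked CNN encodings."""
    def __init__(self, cnn_feat_dim=32 * 9 * 9):  # assumed flattened conv output size
        super().__init__()
        self.hidden = nn.Linear(3 * cnn_feat_dim, 128)
        self.out = nn.Linear(128, 3)  # classes: positive, negative, zero

    def forward(self, feats):  # feats: (B, 3, cnn_feat_dim)
        x = F.relu(self.hidden(feats.flatten(1)))
        return self.out(x)     # logits; trained with cross-entropy, task weight 1
```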
The auxiliary tasks are performed every 20 environment steps, corresponding to every update of the base A3C agent, once the replay buffer has ï¬lled with agent experience. The replay buffer stores the most recent 2k observations, actions, and rewards taken by the base agent.
The agents are optimised over 32 asynchronous threads with shared RMSProp (Mnih et al., 2016). The learning rates are sampled from a log-uniform distribution between 0.0001 and 0.005. The entropy costs are sampled from the log-uniform distribution between 0.0005 and 0.01. Task weight λPC is sampled from a log-uniform distribution between 0.01 and 0.1 for Labyrinth, and between 0.0001 and 0.01 for Atari (since Atari games are not homogeneous in terms of pixel intensity changes, this normalisation factor needs to be fit per domain).
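The log-uniform sampling described here is straightforward to reproduce; a sketch with the ranges quoted above (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_uniform(lo, hi):
    """Sample once from a log-uniform distribution on [lo, hi]."""
    return float(np.exp(rng.uniform(np.log(lo), np.log(hi))))

# Per-run hyperparameters, with the ranges quoted in the text.
learning_rate = log_uniform(1e-4, 5e-3)
entropy_cost = log_uniform(5e-4, 1e-2)
lambda_pc = log_uniform(0.01, 0.1)  # Labyrinth range; 1e-4 to 1e-2 for Atari
```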
# C LABYRINTH LEVELS
[Figure 7 panels: stairway_to_melon (+10 melon, -1 lemon, +1 apple), seekavoid_arena_01 (+1 apple), nav_maze_static_01-03, nav_maze_all_random_01-03, and nav_maze_random_goal_01-03 (+1 apple, +10 goal), lt_horse_shoe_color, and lt_hallway_slope (agent and power-up positions marked).]

Figure 7: Top-down renderings of each Labyrinth level. The nav_maze levels show one example maze layout. In the all_random case, a new maze was randomly generated at the start of each episode.
Figure 8: Example images from the agentâs egocentric viewpoint for each Labyrinth level.
{ "id": "1605.02097" }
1611.02779 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a "fast" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("slow") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the "fast" RL algorithm on the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL$^2$ is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL$^2$ on a vision-based navigation task and show that it scales up to high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110
# RL$^2$: FAST REINFORCEMENT LEARNING VIA SLOW REINFORCEMENT LEARNING
Yan Duan†‡, John Schulman‡, Xi Chen†‡, Peter L. Bartlett†, Ilya Sutskever‡, Pieter Abbeel†‡
† UC Berkeley, Department of Electrical Engineering and Computer Science
‡ OpenAI
{rocky, joschu, peter}@openai.com, peter@berkeley.edu, {ilyasu, pieter}@openai.com
# ABSTRACT
Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a "fast" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("slow") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the "fast" RL algorithm on the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-armed bandit problems and finite MDPs. After RL$^2$ is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL$^2$ on a vision-based navigation task and show that it scales up to high-dimensional problems.
# 1 INTRODUCTION
In recent years, deep reinforcement learning has achieved many impressive results, including playing Atari games from raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015), and acquiring advanced manipulation and locomotion skills (Levine et al., 2016; Lillicrap et al., 2015; Watter et al., 2015; Heess et al., 2015; Schulman et al., 2015; 2016). However, many of the successes come at the expense of high sample complexity. For example, the state-of-the-art Atari results require tens of thousands of episodes of experience (Mnih et al., 2015) per game. To master a game, one would need to spend nearly 40 days playing it with no rest. In contrast, humans and animals are capable of learning a new task in a very small number of trials. Continuing the previous example, the human player in Mnih et al. (2015) only needed 2 hours of experience before mastering a game. We argue that the reason for this sharp contrast is largely due to the lack of a good prior, which results in these deep RL agents needing to rebuild their knowledge about the world from scratch.

Although Bayesian reinforcement learning provides a solid framework for incorporating prior knowledge into the learning process (Strens, 2000; Ghavamzadeh et al., 2015; Kolter & Ng, 2009), exact computation of the Bayesian update is intractable in all but the simplest cases. Thus, practical reinforcement learning algorithms often incorporate a mixture of Bayesian and domain-specific ideas to bring down sample complexity and computational burden. Notable examples include guided policy search with unknown dynamics (Levine & Abbeel, 2014) and PILCO (Deisenroth & Rasmussen, 2011). These methods can learn a task using a few minutes to a few hours of real experience, compared to days or even weeks required by previous methods (Schulman et al., 2015; 2016; Lillicrap et al., 2015). However, these methods tend to make assumptions about the environment (e.g., instrumentation for access to the state at learning time), or become computationally intractable in high-dimensional settings (Wahlström et al., 2015).
Rather than hand-designing domain-specific reinforcement learning algorithms, we take a different approach in this paper: we view the learning process of the agent itself as an objective, which can be optimized using standard reinforcement learning algorithms. The objective is averaged across all possible MDPs according to a specific distribution, which reflects the prior that we would like to distill into the agent. We structure the agent as a recurrent neural network, which receives past rewards, actions, and termination flags as inputs in addition to the normally received observations. Furthermore, its internal state is preserved across episodes, so that it has the capacity to perform learning in its own hidden activations. The learned agent thus also acts as the learning algorithm, and can adapt to the task at hand when deployed.

We evaluate this approach on two sets of classical problems, multi-armed bandits and tabular MDPs. These problems have been extensively studied, and there exist algorithms that achieve asymptotically optimal performance. We demonstrate that our method, named RL$^2$, can achieve performance comparable with these theoretically justified algorithms. Next, we evaluate RL$^2$ on a vision-based navigation task implemented using the ViZDoom environment (Kempka et al., 2016), showing that RL$^2$ can also scale to high-dimensional problems.

# 2 METHOD

2.1 PRELIMINARIES

We define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple $M = (\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \rho_0, \gamma, T)$, in which $\mathcal{S}$ is a state set, $\mathcal{A}$ an action set, $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_+$ a transition probability distribution, $r : \mathcal{S} \times \mathcal{A} \to [-R_{\max}, R_{\max}]$ a bounded reward function, $\rho_0 : \mathcal{S} \to \mathbb{R}_+$ an initial state distribution, $\gamma \in (0, 1]$ a discount factor, and $T$ the horizon. In policy search methods, we typically optimize a stochastic policy $\pi_\theta : \mathcal{S} \times \mathcal{A} \to \mathbb{R}_+$, parametrized by $\theta$. The objective is to maximize its expected discounted return, $\eta(\pi_\theta) = \mathbb{E}_\tau[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)]$, where $\tau = (s_0, a_0, \ldots)$ denotes the whole trajectory, $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi_\theta(a_t | s_t)$, and $s_{t+1} \sim \mathcal{P}(s_{t+1} | s_t, a_t)$.

2.2 FORMULATION

We now describe our formulation, which casts learning an RL algorithm as a reinforcement learning problem, and hence the name RL$^2$. We assume knowledge of a set of MDPs, denoted by $\mathcal{M}$, and a distribution over them: $\rho_{\mathcal{M}} : \mathcal{M} \to \mathbb{R}_+$. We only need to sample from this distribution. We use $n$ to denote the total number of episodes allowed to spend with a specific MDP. We define a trial to be such a series of episodes of interaction with a fixed MDP.

[Figure 1: Procedure of agent-environment interaction. A trial of two episodes is shown; the hidden state $h$ is carried across the episodes of a trial but reset between trials.]

This process of interaction between an agent and the environment is illustrated in Figure 1. Here, each trial happens to consist of two episodes, hence $n = 2$. For each trial, a separate MDP is drawn from $\rho_{\mathcal{M}}$, and for each episode, a fresh $s_0$ is drawn from the initial state distribution specific to the corresponding MDP. Upon receiving an action $a_t$ produced by the agent, the environment computes reward $r_t$, steps forward, and computes the next state $s_{t+1}$. If the episode has terminated, it sets termination flag $d_t$ to 1, which otherwise defaults to 0. Together, the next state $s_{t+1}$, action $a_t$, reward $r_t$, and termination flag $d_t$ are concatenated to form the input to the policy¹, which, conditioned on the hidden state $h_{t+1}$, generates the next hidden state $h_{t+2}$ and action $a_{t+1}$. At the end of an episode, the hidden state of the policy is preserved to the next episode, but not preserved between trials.
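The following minimal sketch (with hypothetical `sample_mdp` and `policy` interfaces, not part of the paper) makes this trial structure concrete: the hidden state persists across the episodes of a trial but is reset between trials.

```python
def run_trial(sample_mdp, policy, n_episodes):
    """One RL^2 trial: n episodes of interaction with a single sampled MDP."""
    env = sample_mdp()               # draw one MDP from rho_M; fixed for the trial
    hidden = policy.initial_state()  # hidden state is reset between trials only
    a, r, d = 0, 0.0, 0              # placeholder inputs for the very first step
    total_reward = 0.0
    for _ in range(n_episodes):
        s = env.reset()              # fresh s_0; the hidden state carries over
        done = False
        while not done:
            a, hidden = policy.step(s, a, r, d, hidden)  # input tuple (s, a, r, d)
            s, r, done = env.step(a)
            d = 1 if done else 0     # termination flag fed back at the next step
            total_reward += r
    return total_reward              # the objective is the return over the trial
```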
The objective under this formulation is to maximize the expected total discounted reward accumulated during a single trial rather than a single episode. Maximizing this objective is equivalent to minimizing the cumulative pseudo-regret (Bubeck & Cesa-Bianchi, 2012). Since the underlying MDP changes across trials, as long as different strategies are required for different MDPs, the agent must act differently according to its belief over which MDP it is currently in. Hence, the agent is forced to integrate all the information it has received, including past actions, rewards, and termination flags, and adapt its strategy continually. Hence, we have set up an end-to-end optimization process, where the agent is encouraged to learn a "fast" reinforcement learning algorithm.

For clarity of exposition, we have defined the "inner" problem (of which the agent sees $n$ episodes per trial) to be an MDP rather than a POMDP. However, the method can also be applied in the partially-observed setting without any conceptual changes. In the partially observed setting, the agent is faced with a sequence of POMDPs, and it receives an observation $o_t$ instead of state $s_t$ at time $t$. The visual navigation experiment in Section 3.3 is actually an instance of this POMDP setting.
2.3 POLICY REPRESENTATION
We represent the policy as a general recurrent neural network. Each timestep, it receives the tuple $(s, a, r, d)$ as input, which is embedded using a function $\phi(s, a, r, d)$ and provided as input to an RNN. To alleviate the difficulty of training RNNs due to vanishing and exploding gradients (Bengio et al., 1994), we use Gated Recurrent Units (GRUs) (Cho et al., 2014), which have been demonstrated to have good empirical performance (Chung et al., 2014; Jozefowicz et al., 2015). The output of the GRU is fed to a fully connected layer followed by a softmax function, which forms the distribution over actions.
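A minimal PyTorch sketch of this policy may help fix ideas; the embedding dimensionality and interface are assumptions, and only the GRU-plus-softmax structure is taken from the text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class RL2Policy(nn.Module):
    """GRU policy over embedded (s, a, r, d) inputs; sizes are assumptions."""
    def __init__(self, obs_dim, num_actions, hidden_dim=256):
        super().__init__()
        self.num_actions = num_actions
        # phi(s, a, r, d): embed the state with the one-hot action, raw r and d.
        self.embed = nn.Linear(obs_dim + num_actions + 2, hidden_dim)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.logits = nn.Linear(hidden_dim, num_actions)

    def step(self, s, a, r, d, h):
        # s: (B, obs_dim); a: (B,) long; r, d: (B, 1); h: (B, hidden_dim)
        a_onehot = F.one_hot(a, self.num_actions).float()
        x = F.relu(self.embed(torch.cat([s, a_onehot, r, d], dim=1)))
        h = self.gru(x, h)
        return Categorical(logits=self.logits(h)), h  # softmax action distribution
```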
We have also experimented with alternative architectures which explicitly reset part of the hidden state each episode of the sampled MDP, but we did not find any improvement over the simple architecture described above.
2.4 POLICY OPTIMIZATION
After formulating the task as a reinforcement learning problem, we can readily use standard off-the-shelf RL algorithms to optimize the policy. We use a first-order implementation of Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), because of its excellent empirical performance, and because it does not require excessive hyperparameter tuning. For more details, we refer the reader to the original paper. To reduce variance in the stochastic gradient estimation, we use a baseline which is also represented as an RNN using GRUs as building blocks. We optionally apply Generalized Advantage Estimation (GAE) (Schulman et al., 2016) to further reduce the variance.
# 3 EVALUATION
We designed experiments to answer the following questions:
• Can RL$^2$ learn algorithms that achieve good performance on MDP classes with special structure, relative to existing algorithms tailored to this structure that have been proposed in the literature?

• Can RL$^2$ scale to high-dimensional tasks?

For the first question, we evaluate RL$^2$ on two sets of tasks, multi-armed bandits (MAB) and tabular MDPs. These problems have been studied extensively in the reinforcement learning literature, and this body of work includes algorithms with guarantees of asymptotic optimality. We demonstrate that our approach achieves comparable performance to these theoretically justified algorithms.

¹To make sure that the inputs have a consistent dimension, we use placeholder values for the initial input to the policy.
For the second question, we evaluate RL$^2$ on a vision-based navigation task. Our experiments show that the learned policy makes effective use of the learned visual information and also short-term information acquired from previous episodes.
3.1 MULTI-ARMED BANDITS
Multi-armed bandit problems are a subset of MDPs where the agent's environment is stateless. Specifically, there are $k$ arms (actions), and at every time step, the agent pulls one of the arms, say $i$, and receives a reward drawn from an unknown distribution: our experiments take each arm to be a Bernoulli distribution with parameter $p_i$. The goal is to maximize the total reward obtained over a fixed number of time steps. The key challenge is balancing exploration and exploitation: "exploring" each arm enough times to estimate its distribution ($p_i$), but eventually switching over to "exploitation" of the best arm. Despite the simplicity of multi-armed bandit problems, their study has led to a rich theory and a collection of algorithms with optimality guarantees.
Using RL$^2$, we can train an RNN policy to solve bandit problems by training it on a given distribution $\rho_{\mathcal{M}}$. If the learning is successful, the resulting policy should be able to perform competitively with the theoretically optimal algorithms. We randomly generated bandit problems by sampling each parameter $p_i$ from the uniform distribution on $[0, 1]$. After training the RNN policy with RL$^2$, we compared it against the following strategies (a code sketch of several of these baselines follows the list):
• Random: this is a baseline strategy, where the agent pulls a random arm each time.

• Gittins index (Gittins, 1979): this method gives the Bayes optimal solution in the discounted infinite-horizon case, by computing an index separately for each arm, and taking the arm with the largest index. While this work shows it is sufficient to independently compute an index for each arm (hence avoiding combinatorial explosion with the number of arms), it doesn't show how to tractably compute these individual indices exactly. We follow the practical approximations described in Gittins et al. (2011), Chakravorty & Mahajan (2013), and Whittle (1982), and choose the best-performing approximation for each setup.

• UCB1 (Auer, 2002): this method estimates an upper-confidence bound, and pulls the arm with the largest value of $\text{ucb}_i(t) = \hat{\mu}_i(t-1) + c \sqrt{\frac{2 \log t}{T_i(t-1)}}$, where $\hat{\mu}_i(t-1)$ is the estimated mean parameter for the $i$th arm, $T_i(t-1)$ is the number of times the $i$th arm has been pulled, and $c$ is a tunable hyperparameter (Audibert & Munos, 2011). We initialize the statistics with exactly one success and one failure, which corresponds to a Beta(1, 1) prior.

• Thompson sampling (TS) (Thompson, 1933): this is a simple method which, at each time step, samples a list of arm means from the posterior distribution, and chooses the best arm according to this sample. It has been demonstrated to compare favorably to UCB1 empirically (Chapelle & Li, 2011). We also experiment with an optimistic variant (OTS) (May et al., 2012), which samples $N$ times from the posterior, and takes the one with the highest probability.

• ε-Greedy: in this strategy, the agent chooses the arm with the best empirical mean with probability $1 - \epsilon$, and chooses a random arm with probability $\epsilon$. We use the same initialization as UCB1.

• Greedy: this is a special case of ε-Greedy with $\epsilon = 0$.
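For concreteness, the following sketch implements the UCB1, ε-Greedy, and Thompson sampling baselines on a Bernoulli bandit, with the Beta(1, 1) initialization mentioned above (a sketch, not the reference implementation used in the experiments):

```python
import numpy as np

def run_bandit(strategy, p, T, c=1.0, eps=0.1, seed=0):
    """Play a Bernoulli bandit with arm means p for T steps; returns total reward."""
    rng = np.random.default_rng(seed)
    k = len(p)
    succ, fail = np.ones(k), np.ones(k)  # one success/failure each: Beta(1, 1) prior
    total = 0.0
    for t in range(1, T + 1):
        pulls = succ + fail
        mu = succ / pulls                # empirical means
        if strategy == "ucb1":
            arm = int(np.argmax(mu + c * np.sqrt(2 * np.log(t) / pulls)))
        elif strategy == "eps-greedy":
            arm = int(rng.integers(k)) if rng.random() < eps else int(np.argmax(mu))
        else:  # thompson sampling
            arm = int(np.argmax(rng.beta(succ, fail)))
        reward = float(rng.random() < p[arm])
        succ[arm] += reward
        fail[arm] += 1.0 - reward
        total += reward
    return total
```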
The Bayesian methods, Gittins index and Thompson sampling, take advantage of the distribution $\rho_{\mathcal{M}}$, and we provide these methods with the true distribution. For each method with hyperparameters, we maximize the score with a separate grid search for each of the experimental settings. The hyperparameters used for TRPO are shown in the appendix.

The results are summarized in Table 1. Learning curves for various settings are shown in Figure 2. We observe that our approach achieves performance that is almost as good as the reference methods, which were (human) designed specifically to perform well on multi-armed bandit problems. It is worth noting that the published algorithms are mostly designed to minimize asymptotic regret (rather than finite horizon regret), hence there tends to be a little bit of room to outperform them in the finite horizon settings.
Table 1: MAB Results. Each grid cell records the total reward averaged over 1000 different instances of the bandit problem. We consider $k \in \{5, 10, 50\}$ bandits and $n \in \{10, 100, 500\}$ episodes of interaction. We highlight the best-performing algorithms in each setup according to the computed mean, and we also highlight the other algorithms in that row whose performance is not significantly different from the best one (determined by a one-sided t-test with $p = 0.05$).
Setup         Random   Gittins  TS      OTS     UCB1    ε-Greedy  Greedy  RL$^2$
n=10, k=5     5.0      6.6      5.7     6.5     6.7     6.6       6.6     6.7
n=10, k=10    5.0      6.6      5.5     6.2     6.7     6.6       6.6     6.7
n=10, k=50    5.1      6.5      5.2     5.5     6.6     6.5       6.5     6.8
n=100, k=5    49.9     78.3     74.7    77.9    78.0    75.4      74.8    78.7
n=100, k=10   49.9     82.8     76.7    81.4    82.4    77.4      77.1    83.5
n=100, k=50   49.8     85.2     64.5    67.7    84.3    78.3      78.0    84.9
n=500, k=5    249.8    405.8    402.0   406.7   405.8   388.2     380.6   401.6
n=500, k=10   249.0    437.8    429.5   438.9   437.1   408.0     395.0   432.5
n=500, k=50   249.6    463.7    427.2   437.6   457.6   413.6     402.8   438.9

[Figure 2 panels: normalized total reward learning curves for (a) n = 10, (b) n = 100, (c) n = 500.]
Figure 2: RL$^2$ learning curves for multi-armed bandits. Performance is normalized such that Gittins index scores 1, and random policy scores 0.
We observe that there is a noticeable gap between Gittins index and RL$^2$ in the most challenging scenario, with 50 arms and 500 episodes. This raises the question whether better architectures or better (slow) RL algorithms should be explored. To determine the bottleneck, we trained the same policy architecture using supervised learning, using the trajectories generated by the Gittins index approach as training data. We found that the learned policy, when executed in test domains, achieved the same level of performance as the Gittins index approach, suggesting that there is room for improvement by using better RL algorithms.
3.2 TABULAR MDPs
The bandit problem provides a natural and simple setting to investigate whether the policy learns to trade off between exploration and exploitation. However, the problem itself involves no sequential decision making, and does not fully characterize the challenges in solving MDPs. Hence, we perform further experiments using randomly generated tabular MDPs, where there is a finite number of possible states and actions, small enough that the transition probability distribution can be explicitly given as a table. We compare our approach with the following methods:

• Random: the agent chooses an action uniformly at random for each time step;

• PSRL (Strens, 2000; Osband et al., 2013): this is a direct generalization of Thompson sampling to MDPs, where at the beginning of each episode, we sample an MDP from the posterior distribution, and take actions according to the optimal policy for the entire episode. Similarly, we include an optimistic variant (OPSRL), which has also been explored in Osband & Van Roy (2016).

• BEB (Kolter & Ng, 2009): this is a model-based optimistic algorithm that adds an exploration bonus to (thus far) infrequently visited states and actions.

• UCRL2 (Jaksch et al., 2010): this algorithm computes, at each iteration, the optimal policy against an optimistic MDP under the current belief, using an extended value iteration procedure.

• ε-Greedy: this algorithm takes actions optimal against the MAP estimate according to the current posterior, which is updated once per episode.

• Greedy: a special case of ε-Greedy with $\epsilon = 0$.

Table 2: Random MDP Results

Setup    Random  PSRL    OPSRL   UCRL2   BEB     ε-Greedy  Greedy  RL$^2$
n=10     100.1   138.1   144.1   146.6   150.2   132.8     134.8   156.2
n=25     250.2   408.8   425.2   424.1   427.8   377.3     368.8   445.7
n=50     499.7   904.4   930.7   918.9   917.8   823.3     769.3   936.1
n=75     749.9   1417.1  1449.2  1427.6  1422.6  1293.9    1172.9  1428.8
n=100    999.4   1939.5  1973.9  1942.1  1935.1  1778.2    1578.5  1913.7
The distribution over MDPs is constructed with $|\mathcal{S}| = 10$, $|\mathcal{A}| = 5$. The rewards follow a Gaussian distribution with unit variance, and the mean parameters are sampled independently from $\text{Normal}(1, 1)$. The transitions are sampled from a flat Dirichlet distribution. This construction matches the commonly used prior in Bayesian RL methods. We set the horizon for each episode to be $T = 10$, and an episode always starts on the first state.
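Sampling from this distribution over MDPs is simple to reproduce; a sketch (function and variable names are ours):

```python
import numpy as np

def sample_tabular_mdp(num_states=10, num_actions=5, seed=0):
    """Draw one MDP from the distribution above: N(1, 1) reward means and
    transition rows from a flat Dirichlet (a sketch, not the authors' code)."""
    rng = np.random.default_rng(seed)
    reward_means = rng.normal(1.0, 1.0, size=(num_states, num_actions))
    transitions = rng.dirichlet(np.ones(num_states), size=(num_states, num_actions))
    return reward_means, transitions  # observed rewards are N(mean, 1)

R, P = sample_tabular_mdp()
assert np.allclose(P.sum(axis=-1), 1.0)  # each (s, a) row is a valid distribution
```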
Figure 3: RL$^2$ learning curves for tabular MDPs. Performance is normalized such that OPSRL scores 1, and random policy scores 0.
The results are summarized in Table 2, and the learning curves are shown in Figure 3. We follow the same evaluation procedure as in the bandit case. We experiment with $n \in \{10, 25, 50, 75, 100\}$. For fewer episodes, our approach surprisingly outperforms existing methods by a large margin. The advantage is reversed as $n$ increases, suggesting that the reinforcement learning problem in the outer loop becomes more challenging to solve. We think that the advantage for small $n$ comes from the need for more aggressive exploitation: there are 140 degrees of freedom to estimate in order to characterize the MDP, and by the 10th episode there are not enough samples to form a good estimate of the entire dynamics. By directly optimizing the RNN in this setting, our approach should be able to cope with this shortage of samples, and decides to exploit sooner compared to the reference algorithms.
3.3 VISUAL NAVIGATION
The previous two tasks both only involve very low-dimensional state spaces. To evaluate the feasibility of scaling up RL$^2$, we further experiment with a challenging vision-based task, where the
agent is asked to navigate a randomly generated maze to find a randomly placed target². The agent receives a +1 reward when it reaches the target, -0.001 when it hits the wall, and -0.04 per time step to encourage it to reach targets faster. It can interact with the maze for multiple episodes, during which the maze structure and target position are held fixed. The optimal strategy is to explore the maze efficiently during the first episode, and after locating the target, act optimally against the current maze and target based on the collected information. An illustration of the task is given in Figure 4.

[Figure 4 panels: (a) sample observation; (b) layout of the 5 × 5 maze in (a); (c) layout of a 9 × 9 maze.]

Figure 4: Visual navigation. The target block is shown in red, and occupies an entire grid in the maze layout.

Visual navigation alone is a challenging task for reinforcement learning. The agent only receives very sparse rewards during training, and does not have the primitives for efficient exploration at the beginning of training. It also needs to make efficient use of memory to decide how it should explore the space, without forgetting about where it has already explored. Previously, Oh et al. (2016) have studied similar vision-based navigation tasks in Minecraft. However, they use higher-level actions for efficient navigation. Similar high-level actions in our task would each require around 5 low-level actions combined in the right way. In contrast, our RL$^2$ agent needs to learn these higher-level actions from scratch.

We use a simple training setup, where we use small mazes of size 5 × 5, with 2 episodes of interaction, each with horizon up to 250. Here the size of the maze is measured by the number of grid cells along each wall in a discrete representation of the maze. During each trial, we sample 1 out of 1000 randomly generated configurations of map layout and target positions. During testing, we evaluate on 1000 separately generated configurations. In addition, we also study its extrapolation behavior along two axes, by (1) testing on large mazes of size 9 × 9 (see Figure 4c) and (2) running the agent for up to 5 episodes in both small and large mazes. For the large maze, we also increase the horizon per episode by 4× due to the increased size of the maze.

Table 3: Results for visual navigation. These metrics are computed using the best run among all runs shown in Figure 5. In 3c, we measure the proportion of mazes where the trajectory length in the second episode does not exceed the trajectory length in the first episode.

(a) Average length of successful trajectories (Large mazes): 180.1 ± 6.0 in episode 1, 151.8 ± 5.9 in episode 2, and 169.3 ± 6.5 in episode 5.

(b) %Success

Episode  Small   Large
1        99.3%   97.1%
2        99.6%   96.7%
3        99.7%   95.8%
4        99.4%   95.6%
5        99.6%   96.1%

(c) %Improved

Small   Large
91.7%   71.4%

²Videos for the task are available at https://goo.gl/rDDBpb.
Under review as a conference paper at ICLR 2017
More recently, Fu et al. (2015) propose a model-based approach on top of iLQG with unknown dynamics (Levine & Abbeel, 2014), which uses samples collected from previous tasks to build a neural network prior for the dynamics, and can perform one-shot learning on new, but related tasks thanks to reduced sample complexity. There has been a growing interest in using deep neural networks for multi-task learning and transfer learning (Parisotto et al., 2015; Rusu et al., 2015; 2016a; Devin et al., 2016; Rusu et al., 2016b).

In the broader context of machine learning, there has been a lot of interest in one-shot learning for object classification (Vilalta & Drissi, 2002; Fei-Fei et al., 2006; Larochelle et al., 2008; Lake et al., 2011; Koch, 2015). Our work draws inspiration from a particular line of work (Younger et al., 2001; Santoro et al., 2016; Vinyals et al., 2016), which formulates meta-learning as an optimization problem, and can thus be optimized end-to-end via gradient descent. While those works apply to the supervised learning setting, ours applies in the more general reinforcement learning setting. Although the reinforcement learning setting is more challenging, the resulting behavior is far richer: our agent must not only learn to exploit existing information, but also learn to explore, a problem that is usually not a factor in supervised learning. Another line of work (Hochreiter et al., 2001; Younger et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2016) studies meta-learning over the optimization process. There, the meta-learner makes explicit updates to a parametrized model. In comparison, we do not use a directly parametrized policy; instead, the recurrent neural network agent acts as the meta-learner and the resulting policy simultaneously.

Our formulation essentially constructs a partially observable MDP (POMDP) which is solved in the outer loop, where the underlying MDP is unobserved by the agent. This reduction of an unknown MDP to a POMDP can be traced back to dual control theory (Feldbaum, 1960), where "dual" refers to the fact that one is controlling both the state and the state estimate. Feldbaum pointed out that the solution can in principle be computed with dynamic programming, but doing so is usually impractical. POMDPs with such structure have also been studied under the name "mixed observability MDPs" (Ong et al., 2010). However, the method proposed there suffers from the usual challenges of solving POMDPs in high dimensions.
# 5 DISCUSSION
This paper suggests a different approach for designing better reinforcement learning algorithms: instead of acting as the designers ourselves, learn the algorithm end-to-end using standard reinforcement learning techniques. That is, the "fast" RL algorithm is a computation whose state is stored in the RNN activations, and the RNN's weights are learned by a general-purpose "slow" reinforcement learning algorithm. Our method, RL$^2$, has demonstrated competence comparable with theoretically optimal algorithms in small-scale settings. We have further shown its potential to scale to high-dimensional tasks.

In the experiments, we have identified opportunities to improve upon RL$^2$: the outer-loop reinforcement learning algorithm was shown to be an immediate bottleneck, and we believe that for settings with extremely long horizons, better architectures may also be required for the policy. Although we have used generic methods and architectures for the outer-loop algorithm and the policy, doing this also ignores the underlying episodic structure. We expect algorithms and policy architectures that exploit the problem structure to significantly boost the performance.
ACKNOWLEDGMENTS
We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers.
# REFERENCES
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.

Jean-Yves Audibert and Rémi Munos. Introduction to bandits: Algorithms and theory. ICML Tutorial on bandits, 2011.

Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397-422, 2002.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.

Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.
Jhelum Chakravorty and Aditya Mahajan. Multi-armed bandits, gittins index, and its calculation. Methods and Applications of Statistics in Clinical Trials: Planning, Analysis, and Inferential Methods, 2:416-435, 2013.
Olivier Chapelle and Lihong Li. An empirical evaluation of thompson sampling. In Advances in neural information processing systems, pp. 2249-2257, 2011.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv: 1412.3555, 2014.
Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pp. 465-472, 2011.
Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv: 1609.07088, 2016.
Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611, 2006.
AA Feldbaum. Dual control theory. i. Avtomatika i Telemekhanika, 21(9):1240-1249, 1960.
Justin Fu, Sergey Levine, and Pieter Abbeel. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. arXiv preprint arXiv: 1509.06841, 2015.
Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, Aviv Tamar, et al. Bayesian reinforcement learning: a survey. World Scientific, 2015.
John Gittins, Kevin Glazebrook, and Richard Weber. Multi-armed bandit allocation indices. John Wiley & Sons, 2011.
John C Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society. Series B (Methodological), pp. 148-177, 1979.
Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L Lewis, and Xiaoshi Wang. Deep learning for real-time atari game play using offline monte-carlo tree search planning. In Advances in neural information processing systems, pp. 3338-3346, 2014.
Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2944-2952, 2015.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87-94. Springer, 2001.
Shin Ishii, Wako Yoshida, and Junichiro Yoshimoto. Control of exploitation-exploration meta-parameter in reinforcement learning. Neural Networks, 15(4):665-687, 2002.

Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.

Rafał Józefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 2342-2350, 2015. URL http://jmlr.org/proceedings/papers/v37/jozefowicz15.html.

Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
Gregory Koch. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.
J Zico Kolter and Andrew Y Ng. Near-bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 513-520. ACM, 2009.
Brenden M Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, volume 172, pp. 2, 2011.
Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. Zero-data learning of new tasks. In AAAI, volume 1, pp. 3, 2008.
Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pp. 1071-1079, 2014.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1-40, 2016.
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv: 1606.01885, 2016.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv: 1509.02971, 2015.
Benedict C May, Nathan Korda, Anthony Lee, and David S Leslie. Optimistic bayesian sampling in contextual-bandit problems. Journal of Machine Learning Research, 13(Jun):2069-2106, 2012.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in minecraft. arXiv preprint arXiv: 1605.09128, 2016.
Sylvie CW Ong, Shao Wei Png, David Hsu, and Wee Sun Lee. Planning under uncertainty for robotic tasks with mixed observability. The International Journal of Robotics Research, 29(8): 1053-1068, 2010.
Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning. arXiv preprint arXiv:1607.00215, 2016.

Ian Osband, Dan Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pp. 3003-3011, 2013.

Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015.
Theodore J Perkins, Doina Precup, et al. Using options for knowledge transfer in reinforcement learning. University of Massachusetts, Amherst, MA, USA, Tech. Rep, 1999.
Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv: 1606.04671, 2016a.
Andrei A Rusu, Matej Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286, 2016b.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One- shot learning with memory-augmented neural networks. arXiv preprint arXiv: 1605.06065, 2016.
John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In International Conference on Learning Representations (ICLR 2016), 2016.
Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5-9, 2003.
Satinder Pal Singh. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8(3-4):323-339, 1992.
Malcolm Strens. A bayesian framework for reinforcement learning. In ICML, pp. 943-950, 2000.

Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633-1685, 2009.
William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77-95, 2002.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Match- ing networks for one shot learning. arXiv preprint arXiv: 1606.04080, 2016.
Niklas Wahlström, Thomas B Schön, and Marc Peter Deisenroth. From pixels to torques: Policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251, 2015.
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pp. 2746-2754, 2015.
Peter Whittle. Optimization over time. John Wiley & Sons, Inc., 1982.
Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical bayesian approach. In Proceedings of the 24th international conference on Machine learning, pp. 1015-1022. ACM, 2007.
A Steven Younger, Sepp Hochreiter, and Peter R Conwell. Meta-learning with backpropagation. In Neural Networks, 2001. Proceedings. IJCNN'01. International Joint Conference on, volume 3. IEEE, 2001.
# APPENDIX
# A DETAILED EXPERIMENT SETUP
Common to all experiments: as mentioned in Section 2.2, we use placeholder values when necessary. For example, at $t = 0$ there is no previous action, reward, or termination flag. Since all of our experiments use discrete actions, we use the embedding of the action 0 as a placeholder for actions, and 0 for both the rewards and termination flags. To form the input to the GRU, we use the values for the rewards and termination flags as-is, and embed the states and actions as described separately below for each experiment. These values are then concatenated together to form the joint embedding.
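A sketch of this input construction, following the placeholder convention above (names and tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def joint_input(s_embed, a, r, d, num_actions):
    """GRU input at one timestep; uses action 0 and zeros as placeholders at t=0."""
    batch = s_embed.shape[0]
    if a is None:  # first step of a trial: placeholder values
        a = torch.zeros(batch, dtype=torch.long)
        r, d = 0.0, 0.0
    a_onehot = F.one_hot(a, num_actions).float()
    r_col = torch.full((batch, 1), float(r))
    d_col = torch.full((batch, 1), float(d))
    return torch.cat([s_embed, a_onehot, r_col, d_col], dim=1)
```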
For the neural network architecture, we use rectified linear units throughout the experiments as the hidden activation, and we apply weight normalization without data-dependent initialization (Salimans & Kingma, 2016) to all weight matrices. The hidden-to-hidden weight matrix uses an orthogonal initialization (Saxe et al., 2013), and all other weight matrices use Xavier initialization (Glorot & Bengio, 2010). We initialize all bias vectors to 0. Unless otherwise mentioned, the policy and the baseline use separate neural networks with the same architecture until the final layer, where the number of outputs differs.
All experiments are implemented using TensorFlow (Abadi et al., 2016) and rllab (Duan et al., 2016). We use the implementations of classic algorithms provided by the TabulaRL package (Osband, 2016).
A.1 MULTI-ARMED BANDITS
The parameters for TRPO are shown in Table 1. Since the environment is stateless, we use a constant embedding 0 as a placeholder in place of the states, and a one-hot embedding for the actions.
Table 1: Hyperparameters for TRPO: multi-armed bandits
Discount        0.99
GAE λ           0.3
Policy Iters    Up to 1000
# GRU Units     256
Mean KL         0.01
Batch size      250000
A.2 TABULAR MDPS
The parameters for TRPO are shown in Table 2. We use a one-hot embedding for the states and actions separately, which are then concatenated together.
Table 2: Hyperparameters for TRPO: tabular MDPs
Discount        0.99
GAE λ           0.3
Policy Iters    Up to 10000
# GRU Units     256
Mean KL         0.01
Batch size      250000
A.3 VISUAL NAVIGATION
The parameters for TRPO are shown in Table 3. For this task, we use a neural network to form the joint embedding. We rescale the images to have width 40 and height 30 with RGB channels preserved, and we recenter the RGB values to lie within range [â1, 1]. Then, this preprocessed
image is passed through 2 convolution layers, each with 16 filters of size 5 x 5 and stride 2. The action is first embedded into a 256-dimensional vector where the embedding is learned, and then concatenated with the flattened output of the final convolution layer. The joint vector is then fed to a fully connected layer with 256 hidden units.
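A minimal sketch of the image preprocessing described above (a NumPy stand-in for the TensorFlow pipeline; the function name is ours):

```python
import numpy as np

def preprocess(image):
    # image: uint8 RGB array already resized to height 30, width 40.
    # Recenter pixel values from [0, 255] to the range [-1, 1].
    return image.astype(np.float32) / 127.5 - 1.0
```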
Unlike previous experiments, we let the policy and the baseline share the same neural network. We found this to improve the stability of training baselines and also the end performance of the policy, possibly due to regularization effects and better learned features imposed by weight sharing. Similar weight-sharing techniques have also been explored in (Mnih et al., 2016).
Table 3: Hyperparameters for TRPO: visual navigation
Discount        0.99
GAE λ           0.99
Policy Iters    Up to 5000
# GRU Units     256
Mean KL         0.01
Batch size      50000
# REFERENCES
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249-256, 2010.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Ian Osband. TabulaRL. https://github.com/iosband/TabulaRL, 2016.

Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
Unrolled Generative Adversarial Networks. Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein. arXiv:1611.02163 [cs.LG, stat.ML], submitted 7 Nov 2016, last revised 12 May 2017. http://arxiv.org/pdf/1611.02163
Published as a conference paper at ICLR 2017
# UNROLLED GENERATIVE ADVERSARIAL NETWORKS
Luke Metzâ Google Brain lmetz@google.com
Ben Pooleâ Stanford University poole@cs.stanford.edu
David Pfau Google DeepMind pfau@google.com
Jascha Sohl-Dickstein Google Brain jaschasd@google.com
# ABSTRACT
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
# 1 INTRODUCTION
The use of deep neural networks as generative models for complex data has made great advances in recent years. This success has been achieved through a surprising diversity of training losses and model architectures, including denoising autoencoders (Vincent et al., 2010), variational autoencoders (Kingma & Welling, 2013; Rezende et al., 2014; Gregor et al., 2015; Kulkarni et al., 2015; Burda et al., 2015; Kingma et al., 2016), generative stochastic networks (Alain et al., 2015), diffusion probabilistic models (Sohl-Dickstein et al., 2015), autoregressive models (Theis & Bethge, 2015; van den Oord et al., 2016a;b), real non-volume preserving transformations (Dinh et al., 2014; 2016), Helmholtz machines (Dayan et al., 1995; Bornschein et al., 2015), and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014).
1.1 GENERATIVE ADVERSARIAL NETWORKS
While most deep generative models are trained by maximizing log likelihood or a lower bound on log likelihood, GANs take a radically different approach that does not require inference or explicit calculation of the data likelihood. Instead, two models are used to solve a minimax game: a generator which samples data, and a discriminator which classifies the data as real or generated. In theory these models are capable of modeling an arbitrarily complex probability distribution. When using the optimal discriminator for a given class of generators, the original GAN proposed by Goodfellow et al. minimizes the Jensen-Shannon divergence between the data distribution and the generator, and extensions generalize this to a wider class of divergences (Nowozin et al., 2016; Sonderby et al., 2016; Poole et al., 2016).

The ability to train extremely flexible generating functions, without explicitly computing likelihoods or performing inference, and while targeting more mode-seeking divergences, has made GANs extremely successful in image generation (Odena et al., 2016; Salimans et al., 2016; Radford et al., 2015), and image super resolution (Ledig et al., 2016). The flexibility of the GAN framework has also enabled a number of successful extensions of the technique, for instance for structured prediction (Reed et al., 2016a;b; Odena et al., 2016), training energy based models (Zhao et al., 2016), and combining the GAN loss with a mutual information loss (Chen et al., 2016).
âWork done as a member of the Google Brain Residency program (g.co/brainresidency) â Work completed as part of a Google Brain internship
In practice, however, GANs suffer from many issues, particularly during training. One common failure mode involves the generator collapsing to produce only a single sample or a small family of very similar samples. Another involves the generator and discriminator oscillating during training, rather than converging to a fixed point. In addition, if one agent becomes much more powerful than the other, the learning signal to the other agent becomes useless, and the system does not learn. To train GANs many tricks must be employed, such as careful selection of architectures (Radford et al., 2015), minibatch discrimination (Salimans et al., 2016), and noise injection (Salimans et al., 2016; Sonderby et al., 2016). Even with these tricks the set of hyperparameters for which training is successful is generally very small in practice.

Once converged, the generative models produced by the GAN training procedure normally do not cover the whole distribution (Dumoulin et al., 2016; Che et al., 2016), even when targeting a mode-covering divergence such as KL. Additionally, because it is intractable to compute the GAN training loss, and because approximate measures of performance such as Parzen window estimates suffer from major flaws (Theis et al., 2016), evaluation of GAN performance is challenging. Currently, human judgement of sample quality is one of the leading metrics for evaluating GANs. In practice this metric does not take into account mode dropping if the number of modes is greater than the number of samples one is visualizing. In fact, the mode dropping problem generally helps visual sample quality as the model can choose to focus on only the most common modes. These common modes correspond, by definition, to more typical samples. Additionally, the generative model is able to allocate more expressive power to the modes it does cover than it would if it attempted to cover all modes.
1.2 DIFFERENTIATING THROUGH OPTIMIZATION
Many optimization schemes, including SGD, RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014), consist of a sequence of differentiable updates to parameters. Gradients can be backpropagated through unrolled optimization updates in a similar fashion to backpropagation through a recurrent neural network. The parameters output by the optimizer can thus be included, in a differentiable way, in another objective (Maclaurin et al., 2015). This idea was first suggested for minimax problems in (Pearlmutter & Siskind, 2008), while (Zhang & Lesser, 2010) provided a theoretical analysis and experimental results on differentiating through a single step of gradient ascent for simple matrix games. Differentiating through unrolled optimization was first scaled to deep networks in (Maclaurin et al., 2015), where it was used for hyperparameter optimization. More recently, (Belanger & McCallum, 2015; Han et al., 2016; Andrychowicz et al., 2016) backpropagate through optimization procedures in contexts unrelated to GANs or minimax games.
In this work we address the challenges of unstable optimization and mode collapse in GANs by unrolling optimization of the discriminator objective during training.
2 METHOD
2.1 GENERATIVE ADVERSARIAL NETWORKS
The GAN learning problem is to find the optimal generator parameters θ∗_G for a generator function G(z; θG) in a minimax objective,

\theta_G^* = \operatorname{argmin}_{\theta_G} \max_{\theta_D} f(\theta_G, \theta_D) \tag{1}

= \operatorname{argmin}_{\theta_G} f\left(\theta_G, \theta_D^*(\theta_G)\right) \tag{2}

\theta_D^*(\theta_G) = \operatorname{argmax}_{\theta_D} f(\theta_G, \theta_D), \tag{3}
where f is commonly chosen to be
f(\theta_G, \theta_D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x; \theta_D)\right] + \mathbb{E}_{z \sim \mathcal{N}(0, I)}\left[\log\left(1 - D(G(z; \theta_G); \theta_D)\right)\right]. \tag{4}

Here x ∈ X is the data variable, z ∈ Z is the latent variable, p_data is the data distribution, the discriminator D(·; θD) : X → [0, 1] outputs the estimated probability that a sample x comes from the data distribution, θD and θG are the discriminator and generator parameters, and the generator function G(·; θG) : Z → X transforms a sample in the latent space into a sample in the data space.
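For concreteness, a minibatch Monte Carlo estimate of Eq. 4 might be computed as follows (a sketch; the names are ours, and the discriminator outputs are assumed to already lie in (0, 1)):

```python
import numpy as np

def gan_objective(d_real, d_fake):
    # Estimate of Eq. 4: E[log D(x)] + E[log(1 - D(G(z)))],
    # averaged over minibatches of real and generated samples.
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# Example: a fairly confident D on real data and a partly fooled D on fakes.
print(gan_objective(np.array([0.9, 0.8]), np.array([0.4, 0.5])))
```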
For the minimax loss in Eq. 4, the optimal discriminator Dâ (x) is a known smooth function of the generator probability pG (x) (Goodfellow et al., 2014),
Dâ (x) = pdata (x) pdata (x) + pG (x) . (5)
When the generator loss in Eq. 2 is rewritten directly in terms of pG(x) and Eq. 5 rather than θG and θ∗_D(θG), then it is similarly a smooth function of pG(x). These smoothness guarantees are typically lost when D(x; θD) and G(z; θG) are drawn from parametric families. They nonetheless suggest that the true generator objective in Eq. 2 will often be well behaved, and is a desirable target for direct optimization.

Explicitly solving for the optimal discriminator parameters θ∗_D(θG) for every update step of the generator G is computationally infeasible for discriminators based on neural networks. Therefore this minimax optimization problem is typically solved by alternating gradient descent on θG and ascent on θD. The optimal solution θ∗ = {θ∗_G, θ∗_D} is a fixed point of these iterative learning dynamics. Additionally, if f(θG, θD) is convex in θG and concave in θD, then alternating gradient descent (ascent) trust region updates are guaranteed to converge to the fixed point, under certain additional weak assumptions (Juditsky et al., 2011). However in practice f(θG, θD) is typically very far from convex in θG and concave in θD, and updates are not constrained in an appropriate way. As a result GAN training suffers from mode collapse, undamped oscillations, and other problems detailed in Section 1.1. In order to address these difficulties, we will introduce a surrogate objective function fK(θG, θD) for training the generator which more closely resembles the true generator objective f(θG, θ∗_D(θG)).
2.2 UNROLLING GANS
A local optimum of the discriminator parameters θ∗_D can be expressed as the fixed point of an iterative optimization procedure,

\theta_D^0 = \theta_D \tag{6}

\theta_D^{k+1} = \theta_D^k + \eta^k \frac{df(\theta_G, \theta_D^k)}{d\theta_D^k} \tag{7}

\theta_D^*(\theta_G) = \lim_{k \to \infty} \theta_D^k \tag{8}
where η^k is the learning rate schedule. For clarity, we have expressed Eq. 7 as a full batch steepest gradient ascent equation. More sophisticated optimizers can be similarly unrolled. In our experiments we unroll Adam (Kingma & Ba, 2014).

By unrolling for K steps, we create a surrogate objective for the update of the generator,

f_K(\theta_G, \theta_D) = f\left(\theta_G, \theta_D^K(\theta_G, \theta_D)\right). \tag{9}
When K = 0 this objective corresponds exactly to the standard GAN objective, while as K → ∞ it corresponds to the true generator objective function f(θG, θ∗_D(θG)). By adjusting the number of unrolling steps K, we are thus able to interpolate between standard GAN training dynamics with their associated pathologies, and more costly gradient descent on the true generator loss.
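To make the unrolling concrete, the sketch below applies it to a scalar bilinear game f(g, d) = g·d, a toy example of ours rather than one from this paper. It unrolls K ascent steps of Eq. 7 and differentiates the surrogate by the chain-rule decomposition spelled out later in Eq. 12; on this game plain alternating gradient steps circle forever, while the unrolled update damps toward the equilibrium:

```python
import numpy as np

eta, K = 0.1, 5  # inner learning rate and number of unrolling steps

def unrolled_generator_grad(g, d):
    # Unroll K ascent steps of Eq. 7 on f(g, d) = g * d, where df/dd = g.
    dd_dg = 0.0
    for _ in range(K):
        d = d + eta * g          # d_{k+1} = d_k + eta * df/dd_k
        dd_dg = dd_dg + eta      # accumulate d(d_K)/dg through the unroll
    # Chain rule through the unrolled optimization:
    # df_K/dg = df/dg + (df/dd_K) * (dd_K/dg) = d_K + g * dd_K/dg.
    return d + g * dd_dg

g, d = 1.0, 1.0
for _ in range(2000):
    g = g - eta * unrolled_generator_grad(g, d)   # generator descent step
    d = d + eta * g                               # one discriminator ascent step
print(g, d)  # the iterates spiral in toward the (0, 0) equilibrium
```

Setting dd_dg to zero recovers standard alternating training, under which these dynamics orbit at a constant radius instead of converging.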
2.3 PARAMETER UPDATES
The generator and discriminator parameter updates using this surrogate loss are
\theta_G \leftarrow \theta_G - \eta \frac{df_K(\theta_G, \theta_D)}{d\theta_G} \tag{10}

\theta_D \leftarrow \theta_D + \eta \frac{df(\theta_G, \theta_D)}{d\theta_D}. \tag{11}
For clarity we use full batch steepest gradient descent (ascent) with stepsize η above, while in experiments we instead use minibatch Adam for both updates. The gradient in Eq. 10 requires backpropagating through the optimization process in Eq. 7. A clear description of differentiation through
Figure 1: An illustration of the computation graph for an unrolled GAN with 3 unrolling steps. The generator update in Equation 10 involves backpropagating the generator gradient (blue arrows) through the unrolled optimization. Each step k in the unrolled optimization uses the gradients of f with respect to θ_D^k, as described in Equation 7 and indicated by the green arrows. The discriminator update in Equation 11 does not depend on the unrolled optimization (red arrow).
gradient descent is given as Algorithm 2 in (Maclaurin et al., 2015), though in practice the use of an automatic differentiation package means this step does not need to be programmed explicitly. A pictorial representation of these updates is provided in Figure 1.
It is important to distinguish this from an approach suggested in (Goodfellow et al., 2014), that several update steps of the discriminator parameters should be run before each single update step for the generator. In that approach, the update steps for both models are still gradient descent (ascent) with respect to fixed values of the other model parameters, rather than the surrogate loss we describe in Eq. 9. Performing K steps of discriminator update between each single step of generator update corresponds to updating the generator parameters θG using only the first term in Eq. 12 below.
2.4 THE MISSING GRADIENT TERM
To better understand the behavior of the surrogate loss fK (θG, θD), we examine its gradient with respect to the generator parameters θG,
\frac{df_K(\theta_G, \theta_D)}{d\theta_G} = \frac{\partial f(\theta_G, \theta_D^K(\theta_G, \theta_D))}{\partial \theta_G} + \frac{\partial f(\theta_G, \theta_D^K(\theta_G, \theta_D))}{\partial \theta_D^K(\theta_G, \theta_D)} \, \frac{d\theta_D^K(\theta_G, \theta_D)}{d\theta_G} \tag{12}
Standard GAN training corresponds exactly to updating the generator parameters using only the first term in this gradient, with θ_D^K(θG, θD) being the parameters resulting from the discriminator update step. An optimal generator for any fixed discriminator is a delta function at the x to which the discriminator assigns highest data probability. Therefore, in standard GAN training, each generator update step is a partial collapse towards a delta function.

The second term captures how the discriminator would react to a change in the generator. It reduces the tendency of the generator to engage in mode collapse. For instance, the second term reflects that as the generator collapses towards a delta function, the discriminator reacts and assigns lower probability to that state, increasing the generator loss. It therefore discourages the generator from collapsing, and may improve stability.

As K → ∞, θ_D^K approaches a local optimum of f, where ∂f/∂θ_D^K = 0, and therefore the second term in Eq. 12 goes to 0 (Danskin, 1967). The gradient of the unrolled surrogate loss fK(θG, θD) with respect to θG is thus identical to the gradient of the standard GAN loss f(θG, θD) both when K = 0 and when K → ∞, where we take K → ∞ to imply that in the standard GAN the discriminator is also fully optimized between each generator update. Between these two extremes, fK(θG, θD) captures additional information about the response of the discriminator to changes in the generator.
2.5 CONSEQUENCES OF THE SURROGATE LOSS
GANs can be thought of as a game between the discriminator (D) and the generator (G). The agents take turns taking actions and updating their parameters until a Nash equilibrium is reached. The optimal action for D is to evaluate the probability ratio p_data(x) / (p_G(x) + p_data(x)) for the generator's move x (Eq. 5). The optimal generator action is to move its mass to maximize this ratio.

The initial move for G will be to move as much mass as its parametric family and update step permits to the single point that maximizes the ratio of probability densities. The action D will then take is quite simple. It will track that point, and to the extent allowed by its own parametric family and update step assign low data probability to it, and uniform probability everywhere else. This cycle of G moving and D following will repeat forever or converge depending on the rate of change of the two agents. This is similar to the situation in simple matrix games like rock-paper-scissors and matching pennies, where alternating gradient descent (ascent) with a fixed learning rate is known not to converge (Singh et al., 2000; Bowling & Veloso, 2002).
In the unrolled case, however, this undesirable behavior no longer occurs. Now G's actions take into account how D will respond. In particular, G will try to make steps that D will have a hard time responding to. This extra information helps the generator spread its mass to make the next D step less effective instead of collapsing to a point.
In principle, a surrogate loss function could be used for both D and G. In the case of 1-step unrolled optimization this is known to lead to convergence for games in which gradient descent (ascent) fails (Zhang & Lesser, 2010). However, the motivation for using the surrogate generator loss in Section 2.2, of unrolling the inner of two nested min and max functions, does not apply to using a surrogate discriminator loss. Additionally, it is more common for the discriminator to overpower the generator than vice-versa when training a GAN. Giving more information to G by allowing it to "see into the future" may thus help the two models be more balanced.
# 3 EXPERIMENTS
In this section we demonstrate improved mode coverage and stability by applying this technique to five datasets of increasing complexity. Evaluation of generative models is a notoriously hard problem (Theis et al., 2016). As such the de facto standard in GAN literature has become sample quality as evaluated by a human and/or evaluated by a heuristic (Inception score for example, (Salimans et al., 2016)). While these evaluation metrics do a reasonable job capturing sample quality, they fail to capture sample diversity. In our first 2 experiments diversity is easily evaluated via visual inspection. In our later experiments this is not the case, and we will use a variety of methods to quantify coverage of samples. Our measures are individually strongly suggestive of unrolling reducing mode-collapse and improving stability, but none of them alone are conclusive. We believe that taken together, however, they provide extremely compelling evidence for the advantages of unrolling.

When doing stochastic optimization, we must choose which minibatches to use in the unrolling updates in Eq. 7. We experimented with both a fixed minibatch and re-sampled minibatches for each unrolling step, and found it did not significantly impact the result. We use fixed minibatches for all experiments in this section.

We provide a reference implementation of this technique at github.com/poolio/unrolled_gan.
3.1 MIXTURE OF GAUSSIANS DATASET
To illustrate the impact of discriminator unrolling, we train a simple GAN architecture on a 2D mixture of 8 Gaussians arranged in a circle. For a detailed list of architecture and hyperparameters see Appendix A. Figure 2 shows the dynamics of this model through time. Without unrolling the generator rotates around the valid modes of the data distribution but is never able to spread out mass. When adding in unrolling steps G quickly learns to spread probability mass and the system converges to the data distribution.
In Appendix B we perform further experiments on this toy dataset. We explore how unrolling compares to historical averaging, and compares to using the unrolled discriminator to update the
[Figure 2 panels, left to right: generator heatmaps at Step 0, Step 5k, Step 10k, Step 15k, Step 20k, Step 25k, and the Target distribution]
Figure 2: Unrolling the discriminator stabilizes GAN training on a toy 2D mixture of Gaussians dataset. Columns show a heatmap of the generator distribution after increasing numbers of training steps. The final column shows the data distribution. The top row shows training for a GAN with 10 unrolling steps. Its generator quickly spreads out and converges to the target distribution. The bottom row shows standard GAN training. The generator rotates through the modes of the data distribution. It never converges to a fixed distribution, and only ever assigns significant probability mass to a single data mode at once.
[Figure 3 panels: generator samples at 20k, 50k, and 100k training steps]
Figure 3: Unrolled GAN training increases stability for an RNN generator and convolutional discriminator trained on MNIST. The top row was run with 20 unrolling steps. The bottom row is a standard GAN, with 0 unrolling steps. Images are samples from the generator after the indicated number of training steps.
generator, but without backpropagating through the generator. In both cases we find that the unrolled objective performs better.
3.2 PATHOLOGICAL MODEL WITH MISMATCHED GENERATOR AND DISCRIMINATOR
To evaluate the ability of this approach to improve trainability, we look to a traditionally challenging family of models to train: recurrent neural networks (RNNs). In this experiment we try to generate MNIST samples using an LSTM (Hochreiter & Schmidhuber, 1997). MNIST digits are 28x28 pixel images. At each timestep of the generator LSTM, it outputs one column of this image, so that after 28 timesteps it has output the entire sample. We use a convolutional neural network as the discriminator. See Appendix C for the full model and training details. Unlike in all previously successful GAN models, there is no symmetry between the generator and the discriminator in this task, resulting in a more complex power balance. Results can be seen in Figure 3. Once again, without unrolling the model quickly collapses, and rotates through a sequence of single modes. Instead of rotating spatially, it cycles through proto-digit like blobs. When running with unrolling steps the generator disperses and appears to cover the whole data distribution, as in the 2D example.
                                    Unrolling steps
                                    0                1                5
1/4 size of D compared to G
  Modes generated                   30.6 ± 20.73     65.4 ± 34.75     236.4 ± 63.30
  KL(model || data)                 5.99 ± 0.42      5.911 ± 0.14     4.67 ± 0.43
1/2 size of D compared to G
  Modes generated                   628.0 ± 140.9    523.6 ± 55.768   732.0 ± 44.98
  KL(model || data)                 2.58 ± 0.751     2.44 ± 0.26      1.66 ± 0.090

Table 1: Unrolled GANs cover more discrete modes when modeling a dataset with 1,000 data modes, corresponding to all combinations of three MNIST digits (10^3 digit combinations). The number of modes covered is given for different numbers of unrolling steps, and for two different architectures. The reverse KL divergence between model and data is also given. Standard error is provided for both measures.
3.3 MODE AND MANIFOLD COLLAPSE USING AUGMENTED MNIST
GANs suffer from two different types of model collapse: collapse to a subset of data modes, and collapse to a sub-manifold within the data distribution. In these experiments we isolate both effects using artificially constructed datasets, and demonstrate that unrolling can largely rescue both types of collapse.
3.3.1 DISCRETE MODE COLLAPSE
To explore the degree to which GANs drop discrete modes in a dataset, we use a technique similar to one from (Che et al., 2016). We construct a dataset by stacking three randomly chosen MNIST digits, so as to construct an RGB image with a different MNIST digit in each color channel. This new dataset has 1,000 distinct modes, corresponding to each combination of the ten MNIST classes in the three channels.
We train a GAN on this dataset, and generate samples from the trained model (25,600 samples for all experiments). We then compute the predicted class label of each color channel using a pre-trained MNIST classifier. To evaluate performance, we use two metrics: the number of modes for which the generator produced at least one sample, and the KL divergence between the model and the expected data distribution. Within this discrete label space, a KL divergence can be estimated tractably between the generated samples and the data distribution over classes, where the data distribution is a uniform distribution over all 1,000 classes.
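A sketch of how these two metrics can be computed from the classifier outputs (function and argument names are our assumptions):

```python
import numpy as np
from collections import Counter

def mode_stats(channel_labels, n_classes=10):
    # channel_labels: (N, 3) integer array, the predicted MNIST class
    # of each color channel for N generated samples.
    counts = Counter(map(tuple, channel_labels))
    p = np.array(list(counts.values()), dtype=np.float64)
    p /= p.sum()
    n_modes = len(counts)                      # modes hit at least once
    # KL(model || data), the data being uniform over n_classes**3 modes;
    # modes the model never produces contribute zero to the sum.
    kl = float(np.sum(p * np.log(p * n_classes ** 3)))
    return n_modes, kl
```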
As presented in Table 1, as the number of unrolling steps is increased, both mode coverage and reverse KL divergence improve. Contrary to (Che et al., 2016), we found that reasonably sized models (such as the one used in Section 3.4) covered all 1,000 modes even without unrolling. As such we use smaller convolutional GAN models. Details on the models used are provided in Appendix E.

We observe an additional interesting effect in this experiment. The benefits of unrolling increase as the discriminator size is reduced. We believe unrolling effectively increases the capacity of the discriminator. The unrolled discriminator can better react to any specific way in which the generator is producing non-data-like samples. When the discriminator is weak, the positive impact of unrolling is thus larger.
# 3.3.2 MANIFOLD COLLAPSE
In addition to discrete modes, we examine the effect of unrolling when modeling continuous manifolds. To get at this quantity, we constructed a dataset consisting of colored MNIST digits. Unlike in the previous experiment, a single MNIST digit was chosen, and then assigned a single monochromatic color. With a perfect generator, one should be able to recover the distribution of colors used to generate the digits. We use colored MNIST digits so that the generator also has to model the digits, which makes the task sufficiently complex that the generator is unable to perfectly solve it. The color of each digit is sampled from a 3D normal distribution. Details of this dataset are provided in Appendix F. We will examine the distribution of colors in the samples generated by the trained GAN. As will also be true in the CIFAR10 example in Section 3.4, the lack of diversity in generated colors is almost invisible using only visual inspection of the samples. Samples can be found in Appendix F.
                                     Unrolling steps
                                     0                 1                 5                 10
JS divergence with 1/4 layer size    0.073 ± 0.0058    0.142 ± 0.028     0.049 ± 0.0021    0.075 ± 0.012
JS divergence with 1/2 layer size    0.095 ± 0.011     0.119 ± 0.010     0.055 ± 0.0049    0.074 ± 0.016
JS divergence with 1/1 layer size    0.034 ± 0.0034    0.050 ± 0.0026    0.027 ± 0.0028    0.025 ± 0.00076

Table 2: Unrolled GANs better model a continuous distribution. GANs are trained to model randomly colored MNIST digits, where the color is drawn from a Gaussian distribution. The JS divergence between the data and model distributions over digit colors is then reported, along with standard error in the JS divergence. More unrolling steps, and larger models, lead to better JS divergence.
Figure 4: Visual perception of sample quality and diversity is very similar for models trained with different numbers of unrolling steps. Actual sample diversity is higher with more unrolling steps. Each pane shows samples generated after training a model on CIFAR10 with 0, 1, 5, and 10 steps of unrolling.
In order to recover the color the GAN assigned to the digit, we used k-means with 2 clusters to pick out the foreground color from the background. We then performed this transformation for both the training data and the generated images. Next we fit a Gaussian kernel density estimator to both distributions over digit colors. Finally, we computed the JS divergence between the model and data distributions over colors. Results can be found in Table 2 for several model sizes. Details of the models are provided in Appendix F.
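One possible instantiation of the final step of this pipeline, using SciPy's Gaussian KDE and a Monte Carlo estimate of the JS divergence (our own implementation choices, not necessarily those used for Table 2):

```python
import numpy as np
from scipy.stats import gaussian_kde

def js_divergence(colors_data, colors_model, n_mc=10000):
    # colors_*: (N, 3) arrays of extracted foreground digit colors.
    p = gaussian_kde(colors_data.T)
    q = gaussian_kde(colors_model.T)
    m = lambda x: 0.5 * (p(x) + q(x))   # mixture density
    xp = p.resample(n_mc)               # samples from p, shape (3, n_mc)
    xq = q.resample(n_mc)               # samples from q
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), by Monte Carlo.
    return 0.5 * np.mean(np.log(p(xp) / m(xp))) + \
           0.5 * np.mean(np.log(q(xq) / m(xq)))
```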
In general, the best performing models are unrolled for 5-10 steps, and larger models perform better than smaller models. Counter-intuitively, taking 1 unrolling step seems to hurt this measure of diversity. We suspect that this is due to it introducing oscillatory dynamics into training. Taking more unrolling steps however leads to improved performance with unrolling.
3.4 IMAGE MODELING OF CIFAR10

Here we test our technique on a more traditional convolutional GAN architecture and task, similar to those used in (Radford et al., 2015; Salimans et al., 2016). In the previous experiments we tested models where the standard GAN training algorithm would not converge. In this section we improve a standard model by reducing its tendency to engage in mode collapse. We ran 4 configurations of this model, varying the number of unrolling steps to be 0, 1, 5, or 10. Each configuration was run 5 times with different random seeds. For full training details see Appendix D. Samples from each of the 4 configurations can be found in Figure 4. There is no obvious difference in visual quality across these model configurations. Visual inspection however provides only a poor measure of sample diversity.
By training with an unrolled discriminator, we expect to generate more diverse samples which more closely resemble the underlying data distribution. We introduce two techniques to examine sample diversity: inference via optimization, and pairwise distance distributions.
Unrolling steps    Average MSE        Percent best rank
0 steps            0.0231 ± 0.0024    0.63%
1 step             0.0195 ± 0.0021    22.97%
5 steps            0.0200 ± 0.0023    15.31%
10 steps           0.0181 ± 0.0018    61.09%

Table 3: GANs trained with unrolling are better able to match images in the training set than standard GANs, likely due to mode dropping by the standard GAN. Results show the MSE between training images and the best reconstruction for a model with the given number of unrolling steps. The fraction of training images best reconstructed by a given model is given in the final column. The best reconstruction is found by optimizing the latent representation z to produce the closest matching pixel output G(z; θG). Results are averaged over all 5 runs of each model with different random seeds.
# 3.4.1 INFERENCE VIA OPTIMIZATION

Since likelihood cannot be tractably computed, over-fitting of GANs is typically tested by taking samples and computing the nearest-neighbor images in pixel space from the training data (Goodfellow et al., 2014). We will do the reverse, and measure the ability of the generative model to generate images that look like specific samples from the training data. If we did this by generating random samples from the model, we would need an exponentially large number of samples. We instead treat finding the nearest neighbor x_nearest to a target image x_target as an optimization task,
z_{\mathrm{nearest}} = \operatorname{argmin}_z \left\| G(z; \theta_G) - x_{\mathrm{target}} \right\|_2^2 \tag{13}

x_{\mathrm{nearest}} = G(z_{\mathrm{nearest}}; \theta_G). \tag{14}
This concept of backpropagating to generate images has been widely used in visualizing features from discriminative networks (Simonyan et al., 2013; Yosinski et al., 2015; Nguyen et al., 2016) and has been applied to explore the visual manifold of GANs in (Zhu et al., 2016).
We apply this technique to each of the models trained. We optimize with 3 random starts using LBFGS, which is the optimizer typically used in similar settings such as style transfer (Johnson et al., 2016; Champandard, 2016). Results comparing average mean squared errors between x_nearest and x_target in pixel space can be found in Table 3. In addition we compute the percent of images for which a certain configuration achieves the lowest loss when compared to the other configurations.
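A sketch of this inference-via-optimization loop using SciPy's L-BFGS; the generator and the gradient of the loss with respect to z (e.g. from an automatic differentiation framework) are assumed given, and the names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct(x_target, generator, loss_grad, z_dim, n_restarts=3):
    # Solve Eq. 13: find z minimizing ||G(z) - x_target||_2^2,
    # with several random restarts, then return x_nearest (Eq. 14).
    def loss(z):
        return float(np.sum((generator(z) - x_target) ** 2))
    best = None
    for _ in range(n_restarts):
        res = minimize(loss, np.random.randn(z_dim),
                       jac=lambda z: loss_grad(z, x_target),
                       method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return generator(best.x), best.fun
```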
In the zero step case, there is poor reconstruction and less than 1% of the time does it obtain the lowest error of the 4 configurations. Taking 1 unrolling step results in a significant improvement in MSE. Taking 10 unrolling steps results in more modest improvement, but continues to reduce the reconstruction MSE.

To visually see this, we compare the result of the optimization process for 0, 1, 5, and 10 step configurations in Figure 5. To select for images where differences in behavior are most apparent, we sort the data by the absolute value of the fractional difference in MSE between the 0 and 10 step models, |l_0step − l_10step| / (0.5 (l_0step + l_10step)). This highlights examples where either the 0 or 10 step model cannot accurately fit the data example but the other can. In Appendix G we show the same comparison for models initialized using different random seeds. Many of the zero step images are fuzzy and ill-defined, suggesting that these images cannot be generated by the standard GAN generative model, and come from a dropped mode. As more unrolling steps are added, the outlines become more clear and well defined: the model covers more of the distribution and thus can recreate these samples.
# 3.4.2 PAIRWISE DISTANCES
A second complementary approach is to compare statistics of data samples to the corresponding statistics for samples generated by the various models. One particularly simple and relevant statistic is the distribution over pairwise distances between random pairs of samples. In the case of mode collapse, greater probability mass will be concentrated in smaller volumes, and the distribution over inter-sample distances should be skewed towards smaller distances. We sample random pairs of images from each model, as well as from the training data, and compute histograms of the ℓ2 distances between those sample pairs. As illustrated in Figure 6, the standard GAN, with zero unrolling steps, has its probability mass skewed towards smaller ℓ2 intersample distances, compared
[Figure 5 column labels, repeated per block: Data, 0 step, 1 step, 5 step, 10 step]
Figure 5: Training set images are more accurately reconstructed using GANs trained with unrolling than by a standard (0 step) GAN, likely due to mode dropping by the standard GAN. Raw data is on the left, and the optimized images to reach this target follow for 0, 1, 5, and 10 unrolling steps. The reconstruction MSE is listed below each sample. A random 1280 images were selected from the training set, and corresponding best reconstructions for each model were found via optimization. Shown here are the eight images with the largest absolute fractional difference between GANs trained with 0 and 10 unrolling steps.
to real data. As the number of unrolling steps is increased, the histograms over intersample distances increasingly come to resemble that for the data distribution. This is further evidence in support of unrolling decreasing the mode collapse behavior of GANs.
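Computing the pairwise distance statistic itself takes only a few lines; the helper below is our own sketch, not the paper's code:

```python
import numpy as np

def pairwise_l2_hist(samples, n_pairs=10000, bins=50, rng=None):
    # samples: (N, D) array of flattened images (data or model samples).
    rng = rng or np.random.default_rng()
    i = rng.integers(0, len(samples), size=n_pairs)
    j = rng.integers(0, len(samples), size=n_pairs)
    dists = np.linalg.norm(samples[i] - samples[j], axis=1)
    return np.histogram(dists, bins=bins, density=True)
```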
# 4 DISCUSSION
In this work we developed a method to stabilize GAN training and reduce mode collapse by defining the generator objective with respect to unrolled optimization of the discriminator. We then demonstrated the application of this method to several tasks, where it either rescued unstable training, or reduced the tendency of the model to drop regions of the data distribution.

The main drawback to this method is the computational cost of each training step, which increases linearly with the number of unrolling steps. There is a tradeoff between better approximating the true generator loss and the computation required to make this estimate. Depending on the architecture, one unrolling step can be enough. In other more unstable models, such as the RNN case, more are needed to stabilize training. We have some initial positive results suggesting it may be sufficient to further perturb the training gradient in the same direction that a single unrolling step perturbs it. While this is more computationally efficient, further investigation is required.

The method presented here bridges some of the gap between theoretical and practical results for training of GANs. We believe developing better update rules for the generator and discriminator is an important line of work for GAN training. In this work we have only considered a small fraction of the design space. For instance, the approach could be extended to unroll G when updating D as well, letting the discriminator react to how the generator would move. It is also possible to unroll sequences of G and D updates. This would make updates that are recursive: G could react to maximize performance as if G and D had already updated.
# ACKNOWLEDGMENTS
We would like to thank Laurent Dinh, David Dohan, Vincent Dumoulin, Liam Fedus, Ishaan Gulrajani, Julian Ibarz, Eric Jang, Matthew Johnson, Marc Lanctot, Augustus Odena, Gabriel Pereyra,
[Figure 6: pairwise L2 norm distributions, with panels for models trained with 1, 5, and 10 unrolling steps, each compared against the data]

Figure 6: As the number of unrolling steps in GAN training is increased, the distribution of pairwise distances between model samples more closely resembles the same distribution for the data. Here we plot histograms of pairwise distances between randomly selected samples. The red line gives pairwise distances in the data, while each of the five blue lines in each plot represents a model trained with a different random seed. The vertical lines are the medians of each distribution.
Colin Raffel, Sam Schoenholz, Ayush Sekhari, Jon Shlens, and Dale Schuurmans for insightful conversation, as well as the rest of the Google Brain Team.
# REFERENCES
Guillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Eric Thibodeau-Laufer, Saizheng Zhang, and Pascal Vincent. GSNs: generative stochastic networks. arXiv preprint arXiv:1503.05571, 2015.
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.
David Belanger and Andrew McCallum. Structured prediction energy networks. arXiv preprint arXiv:1511.06350, 2015.
Jorg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional helmholtz machines. arXiv preprint arXiv:1506.03877, 2015.
Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate. Artificial Intelligence, 136(2):215–250, 2002.

Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.

Alex J. Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016.

Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. arXiv preprint arXiv:1612.02136, 2016.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
John M Danskin. The theory of max-min and its application to weapons allocation problems, vol- ume 5. Springer Science & Business Media, 1967.
Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine. Neural computation, 7(5):889â904, 1995.
Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. arXiv preprint arXiv:1605.08803, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), volume 9, pp. 249–256, May 2010.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672–2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.

Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1462–1471, 2015. URL http://www.jmlr.org/proceedings/papers/v37/gregor15.html.
Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. Alternating back-propagation for generator network, 2016. URL https://arxiv.org/abs/1606.08571.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448–456, 2015. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html.
Justin Johnson, Alexandre Alahi, and Fei-Fei Li. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.
Anatoli Juditsky, Arkadi Nemirovski, et al. First order methods for nonsmooth convex large-scale optimization, i: general purpose methods. Optimization for Machine Learning, pp. 121â148, 2011.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2013. URL https://arxiv.org/abs/1312.6114.

Diederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. 2016.
Tejas D. Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network. arXiv preprint arXiv:1503.03167, 2015.
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network, 2016. URL https://arxiv.org/abs/1609.04802.

Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimization through reversible learning, 2015.
Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. arXiv preprint arXiv:1605.09304, 2016.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.

Barak A. Pearlmutter and Jeffrey Mark Siskind. Reverse-mode AD in a functional framework: Lambda the ultimate backpropagator. ACM Trans. Program. Lang. Syst., 30(2):7:1–7:36, March 2008. ISSN 0164-0925. doi: 10.1145/1330017.1330018. URL http://doi.acm.org/10.1145/1330017.1330018.
Ben Poole, Alexander A Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. Improved generator objectives for gans. arXiv preprint arXiv:1612.02780, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. In NIPS, 2016a.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text-to-image synthesis. In Proceedings of The 33rd International Conference on Machine Learning, 2016b.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning. Citeseer, 2014.
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.

Satinder Singh, Michael Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in general-sum games. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence, pp. 541–548. Morgan Kaufmann Publishers Inc., 2000.

Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of The 32nd International Conference on Machine Learning, pp. 2256–2265, 2015. URL http://arxiv.org/abs/1503.03585.
Casper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortised MAP inference for image super-resolution, 2016. URL https://arxiv.org/abs/1610.04490v1.

L. Theis and M. Bethge. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems 28, Dec 2015. URL http://arxiv.org/abs/1506.03478.

L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In International Conference on Learning Representations, Apr 2016. URL http://arxiv.org/abs/1511.01844.
T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, abs/1601.06759, 2016a. URL http://arxiv.org/abs/1601.06759.

Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprint arXiv:1606.05328, 2016b.
Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371â3408, December 2010. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1756006.1953039.
Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.
Chongjie Zhang and Victor R Lesser. Multi-agent learning with policy prediction. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016.
# Appendix
# A 2D GAUSSIAN TRAINING DETAILS
Network architecture and experimental details for the experiment in Section 3.1 are as follows:
The dataset is sampled from a mixture of 8 Gaussians of standard deviation 0.02. The means are equally spaced around a circle of radius 2.
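This data distribution can be sampled in a few lines (a sketch consistent with the description above; the function name is ours):

```python
import numpy as np

def sample_mixture(batch_size, std=0.02, radius=2.0, n_modes=8):
    # 8 Gaussians with the given std, means equally spaced on a circle.
    k = np.random.randint(0, n_modes, size=batch_size)
    angles = 2.0 * np.pi * k / n_modes
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return means + std * np.random.randn(batch_size, 2)
```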
The generator network consists of a fully connected network with 2 hidden layers of size 128 with relu activations followed by a linear projection to 2 dimensions. All weights are initialized to be orthogonal with scaling of 0.8.
The discriminator network first scales its input down by a factor of 4 (to roughly scale to (-1,1)), followed by a 1 layer fully connected network with relu activations, followed by a linear layer of size 1 to act as the logit.
The generator minimizes LG = log(D(x)) + log(1 â D(G(z))) and the discriminator minimizes LD = âlog(D(x)) â log(1 â D(G(z))) where x is sampled from the data distribution and z â¼ N (0, I256). Both networks are optimized using Adam (Kingma & Ba, 2014) with a learning rate of 1e-4 and β1=0.5.
The network is trained by alternating updates of the generator and the discriminator. One step consists of either G or D updating.
# B MORE MIXTURE OF GAUSSIAN EXPERIMENTS
B.1 EFFECTS OF TIME DELAY / HISTORICAL AVERAGING
Another comparison we looked at was with regard to historical averaging based approaches. Recently, similarly inspired approaches have been used in (Salimans et al., 2016) to stabilize training. For our study, we looked at taking an ensemble of discriminators over time.
First, we looked at taking an ensemble of the last N steps, as shown in Figure App.1.
Figure App.1: Historical averaging does not visibly increase stability on the mixture of Gaussians task. Each row corresponds to an ensemble of discriminators which consists of the indicated number of immediately preceding discriminators. The columns correspond to different numbers of training steps.
To further explore this idea, we ran experiments with an ensemble of 5 discriminators, but with different periods between replacing discriminators in the ensemble. For example, if we sample at a rate of 100, it would take 500 steps to replace all 5 discriminators. Results can be seen in Figure App.2.

We observe that given longer and longer time delays, the model becomes less and less stable. We hypothesize that this is due to the initial shape of the discriminator loss surface. When training, the discriminator's estimates of probability densities are only accurate on regions where it was trained. When fixing this discriminator, we are removing the feedback between the generator exploitation
Figure App.2: Introducing longer time delays between the discriminator ensemble results in instability and probability distributions that are not in the window being visualized. The x axis is the number of weight updates and the y axis is how many steps to skip between discriminator updates when selecting the ensemble of 5 discriminators.
and the discriminator's ability to move. As a result, the generator is able to exploit these fixed areas of poor performance for older discriminators in the ensemble. New discriminators (over)compensate for this, leading the system to diverge.
B.2 EFFECTS OF THE SECOND GRADIENT
A second factor we analyzed is the effect of backpropagating the learning signal through the unrolling in Equation 12. We can turn on or off this backpropagation through the unrolling by introducing stop gradient calls into our computation graph between each unrolling step. With the stop gradient in place, the update signal corresponds only to the first term in Equation 12. We looked at 3 configurations: without stop gradients (vanilla unrolled GAN); with stop gradients; and with stop gradients but taking the average over the k unrolling steps instead of taking the final value. Results can be seen in Figure App.3.
We initially observed no difference between unrolling with and without the second gradient, as both required 3 unrolling steps to become stable. When the discriminator is unrolled to convergence, the second gradient term becomes zero. Due to the simplicity of the problem, we suspect that the discriminator nearly converged for every generator step, and the second gradient term was thus irrelevant.
To test this, we modified the dynamics to perform five generator steps for each discriminator update. Results are shown in Figure App.4. With the discriminator now kept out of equilibrium, successful training can be achieved with half as many unrolling steps when using both terms in the gradient than when only including the first term.
# C RNN MNIST TRAINING DETAILS
The network architecture for the experiment in Section 3.2 is as follows:
The MNIST dataset is scaled to [-1, 1).
The generator ï¬rst scales the 256D noise vector through a 256 unit fully connected layer with relu activation. This is then fed into the initial state of a 256D LSTM(Hochreiter & Schmidhuber, 1997) that runs 28 steps corresponding to the number of columns in MNIST. The resulting sequence of ac- tivations is projected through a fully connected layer with 28 outputs with a tanh activation function. All weights are initialized via the âXavierâ initialization (Glorot & Bengio, 2010). The forget bias on the LSTM is initialized to 1.
The discriminator network feeds the input into a Convolution(16, stride=2), followed by a Convolution(32, stride=2), followed by a Convolution(32, stride=2). As in (Radford et al., 2015), leaky rectifiers are used with a 0.3 leak. Batch normalization is applied after each layer (Ioffe & Szegedy, 2015). The resulting 4D tensor is then flattened and a linear projection is performed to a single scalar.
[Figure App.3: two panels, "Unrolled GAN" and "Unrolled GAN without second gradient"; x axis: update steps, y axis: number of unrolling steps.]
Figure App.3: If the discriminator remains nearly at its optimum during learning, then performance is nearly identical with and without the second gradient term in Equation 12. As shown in Figure App.4, when the discriminator lags behind the generator, backpropagating through unrolling aids convergence.
The generator network minimizes LG = log(D(G(z))) and the discriminator minimizes LD = log(D(x)) + log(1 - D(G(z))). Both networks are trained with Adam (Kingma & Ba, 2014) with learning rates of 1e-4 and beta1 = 0.5. The networks are trained by alternating updates of the generator and the discriminator for 150k steps, where one step consists of a single network update.
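Tying this together, one training step under this schedule might look like the following optax sketch. The original experiments used TensorFlow, so this is an illustration, not the paper's code; `unrolled_g_loss`, `d_loss` and `g_loss` are the assumed helpers from the sketch above, and the optimizer states are created with `g_opt.init(g_params)` and `d_opt.init(d_params)`.

```python
import jax
import optax

g_opt = optax.adam(1e-4, b1=0.5)  # hyperparameters stated above
d_opt = optax.adam(1e-4, b1=0.5)

def train_step(g_params, d_params, g_state, d_state, batch):
    # One generator update through the unrolled discriminator...
    g_grads = jax.grad(unrolled_g_loss)(g_params, d_params, batch, d_loss, g_loss)
    g_updates, g_state = g_opt.update(g_grads, g_state)
    g_params = optax.apply_updates(g_params, g_updates)
    # ...then one discriminator update, following the sign conventions above.
    d_grads = jax.grad(d_loss)(d_params, g_params, batch)
    d_updates, d_state = d_opt.update(d_grads, d_state)
    d_params = optax.apply_updates(d_params, d_updates)
    return g_params, d_params, g_state, d_state
```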
# D CIFAR10/MNIST TRAINING DETAILS
The network architectures for the discriminator, generator, and encoder are as follows. All convolutions have a kernel size of 3x3 with batch normalization. The discriminator uses leaky ReLUs with a 0.3 leak and the generator uses standard ReLU.
The generator network is defined as:
| Layer | Number of outputs | Stride |
| --- | --- | --- |
| Input: z ~ N(0, I_256) | | |
| Fully connected | 4 * 4 * 512 | |
| Reshape to image 4x4x512 | | |
| Transposed convolution | 256 | 2 |
| Transposed convolution | 128 | 2 |
| Transposed convolution | 64 | 2 |
| Convolution | 1 or 3 | 1 |
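Read row by row, the table corresponds to a network along the following lines. This is a flax sketch for illustration only; batch normalization and the exact placement of nonlinearities are omitted or assumed, and `out_channels` selects between the MNIST and CIFAR10 variants.

```python
import flax.linen as nn

class Generator(nn.Module):
    out_channels: int = 3  # 1 for MNIST, 3 for CIFAR10

    @nn.compact
    def __call__(self, z):                        # z: (batch, 256)
        x = nn.Dense(4 * 4 * 512)(z)
        x = x.reshape((-1, 4, 4, 512))            # "reshape to image 4,4,512"
        for features in (256, 128, 64):
            x = nn.ConvTranspose(features, (3, 3), strides=(2, 2))(x)
            x = nn.relu(x)                        # 4x4 -> 8x8 -> 16x16 -> 32x32
        return nn.Conv(self.out_channels, (3, 3))(x)  # final stride-1 convolution
```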
[Figure App.4: two panels, "Unrolled GAN with 5 G steps per D" and "Unrolled GAN with 5 G steps per D without second gradient"; x axis: update steps, y axis: number of unrolling steps.]
Figure App.4: Backpropagating through the unrolling process aids convergence when the discriminator does not fully converge between generator updates. When taking 5 generator steps per discriminator step, unrolling greatly increases stability, requiring only 5 unrolling steps to converge. Without the second gradient it requires 10 unrolling steps. Also see Figure App.3.
The discriminator network is defined as:
| Layer | Number of outputs | Stride |
| --- | --- | --- |
| Input: x ~ p_data or G | | |
| Convolution | 64 | 2 |
| Convolution | 128 | 2 |
| Convolution | 256 | 2 |
| Flatten | | |
| Fully connected | 1 | |
The generator network minimizes LG = log(D(G(z))) and the discriminator minimizes LD = log(D(x)) + log(1 - D(G(z))). The networks are trained with Adam with a generator learning rate of 1e-4 and a discriminator learning rate of 2e-4. The networks are trained by alternating updates of the generator and the discriminator for 100k steps, where one step consists of a single network update.
E 1000 CLASS MNIST
| Layer | Number of outputs | Stride |
| --- | --- | --- |
| Input: z ~ N(0, I_256) | | |
| Fully connected | 4 * 4 * 64 | |
| Reshape to image 4x4x64 | | |
| Transposed convolution | 32 | 2 |
| Transposed convolution | 16 | 2 |
| Transposed convolution | 8 | 2 |
| Convolution | 3 | 1 |
The discriminator network is parametrized by a size X and is defined as follows. In our tests, we used X values of 1/4 and 1/2.
| Layer | Number of outputs | Stride |
| --- | --- | --- |
| Input: x ~ p_data or G | | |
| Convolution | 8*X | 2 |
| Convolution | 16*X | 2 |
| Convolution | 32*X | 2 |
| Flatten | | |
| Fully connected | 1 | |
F COLORED MNIST DATASET
F.1 DATASET
To generate this dataset we first took an MNIST digit, I, scaled between 0 and 1. For each image we sampled a color, C, normally distributed with mean 0 and std 0.5. To generate a colored digit between (-1, 1) we compute I * C + (I - 1). Finally, we add a small amount of pixel-independent noise sampled from a normal distribution with std 0.2, and the resulting values are clipped to (-1, 1). When visualized, this generates the images and samples shown in Figure App.5. Once again it is very hard to visually see differences in sample diversity when comparing the 128 and the 512 sized models.
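The construction can be reproduced with a few lines of numpy. This is a direct sketch of the recipe above; the color is broadcast across pixels, and the noise and clipping match the stated parameters.

```python
import numpy as np

def colored_mnist(digits, rng):
    """digits: float array in [0, 1] with shape (N, 28, 28, 1)."""
    c = rng.normal(0.0, 0.5, size=(len(digits), 1, 1, 3))  # one color per image
    img = digits * c + (digits - 1.0)                      # colored digit in (-1, 1)
    img += rng.normal(0.0, 0.2, size=img.shape)            # pixel-independent noise
    return np.clip(img, -1.0, 1.0)

samples = colored_mnist(np.random.rand(8, 28, 28, 1), np.random.default_rng(0))
```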
Figure App.5: Right: samples from the data distribution. Middle: Samples from 1/4 size model with 0 look ahead steps (worst diversity). Left: Samples from 1/1 size model with 10 look ahead steps (most diversity).
F.2 MODELS
The models used in this section are parametrized by a variable X that controls capacity. A value of X=1 gives the same architecture used in the CIFAR10 experiments. We used values of 1/4, 1/2 and 1.
The generator network is defined as:
| Layer | Number of outputs | Stride |
| --- | --- | --- |
| Input: z ~ N(0, I_256) | | |
| Fully connected | 4 * 4 * 512*X | |
| Reshape to image 4x4x(512*X) | | |
| Transposed convolution | 256*X | 2 |
| Transposed convolution | 128*X | 2 |
| Transposed convolution | 64*X | 2 |
| Convolution | 3 | 1 |
The discriminator network is defined as:
| Layer | Number of outputs | Stride |
| --- | --- | --- |
| Input: x ~ p_data or G | | |
| Convolution | 64*X | 2 |
| Convolution | 128*X | 2 |
| Convolution | 256*X | 2 |
| Flatten | | |
| Fully connected | 1 | |
# G OPTIMIZATION BASED VISUALIZATIONS
This section gives more examples of optimization-based visualizations. We performed 5 runs with different random seeds for each unrolling-steps configuration. Below are comparisons for each run index. Ideally this would be a many-to-many comparison, but for space efficiency we grouped the runs by the index in which they were run.
Figure App.6: Samples from run index 1 of 5 (each run uses a different random seed).
Figure App.7: Samples from run index 2 of 5 (each run uses a different random seed).
Figure App.8: Samples from run index 3 of 5 (each run uses a different random seed).
Figure App.9: Samples from run index 4 of 5 (each run uses a different random seed).
Figure App.10: Samples from run index 5 of 5 (each run uses a different random seed).
| {
"id": "1511.06350"
} |
1611.02205 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 |
# PLAYING SNES IN THE RETRO LEARNING ENVIRONMENT
Nadav Bhonker*, Shai Rozenberg* and Itay Hubara Department of Electrical Engineering Technion, Israel Institute of Technology (*) indicates equal contribution {nadavbh,shairoz}@tx.technion.ac.il itayhubara@gmail.com
# ABSTRACT
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment (RLE), that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
# 1 INTRODUCTION
Controlling artificial agents using only raw high-dimensional input data such as image or sound is a difficult and important task in the field of Reinforcement Learning (RL). Recent breakthroughs in the field allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartz et al., 2016), navigation (Bischoff et al., 2013) and more. Agent interaction with the real world is usually either expensive or not feasible, as the real world is far too complex for the agent to perceive. Therefore in practice the interaction is simulated by a virtual environment which receives feedback on a decision made by the algorithm. Traditionally, games were used as an RL environment, dating back to Chess (Campbell et al., 2002), Checkers (Schaeffer et al., 1992), backgammon (Tesauro, 1995) and the more recent Go (Silver et al., 2016). Modern games often present problems and tasks which are highly correlated with real-world problems. For example, an agent that masters a racing game, by observing a simulated driver's view screen as input, may be useful for the development of an autonomous driver. For high-dimensional input, the leading benchmark is the Arcade Learning Environment (ALE) (Bellemare et al., 2013), which provides a common interface to dozens of Atari 2600 games, each presenting a different challenge. ALE provides an extensive benchmarking platform, allowing a controlled experiment setup for algorithm evaluation and comparison. The main challenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achieving a score higher than an expert human player) without providing the algorithm any game-specific information (i.e., using the same input available to a human: the game screen and score). A key work to tackle this problem is the Deep Q-Networks algorithm (Mnih et al., 2015), which made a breakthrough in the field of Deep Reinforcement Learning by achieving human-level performance on 29 out of 49 games. In this work we present a new environment, the Retro Learning Environment (RLE). RLE sets new challenges by providing a unified interface for Atari 2600 games as well as more advanced gaming consoles. As a start we focused on the Super Nintendo Entertainment
System (SNES). Out of the five SNES games we tested using state-of-the-art algorithms, only in one was a trained agent able to outperform an expert human player. As an additional feature, RLE supports research of multi-agent reinforcement learning (MARL) tasks (Buşoniu et al., 2010). We utilize this feature by training and evaluating the agents against each other, rather than against a pre-configured in-game AI. We conducted several experiments with this new feature and discovered that agents tend to learn how to overcome their current opponent rather than generalize the game being played. However, if an agent is trained against an ensemble of different opponents, its robustness increases. The main contributions of the paper are as follows:
⢠Introducing a novel RL environment with signiï¬cant challenges and an easy agent evalu- ation technique (enabling agents to compete against each other) which could lead to new and more advanced RL algorithms.
⢠A new method to train an agent by enabling it to train against several opponents, making the ï¬nal policy more robust.
⢠Encapsulating several different challenges to a single RL environment.
2 RELATED WORK
2.1 ARCADE LEARNING ENVIRONMENT
The Arcade Learning Environment is a software framework designed for the development of RL algorithms by playing Atari 2600 games. The interface provided by ALE allows the algorithms to select an action and receive the Atari screen and a reward at every step. The action is equivalent to a human's joystick button combination, and the reward is the difference between the scores at time stamps t and t-1. The diversity of games for Atari provides a solid benchmark, since different games have significantly different goals. Atari 2600 has over 500 games; currently over 70 of them are implemented in ALE and are commonly used for algorithm comparison.
2.2 INFINITE MARIO
Infinite Mario (Togelius et al., 2009) is a remake of the classic Super Mario game in which levels are randomly generated. On these levels the Mario AI Competition was held. During the competition, several algorithms were trained on Infinite Mario and their performances were measured in terms of the number of stages completed. As opposed to ALE, training is not based on the raw screen data but rather on an indication of Mario's (the player's) location and objects in its surroundings. This environment no longer poses a challenge for state-of-the-art algorithms. Its main shortcoming lies in the fact that it provides only a single game to be learnt. Additionally, the environment provides hand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the use of planning algorithms that highly outperform any learning-based algorithm.
2.3 OPENAI GYM
The OpenAI Gym (Brockman et al., 2016) is an open source platform with the purpose of creating an interface between RL environments and algorithms for evaluation and comparison purposes. OpenAI Gym is currently very popular due to the large number of environments it supports, for example ALE, Go, MountainCar and VizDoom (Zhu et al., 2016), an environment for learning the 3D first-person-shooter game "Doom". OpenAI Gym's recent appearance and wide usage indicate the growing interest and research done in the field of RL.
2.4 OPENAI UNIVERSE
Universe (Universe, 2016) is a platform within the OpenAI framework in which RL algorithms can train on over a thousand games. Universe includes very advanced games such as GTA V and Portal, as well as other tasks (e.g. browser tasks). Unlike RLE, Universe doesn't run the games locally and requires a VNC interface to a server that runs the games. This leads to a lower frame rate and thus longer training times.
2.5 MALMO
Malmo (Johnson et al., 2016) is an artificial intelligence experimentation platform for the famous game "Minecraft". Although Malmo consists of only a single game, it presents numerous challenges since the "Minecraft" game can be configured differently each time. The input to the RL algorithms includes specific features indicating the "state" of the game and the current reward.
2.6 DEEPMIND LAB
DeepMind Lab (Beattie et al., 2016) is a first-person 3D platform environment which allows training RL algorithms on several different challenges: static/random map navigation, collecting fruit (a form of reward) and a laser-tag challenge where the objective is to tag opponents controlled by the in-game AI. In Lab the agent observations are the game screen (with an additional depth channel) and the velocity of the character. Lab supports four games (one game with four different modes).
2.7 DEEP Q-LEARNING
In our work, we used several variants of the Deep Q-Network algorithm (DQN) (Mnih et al., 2015), an RL algorithm whose goal is to find an optimal policy (i.e., given a current state, choose the action that maximizes the final score). The state of the game is simply the game screen, and the action is a combination of joystick buttons that the game responds to (i.e., moving, jumping). DQN learns through trial and error while trying to estimate the "Q-function", which predicts the cumulative discounted reward at the end of the episode given the current state and action while following a policy π. The Q-function is represented using a convolutional neural network that receives the screen as input and predicts the best possible action at its output. The Q-function weights θ are updated according to:
$$\theta_{t+1} = \theta_t + \alpha\big(R_{t+1} + \gamma \max_{a} Q_t(s_{t+1}, a; \theta') - Q_t(s_t, a_t; \theta_t)\big)\,\nabla_\theta Q_t(s_t, a_t; \theta_t), \qquad (1)$$

where $s_t$ and $s_{t+1}$ are the current and next states, $a_t$ is the action chosen, $\alpha$ is the step size, $\gamma$ is the discount factor, and $R_{t+1}$ is the reward received by applying $a_t$ at $s_t$. $\theta'$ represents the previous weights of the network, which are updated periodically. Other than DQN, we examined two leading algorithms on the RLE: Double Deep Q-Learning (D-DQN) (Van Hasselt et al., 2015), a DQN-based algorithm with a modified network update rule, and Dueling Double DQN (Wang et al., 2015), a modification of D-DQN's architecture in which the Q-function is modeled using a state (screen) dependent estimator and an action dependent estimator.
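As a minimal illustration of Equation (1), the Bellman target driving the update can be computed as follows. This is a numpy sketch only; the actual agents add replay memory, minibatching and periodic target-network updates.

```python
import numpy as np

def q_target(reward, next_q_values, gamma=0.99, terminal=False):
    """Bellman target: R_{t+1} + gamma * max_a Q(s_{t+1}, a; theta')."""
    return reward if terminal else reward + gamma * np.max(next_q_values)
```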
3 THE RETRO LEARNING ENVIRONMENT
3.1 SUPER NINTENDO ENTERTAINMENT SYSTEM
The Super Nintendo Entertainment System (SNES) is a home video game console developed by Nintendo and released in 1990. A total of 783 games were released, among them, the iconic Super Mario World, Donkey Kong Country and The Legend of Zelda. Table (1) presents a comparison between Atari 2600, Sega Genesis and SNES game consoles, from which it is clear that SNES and Genesis games are far more complex.
3.2 IMPLEMENTATION
To allow easier integration with current platforms and algorithms, we based our environment on the ALE, with the aim of maintaining as much of its interface as possible. While the ALE is highly coupled with the Atari emulator, Stella1, RLE takes a different approach and separates the learning environment from the emulator. This was achieved by incorporating an interface named LibRetro (libRetro site), that allows communication between front-end programs and game-console emulators. Currently, LibRetro supports over 15 game consoles, each containing hundreds of games, at an estimated total of over 7,000 games that can potentially be supported using this interface. Examples of supported game consoles include the Nintendo Entertainment System, Game Boy, N64, Sega Genesis,
# 1http://stella.sourceforge.net/
Saturn, Dreamcast and Sony PlayStation. We chose to focus on the SNES game console, implemented using snes9x2, as its games present interesting, yet plausible to overcome, challenges. Additionally, we utilized the Genesis-Plus-GX3 emulator, which supports several Sega consoles: Genesis/Mega Drive, Master System, Game Gear and SG-1000.
3.3 SOURCE CODE
RLE is fully available as open source software for use under GNU's General Public License4. The environment is implemented in C++ with an interface to algorithms in C++, Python and Lua. Adding a new game to the environment is a relatively simple process.
# 3.4 RLE INTERFACE
RLE provides a unified interface to all games in its supported consoles, acting as an RL wrapper around the LibRetro interface. Initialization of the environment is done by providing a game (ROM file) and a gaming console (denoted by "core"). Upon initialization, the first state is the initial frame of the game, skipping all menu selection screens. The cores are provided with the RLE and installed together with the environment. Actions have a bit-wise representation where each controller button is represented by a one-hot vector; a combination of several buttons is therefore possible using the bit-wise OR operator. The number of valid button combinations is larger than 700, therefore only the meaningful combinations are provided. The environment's observation is the game screen, provided as a 3D array of 32 bits per pixel with dimensions which vary depending on the game. The reward can be defined differently per game; usually we set it to be the score difference between two consecutive frames. By setting different configurations of the environment, it is possible to alter in-game properties such as difficulty (i.e. easy, medium, hard), characters, levels, etc.
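A typical interaction might therefore look like the sketch below. The binding and constant names are assumptions chosen to mirror ALE's Python interface, which RLE aims to maintain; the exact RLE API may differ.

```python
# Assumed Python bindings mirroring ALE's interface; the module name, ROM file
# and button bit positions below are illustrative, not the exact RLE API.
from rle_python_interface import RLEInterface

rle = RLEInterface()
rle.loadROM('mortal_kombat.sfc', 'snes')   # game ROM plus console core

BUTTON_RIGHT, BUTTON_B = 1 << 7, 1 << 0    # each button is a one-hot bit
action = BUTTON_RIGHT | BUTTON_B           # combine buttons with bitwise OR

total_reward = 0
while not rle.game_over():
    total_reward += rle.act(action)        # reward: score difference per step
    screen = rle.getScreenRGB()            # observation: the raw game screen
```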
Table 1: Atari 2600, SNES and Genesis comparison

| | Atari 2600 | SNES | Genesis |
| --- | --- | --- | --- |
| Number of games | 565 | 783 | 928 |
| CPU speed | 1.19 MHz | 3.58 MHz | 7.6 MHz |
| ROM size | 2-4 KB | 0.5-6 MB | 16 MB |
| RAM size | 128 bytes | 128 KB | 72 KB |
| Color depth | 8 bit | 16 bit | 16 bit |
| Screen size | 160x210 | 256x224 or 512x448 | 320x224 |
| Number of controller buttons | 5 | 12 | 11 |
| Possible button combinations | 18 | over 720 | over 100 |
3.5 ENVIRONMENT CHALLENGES
Integrating SNES and Genesis with RLE presents new challenges to the field of RL, where visual information in the form of an image is the only state available to the agent. Obviously, SNES games are significantly more complex and unpredictable than Atari games. For example in sports games, such as NBA, while the player (agent) controls a single player, all the other nine players' behavior is determined by pre-programmed agents, each exhibiting random behavior. In addition, many SNES games exhibit delayed rewards in the course of their play (i.e., reward for an action is given many time steps after it was performed). Similarly, in some of the SNES games, an agent can obtain a reward that is indirectly related to the imposed task. For example, in platform games, such as Super Mario, reward is received for collecting coins and defeating enemies, while the goal of the challenge is to reach the end of the level, which requires the player to keep moving to the right. Moreover, upon completing a level, a score bonus is given according to the time required for its completion. Therefore collecting coins or defeating enemies is not necessarily preferable if it consumes too much time. Analysis of such games is presented in Section 4.2. Moreover, unlike Atari, which consists of
# 2http://www.snes9x.com/ 3https://github.com/ekeeke/Genesis-Plus-GX 4https://github.com/nadavbh12/Retro-Learning-Environment
eight directions and one action button, SNES has an eight-direction pad and six action buttons. Since combinations of buttons are allowed, and required at times, the actual action space may be larger than 700, compared to the maximum of 18 actions in Atari. Furthermore, the background in SNES is very rich, filled with details which may move locally or across the screen, effectively acting as non-stationary noise since it provides little to no information regarding the state itself. Finally, we note that the SNES featured some of the first 3D games. In the game Wolfenstein, the player must navigate a maze from a first-person perspective, while dodging and attacking enemies. The SNES offers plenty of other 3D games, such as flight and racing games, which exhibit similar challenges. These games are much more realistic, thus inferring from SNES games to "real world" tasks, as in the case of self-driving cars, might be more beneficial. A visual comparison of two games, Atari and SNES, is presented in Figure (1).
Figure 1: Atari 2600 and SNES game screen comparison. Left: "Boxing", an Atari 2600 fighting game; Right: "Mortal Kombat", a SNES fighting game. Note the exceptional difference in the amount of detail between the two games. Therefore, distinguishing a relevant signal from noise is much more difficult.
Table 2: Comparison between RLE and the latest RL environments
| Characteristics | RLE | OpenAI Universe | Infinite Mario | ALE | Project Malmo | DeepMind Lab |
| --- | --- | --- | --- | --- | --- | --- |
| Number of games | 8 out of 7000+ | 1000+ | 1 | 74 | 1 | 4 |
| In-game adjustments1 | Yes | No | No | No | Yes | Yes |
| Frame rate | 530 fps2 (SNES) | 60 fps | 5675 fps2 | 120 fps | <7000 fps | <1000 fps |
| Observation (input) | screen, RAM | screen | hand-crafted features | screen, RAM | hand-crafted features | screen + depth and velocity |
1 Allowing changes in the game configurations (e.g., changing difficulty, characters, etc.)
2 Measured on an i7-5930k CPU
4 EXPERIMENTS
4.1 EVALUATION METHODOLOGY
The evaluation methodology that we used for benchmarking the different algorithms is the popular method proposed by Mnih et al. (2015). Each examined algorithm is trained until either it reaches convergence or 100 epochs (each epoch corresponds to 50,000 actions), and is thereafter evaluated by performing 30 episodes of every game. Each episode ends either by reaching a terminal state or after 5 minutes. The results are averaged per game and compared to the average result of a human player. For each game the human player was given two hours of training, and his performance was evaluated over 20 episodes. As the various algorithms don't use the game audio in the learning process, the audio was muted for both the agent and the human. From both the humans' and the agents'
scores, a random agent's score (an agent performing actions randomly) was subtracted, to assure that learning indeed occurred. It is important to note that DQN's ε-greedy approach (select a random action with a small probability ε) is present during testing, thus assuring that the same sequence of actions isn't repeated. While the screen dimensions in SNES are larger than those of Atari, in our experiments we maintained the same pre-processing as DQN (i.e., downscaling the image to 84x84 pixels and converting it to gray-scale). We argue that downscaling the image doesn't affect a human's ability to play the game, and it is therefore suitable for RL algorithms as well. To handle the large action space, we limited the algorithm's actions to the minimal button combinations which provide unique behavior. For example, in many games the R and L action buttons don't have any use, therefore they and their combinations were omitted.
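The screen preprocessing described here is simple to reproduce. This is a sketch using numpy and PIL; the exact luminance weights and resampling filter used in the experiments are assumptions.

```python
import numpy as np
from PIL import Image

def preprocess(screen_rgb):
    """DQN-style preprocessing: convert to gray-scale and downscale to 84x84."""
    gray = np.dot(screen_rgb[..., :3], [0.299, 0.587, 0.114])  # luminance
    small = Image.fromarray(gray.astype(np.uint8)).resize((84, 84), Image.BILINEAR)
    return np.asarray(small, dtype=np.float32) / 255.0
```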
4.1.1 RESULTS
A thorough comparison of the four different agents' performances on SNES games can be seen in Figure 2; the full results can be found in Table (3). Only in the game Mortal Kombat was a trained agent able to surpass an expert human player's performance, as opposed to Atari games, where the same algorithms have surpassed a human player on the vast majority of the games.
One example is the game Wolfenstein, a 3D first-person shooter that requires solving 3D vision tasks, navigating a maze and detecting objects. As evident from Figure (2), all agents produce poor results, indicating a lack of the required capabilities. Using the ε-greedy approach, the agents weren't able to explore enough states (or even other rooms, in our case). The algorithms' final policies appeared as random walks in 3D space. Exploration based on visited states, such as presented in Bellemare et al. (2016), might help address this issue. An interesting case is Gradius III, a side-scrolling flight-shooter game. While the trained agent was able to master the technical aspects of the game, which include shooting incoming enemies and dodging their projectiles, its final score is still far from a human's. This is due to a hidden game mechanism in the form of "power-ups", which can be accumulated and significantly increase the player's abilities. The more power-ups collected without being used, the larger their final impact will be. While this game mechanism is evident to a human, the agent acts myopically and uses each power-up straight away5.
4.2 REWARD SHAPING
As part of the environment and algorithm evaluation process, we investigated two case studies. The first is a game on which DQN failed to achieve a better-than-random score, and the second is a game whose training duration was significantly longer than that of other games.
In the first case study, we used a 2D back-view racing game, F-Zero. In this game, one is required to complete four laps of the track while avoiding other race cars. The reward, as defined by the score of the game, is only received upon completing a lap. This is an extreme case of reward delay: a lap may last as long as 30 seconds, which spans over 450 states (actions) before a reward is received. Since DQN's exploration is a simple ε-greedy approach, it was not able to produce a useful strategy. We approached this issue using reward shaping, essentially a modification of the reward to be a function of the reward and the observation, rather than the reward alone. Here, we define the reward to be the sum of the score and the agent's speed (a metric displayed on the screen of the game). Indeed, when the reward was defined as such, the agent learned to finish the race in first place within a short training period.
The second case study is the famous game of Super Mario. In this game the agent, Mario, is required to reach the right-hand side of the screen, while avoiding enemies and collecting coins. We found this case interesting as it involves several challenges at once: a dynamic background that can change drastically within a level, sparse and delayed rewards, and multiple tasks (such as avoiding enemies and pits, advancing rightwards and collecting coins). To our surprise, DQN was able to reach the end of the level without any reward shaping; this was possible since the agent receives rewards for events (collecting coins, stomping on enemies, etc.) that tend to appear to the right of the player, causing the agent to prefer moving right. However, the training time required for convergence was significantly longer than for other games. We defined the reward as the sum of the in-game reward and a bonus granted according to the player's position, making moving right preferable.
5A video demonstration can be found at https://youtu.be/nUl9XLMveEU
Figure 2: DQN, D-DQN and Duel-DDQN performance. Results were normalized by subtracting a random agent's score and dividing by the human player's score. Thus 100 represents a human player and zero a random agent.
This shaped reward proved useful, as the training time required for convergence decreased significantly. The two games above can be seen in Figure (3).
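In both case studies the shaping amounts to wrapping the environment's reward, along the lines of the following sketch. Here `env` follows the interface sketched earlier, and `read_speed` / `read_x_position` are hypothetical functions extracting the relevant shaping quantity from the screen or RAM.

```python
# A minimal reward-shaping wrapper; env, read_speed and read_x_position
# are assumed names, not part of the RLE API.
class ShapedEnv:
    def __init__(self, env, bonus_fn):
        self.env, self.bonus_fn = env, bonus_fn

    def act(self, action):
        reward = self.env.act(action)              # in-game score difference
        return reward + self.bonus_fn(self.env)    # shaping bonus

fzero = ShapedEnv(env, lambda e: read_speed(e))        # F-Zero: speed bonus
mario = ShapedEnv(env, lambda e: read_x_position(e))   # Super Mario: rightward bonus
```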
Figure (4) illustrates the agents' averaged value functions. Though both agents were able to complete the stage they were trained on, the convergence rate with reward shaping is significantly quicker due to the agent's immediate realization that it should move rightwards.
Figure 3: Left: the game Super Mario with an added bonus for moving right, enabling the agent to master the game with less training time. Right: the game F-Zero. By granting a reward for speed, the agent was able to master this game, as opposed to using solely the in-game reward.
Figure 4: Averaged action-value (Q) for Super Mario trained with reward bonus for moving right (blue) and without (red).
4.3 MULTI-AGENT REINFORCEMENT LEARNING
In this section we describe our experiments with RLE's multi-agent capabilities. We consider the case where the number of agents is n = 2 and the goals of the agents are opposite, i.e. r1 = -r2. This scheme is known as fully competitive (Buşoniu et al., 2010). We used the simple single-agent RL approach (as described by Buşoniu et al. (2010), section 5.4.1), which is to apply the single-agent approach to the multi-agent case. This approach was proved useful in Crites and Barto (1996) and Matarić (1997). More elaborate schemes are possible, such as the minimax-Q algorithm (Littman, 1994; Littman, 2001); these may be explored in future work. We conducted three experiments in this setup. The first was to train two different agents against the in-game AI, as done in previous sections, and evaluate their performance by letting them compete against each other; here, rather than achieving the highest score, the goal was to win a tournament consisting of 50 rounds, as common in human-player competitions. The second experiment was to initially train two agents against the in-game AI, and resume the training while competing against each other; in this case, we evaluated each agent by playing again against the in-game AI, separately. Finally, in our last experiment we tried to boost the agent's capabilities by alternating its opponents, switching between the in-game AI and other trained agents.
4.3.1 MULTI-AGENT REINFORCEMENT LEARNING RESULTS
We chose the game Mortal Kombat, a two-character, side-viewed fighting game (a screenshot of the game can be seen in Figure (1)), as a testbed for the above, as it exhibits favorable properties: both players share the same screen, and the agent's optimal policy is heavily dependent on the rival's behavior, unlike racing games for example. In order to evaluate two agents fairly, both were trained using the same characters, maintaining the identity of rival and agent. Furthermore, to remove the impact of the starting positions of both agents on their performances, the starting positions were initialized randomly.
In the first experiment we evaluated all combinations of DQN against D-DQN and Dueling D-DQN. Each agent was trained against the in-game AI until convergence; then 50 matches were performed between the two agents. DQN lost 28 out of 50 games against Dueling D-DQN and 33 against D-DQN. D-DQN lost 26 times to Dueling D-DQN. This win balance isn't far from the random case, since the algorithms converged to policies in which movement towards the opponent is not
required, rather than generalizing the game. Therefore, in many episodes little interaction between the two agents occurs, leading to a semi-random outcome.
In our second experiment, we continued the training process of the D-DQN network by letting it compete against the Dueling D-DQN network. We evaluated the re-trained network by playing 30 episodes against the in-game AI. After training, D-DQN was able to win 28 out of 30 games against its rival, yet when faced again against the in-game AI its performance deteriorated drastically (from an average of 17000 to an average of -22000). This demonstrates a form of catastrophic forgetting (Goodfellow et al., 2013), even though the agents played the same game.
In our third experiment, we trained a Dueling D-DQN agent against three different rivals: the in-game AI, a trained DQN agent and a trained Dueling D-DQN agent, in an alternating manner, such that in each episode a different rival played as the opponent, with the intention of preventing the agent from learning a policy suitable for just one opponent. The new agent was able to achieve a score of 162,966 (compared to the "normal" Dueling D-DQN, which achieved 169,633). As a new and objective measure of generalization, we configured the in-game AI difficulty to be "very hard" (as opposed to the default "medium" difficulty). On this metric the alternating version achieved 83,400, compared to -33,266 for the Dueling D-DQN trained in the default setting, thus proving that the agent learned to generalize to other policies which weren't observed during training.
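The alternation itself is a small change to the training loop, sketched below; `play_episode_and_update` is a hypothetical function that runs one episode against the given rival and applies the usual DQN-style updates.

```python
def train_alternating(agent, rivals, play_episode_and_update, num_episodes):
    """Face a different rival each episode to avoid overfitting one opponent."""
    for episode in range(num_episodes):
        play_episode_and_update(agent, rivals[episode % len(rivals)])
```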
4.4 FUTURE CHALLENGES
As demonstrated, RLE presents numerous challenges that have yet to be answered. In addition to being able to learn all available games, the task of learning games in which the reward delay is extreme, such as F-Zero without reward shaping, remains an unsolved challenge. Additionally, some games, such as Super Mario, feature several stages that differ in background and level structure. The task of generalizing in platform games, as in learning on one stage and being tested on another, is another unexplored challenge. Likewise, surpassing human performance remains a challenge, since current state-of-the-art algorithms still struggle with many SNES games.
# 5 CONCLUSION
We introduced a rich environment for evaluating and developing reinforcement learning algorithms which presents significant challenges to current state-of-the-art algorithms. In comparison to other environments, RLE provides a large number of games with access to both the screen and the in-game state. The modular implementation we chose allows extension of the environment with new consoles and games, thus ensuring the relevance of the environment to RL algorithms for years to come (see Table (2)). We encountered several games in which the learning process is highly dependent on the reward definition; this issue can be addressed and explored in RLE, as the reward definition can be changed easily. The challenges presented in the RLE consist of 3D interpretation, delayed reward, noisy background, stochastic AI behavior and more. Although some algorithms were able to play successfully on part of the games, to fully overcome these challenges an agent must incorporate both technique and strategy. Therefore, we believe that RLE is a great platform for future RL research.
# 6 ACKNOWLEDGMENTS
The authors are grateful to the Signal and Image Processing Lab (SIPL) staff for their support, Alfred Agrell and the LibRetro community for their support and Marc G. Bellemare for his valuable inputs.
# REFERENCES
C. Beattie, J. Z. Leibo, D. Teplyashin, T. Ward, M. Wainwright, H. Küttler, A. Lefrancq, S. Green, V. Valdés, A. Sadik, et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.

M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, June 2013.

M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868, 2016.
B. Bischoff, D. Nguyen-Tuong, I.-H. Lee, F. Streichert, and A. Knoll. Hierarchical reinforcement learning for robot navigation. In ESANN, 2013.
G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
L. Buşoniu, R. Babuška, and B. De Schutter. Multi-agent reinforcement learning: An overview. In Innovations in Multi-Agent Systems and Applications-1, pages 183–221. Springer, 2010.

M. Campbell, A. J. Hoane, and F.-h. Hsu. Deep Blue. Artificial Intelligence, 134(1):57–83, 2002.
R. Crites and A. Barto. Improving elevator performance using reinforcement learning. In Advances in Neural Information Processing Systems 8. Citeseer, 1996.
I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.
M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), page 4246, 2016.
libRetro site. Libretro. www.libretro.com. Accessed: 2016-11-03.
M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, volume 157, pages 157–163, 1994.

M. L. Littman. Value-function reinforcement learning in Markov games. Cognitive Systems Research, 2(1):55–66, 2001.

M. J. Matarić. Reinforcement learning in the multi-robot domain. In Robot Colonies, pages 73–83. Springer, 1997.

V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

J. Schaeffer, J. Culberson, N. Treloar, B. Knight, P. Lu, and D. Szafron. A world championship caliber checkers program. Artificial Intelligence, 53(2):273–289, 1992.
S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua. Long-term planning by short-term prediction. arXiv preprint arXiv:1602.01580, 2016.
D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
G. Tesauro. Temporal difference learning and td-gammon. Communications of the ACM, 38(3): 58â68, 1995.
J. Togelius, S. Karakovskiy, J. Koutník, and J. Schmidhuber. Super Mario evolution. In 2009 IEEE Symposium on Computational Intelligence and Games, pages 156–161. IEEE, 2009.
Universe. Universe. universe.openai.com, 2016. Accessed: 2016-12-13.
H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning. CoRR, abs/1509.06461, 2015.
Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143, 2016.
# Appendices
Experimental Results
Table 3: Average results of DQN, D-DQN, Dueling D-DQN and a Human player
| Game | DQN | D-DQN | Dueling D-DQN | Human |
| --- | --- | --- | --- | --- |
| F-Zero | 3116 | 3636 | 5161 | 6298 |
| Gradius III | 7583 | 12343 | 16929 | 24440 |
| Mortal Kombat | 83733 | 56200 | 169300 | 132441 |
| Super Mario | 11765 | 16946 | 20030 | 36386 |
| Wolfenstein | 100 | 83 | 40 | 2952 |
| {
"id": "1609.05143"
} |
1611.01796 | Modular Multitask Reinforcement Learning with Policy Sketches | We describe a framework for multitask deep reinforcement learning guided by
policy sketches. Sketches annotate tasks with sequences of named subtasks,
providing information about high-level structural relationships among tasks but
not how to implement them---specifically not providing the detailed guidance
used by much previous work on learning policy abstractions for RL (e.g.
intermediate rewards, subtask completion signals, or intrinsic motivations). To
learn from sketches, we present a model that associates every subtask with a
modular subpolicy, and jointly maximizes reward over full task-specific
policies by tying parameters across shared subpolicies. Optimization is
accomplished via a decoupled actor--critic training objective that facilitates
learning common behaviors from multiple dissimilar reward functions. We
evaluate the effectiveness of our approach in three environments featuring both
discrete and continuous control, and with sparse rewards that can be obtained
only after completing a number of high-level subgoals. Experiments show that
using our approach to learn policies guided by sketches gives better
performance than existing techniques for learning task-specific or shared
policies, while naturally inducing a library of interpretable primitive
behaviors that can be recombined to rapidly adapt to new tasks. | http://arxiv.org/pdf/1611.01796 | Jacob Andreas, Dan Klein, Sergey Levine | cs.LG, cs.NE | To appear at ICML 2017 | null | cs.LG | 20161106 | 20170617 |
# Modular Multitask Reinforcement Learning with Policy Sketches
# Jacob Andreas 1 Dan Klein 1 Sergey Levine 1
# Abstract
We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them. Specifically, they do not provide the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor-critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
# 1. Introduction
[Figure 1 diagram: Π1 (make planks): b1 get wood, b2 use workbench; Π2 (make sticks): b1 get wood, b3 use toolshed.]
Figure 1: Learning from policy sketches. The figure shows simplified versions of two tasks (make planks and make sticks), each associated with its own policy (Π1 and Π2 respectively). These policies share an initial high-level action b1: both require the agent to get wood before taking it to an appropriate crafting station. Even without prior information about how the associated behavior π1 should be implemented, knowing that the agent should initially follow the same subpolicy in both tasks is enough to learn a reusable representation of their shared structure.
delayed rewards or other long-term structure are often difficult to solve with flat, monolithic policies, and a long line of prior work has studied methods for learning hierarchical policy representations (Sutton et al., 1999; Dietterich, 2000; Konidaris & Barto, 2007; Hauser et al., 2008). While unsupervised discovery of these hierarchies is possible (Daniel et al., 2012; Bacon & Precup, 2015), practical approaches often require detailed supervision in the form of explicitly specified high-level actions, subgoals, or behavioral primitives (Precup, 2000). These depend on state representations simple or structured enough that suitable reward signals can be effectively engineered by hand.
This paper describes a framework for learning composable deep subpolicies in a multitask setting, guided only by abstract sketches of high-level behavior. General reinforcement learning algorithms allow agents to solve tasks in complex environments. But tasks featuring extremely
1University of California, Berkeley. Correspondence to: Jacob Andreas <jda@cs.berkeley.edu>.
But is such fine-grained supervision actually necessary to achieve the full benefits of hierarchy? Specifically, is it necessary to explicitly ground high-level actions into the representation of the environment? Or is it sufficient to simply inform the learner about the abstract structure of policies, without ever specifying how high-level behaviors should make use of primitive percepts or actions?
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
To answer these questions, we explore a multitask reinforcement learning setting where the learner is
presented with policy sketches. Policy sketches are short, ungrounded, symbolic representations of a task that describe its component parts, as illustrated in Figure 1. While symbols might be shared across tasks (get wood appears in sketches for both the make planks and make sticks tasks), the learner is told nothing about what these symbols mean, in terms of either observations or intermediate rewards.
We present an agent architecture that learns from policy sketches by associating each high-level action with a parameterization of a low-level subpolicy, and jointly optimizes over concatenated task-specific policies by tying parameters across shared subpolicies. We find that this architecture can use the high-level guidance provided by sketches, without any grounding or concrete definition, to dramatically accelerate learning of complex multi-stage behaviors. Our experiments indicate that many of the benefits to learning that come from highly detailed low-level supervision (e.g. from subgoal rewards) can also be obtained from fairly coarse high-level supervision (i.e. from policy sketches). Crucially, sketches are much easier to produce: they require no modifications to the environment dynamics or reward function, and can be easily provided by non-experts. This makes it possible to extend the benefits of hierarchical RL to challenging environments where it may not be possible to specify by hand the details of relevant subtasks. We show that our approach substantially outperforms purely unsupervised methods that do not provide the learner with any task-specific guidance about how hierarchies should be deployed, and further that the specific use of sketches to parameterize modular subpolicies makes better use of sketches than conditioning on them directly.
that are easily recombined. This makes it possible to evaluate our approach under a variety of different data conditions: (1) learning the full collection of tasks jointly via reinforcement, (2) in a zero-shot setting where a policy sketch is available for a held-out task, and (3) in an adaptation setting, where sketches are hidden and the agent must learn to adapt a pretrained policy to reuse high-level actions in a new task. In all cases, our approach substantially outperforms previous approaches based on explicit decomposition of the Q function along subtasks (Parr & Russell, 1998; Vogel & Jurafsky, 2010), unsupervised option discovery (Bacon & Precup, 2015), and several standard policy gradient baselines.
We consider three families of tasks: a 2-D Minecraft-inspired crafting game (Figure 3a), in which the agent must acquire particular resources by finding raw ingredients, combining them together in the proper order, and in some cases building intermediate tools that enable the agent to alter the environment itself; a 2-D maze navigation task that requires the agent to collect keys and open doors; and a 3-D locomotion task (Figure 3b) in which a quadrupedal robot must actuate its joints to traverse a narrow winding cliff.
In all tasks, the agent receives a reward only after the final goal is accomplished. For the most challenging tasks, involving sequences of four or five high-level actions, a task-specific agent initially following a random policy essentially never discovers the reward signal, so these tasks cannot be solved without considering their hierarchical structure. We have released code at http://github.com/jacobandreas/psketch.
The present work may be viewed as an extension of recent approaches for learning compositional deep architectures from structured program descriptors (Andreas et al., 2016; Reed & de Freitas, 2016). Here we focus on learning in interactive environments. This extension presents a variety of technical challenges, requiring analogues of these methods that can be trained from sparse, non-differentiable reward signals without demonstrations of desired system behavior.
Our contributions are:
• A general paradigm for multitask, hierarchical, deep reinforcement learning guided by abstract sketches of task-specific policies.
• A concrete recipe for learning from these sketches, built on a general family of modular deep policy representations and a multitask actor-critic training objective.
The modular structure of our approach, which associates every high-level action symbol with a discrete subpolicy, naturally induces a library of interpretable policy fragments
# 2. Related Work
The agent representation we describe in this paper belongs to the broader family of hierarchical reinforcement learners. As detailed in Section 3, our approach may be viewed as an instantiation of the options framework first described by Sutton et al. (1999). A large body of work describes techniques for learning options and related abstract actions, in both single- and multitask settings. Most techniques for learning options rely on intermediate supervisory signals, e.g. to encourage exploration (Kearns & Singh, 2002) or completion of pre-defined subtasks (Kulkarni et al., 2016). An alternative family of approaches employs post-hoc analysis of demonstrations or pretrained policies to extract reusable sub-components (Stolle & Precup, 2002; Konidaris et al., 2011; Niekum et al., 2015). Techniques for learning options with less guidance than the present work include Bacon & Precup (2015) and Vezhnevets et al. (2016), and other general hierarchical policy learners include Daniel et al. (2012), Bakker & Schmidhuber (2004) and Menache et al. (2002). We will see that the minimal supervision provided by policy sketches
results in (sometimes dramatic) improvements over fully unsupervised approaches, while being substantially less onerous for humans to provide compared to the grounded supervision (such as explicit subgoals or feature abstraction hierarchies) used in previous work.
rather than direct supervision. Another closely related family of models includes neural programmers (Neelakantan et al., 2015) and programmer-interpreters (Reed & de Freitas, 2016), which generate discrete computational structures but require supervision in the form of output actions or full execution traces.
Once a collection of high-level actions exists, agents are faced with the problem of learning meta-level (typically semi-Markov) policies that invoke appropriate high-level actions in sequence (Precup, 2000). The learning problem we describe in this paper is in some sense the direct dual to the problem of learning these meta-level policies: there, the agent begins with an inventory of complex primitives and must learn to model their behavior and select among them; here we begin knowing the names of appropriate high-level actions but nothing about how they are implemented, and must infer implementations (but not, initially, abstract plans) from context. Our model can be combined with these approaches to support a "mixed" supervision condition where sketches are available for some tasks but not others (Section 4.5).
Another closely related line of work is the Hierarchical Abstract Machines (HAM) framework introduced by Parr & Russell (1998). Like our approach, HAMs begin with a representation of a high-level policy as an automaton (or a more general computer program; Andre & Russell, 2001; Marthi et al., 2004) and use reinforcement learning to fill in low-level details. Because these approaches attempt to learn a single representation of the Q function for all subtasks and contexts, they require extremely strong formal assumptions about the form of the reward function and state representation (Andre & Russell, 2002) that the present work avoids by decoupling the policy representation from the value function. They perform less effectively when applied to arbitrary state representations where these assumptions do not hold (Section 4.3). We are additionally unaware of past work showing that HAM automata can be automatically inferred for new tasks given a pre-trained model, while here we show that it is easy to solve the corresponding problem for sketch followers (Section 4.5).
Our approach is also inspired by a number of recent efforts toward compositional reasoning and interaction with structured deep models. Such models have been previously used for tasks involving question answering (Iyyer et al., 2014; Andreas et al., 2016) and relational reasoning (Socher et al., 2012), and more recently for multi-task, multi-robot transfer problems (Devin et al., 2016). In the present work, as in existing approaches employing dynamically assembled modular networks, task-specific training signals are propagated through a collection of composed discrete structures with tied weights. Here the composed structures specify time-varying policies rather than feedforward computations, and their parameters must be learned via interaction rather than direct supervision. Another closely related family of models includes neural programmers (Neelakantan et al., 2015) and programmer-interpreters (Reed & de Freitas, 2016), which generate discrete computational structures but require supervision in the form of output actions or full execution traces.
We view the problem of learning from policy sketches as complementary to the instruction following problem studied in the natural language processing literature. Existing work on instruction following focuses on mapping from natural language strings to symbolic action sequences that are then executed by a hard-coded interpreter (Branavan et al., 2009; Chen & Mooney, 2011; Artzi & Zettlemoyer, 2013; Tellex et al., 2011). Here, by contrast, we focus on learning to execute complex actions given symbolic representations as a starting point. Instruction following models may be viewed as joint policies over instructions and environment observations (so their behavior is not defined in the absence of instructions), while the model described in this paper naturally supports adaptation to tasks where no sketches are available. We expect that future work might combine the two lines of research, bootstrapping policy learning directly from natural language hints rather than the semi-structured sketches used here.
# 3. Learning Modular Policies from Sketches
We consider a multitask reinforcement learning problem arising from a family of infinite-horizon discounted Markov decision processes in a shared environment. This environment is specified by a tuple (S, A, P, γ), with S a set of states, A a set of low-level actions, P : S × A × S → R a transition probability distribution, and γ a discount factor. Each task τ ∈ T is then specified by a pair (R_τ, ρ_τ), with R_τ : S → R a task-specific reward function and ρ_τ : S → R an initial distribution over states. For a fixed sequence {(s_i, a_i)} of states and actions obtained from a rollout of a given policy, we will denote the empirical return starting in state s_i as q_i := Σ_{j≥i} γ^{j−i} R(s_j). In addition to the components of a standard multitask RL problem, we assume that tasks are annotated with sketches K_τ, each consisting of a sequence (b_{τ1}, b_{τ2}, . . .) of high-level symbolic labels drawn from a fixed vocabulary B.
# 3.1. Model
We exploit the structural information provided by sketches by constructing for each symbol b a corresponding subpolicy π_b. By sharing each subpolicy across all tasks annotated with the corresponding symbol, our approach naturally learns the shared abstraction for the corresponding subtask, without requiring any information about the grounding of that task to be explicitly specified by annotation.
Algorithm 1 TRAIN-STEP(Π, curriculum)
1: D ← ∅
2: while |D| < D do
3:   // sample task τ from curriculum (Section 3.3)
4:   τ ∼ curriculum(·)
5:   // do rollout
6:   d = {(s_i, a_i, b_i = K_{τ,i}, q_i, τ), . . .} ∼ Π_τ
7:   D ← D ∪ d
8: // update parameters
9: for b ∈ B, τ ∈ T do
10:   d = {(s_i, a_i, b′, q_i, τ′) ∈ D : b′ = b, τ′ = τ}
11:   // update subpolicy
12:   θ_b ← θ_b + (α / |d|) Σ_d (∇_{θ_b} log π_b(a_i | s_i)) (q_i − c_τ(s_i))
13:   // update critic
14:   η_τ ← η_τ + (β / |d|) Σ_d (∇_{η_τ} c_τ(s_i)) (q_i − c_τ(s_i))

Algorithm 2 TRAIN-LOOP()
1: // initialize subpolicies randomly
2: Π = INIT()
3: ℓ_max ← 1
4: loop
5:   r_min ← −∞
6:   // initialize ℓ_max-step curriculum uniformly
7:   T′ = {τ ∈ T : |K_τ| ≤ ℓ_max}
8:   curriculum(·) = Unif(T′)
9:   while r_min < r_good do
10:     // update parameters (Algorithm 1)
11:     TRAIN-STEP(Π, curriculum)
12:     curriculum(τ) ∝ 1[τ ∈ T′] (1 − Êr_τ)  ∀τ ∈ T
13:     r_min ← min_{τ∈T′} Êr_τ
14:   ℓ_max ← ℓ_max + 1
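To make the update structure of Algorithm 1 concrete, the following is a minimal Python sketch of one TRAIN-STEP under simple linear-softmax subpolicies and linear per-task critics; the `rollout` and `sample_task` callables, the parameterizations, and the step sizes are illustrative assumptions, not the implementation used in the experiments.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def grad_log_pi(theta_b, s, a):
    """Gradient of log pi_b(a | s) for a linear-softmax subpolicy.

    theta_b: (num_actions, state_dim); s: (state_dim,); a: action index.
    """
    p = softmax(theta_b @ s)
    g = -np.outer(p, s)   # expected-feature term of the log-softmax gradient
    g[a] += s             # feature term for the chosen action
    return g

def train_step(theta, eta, rollout, sample_task, n_rollouts=100,
               alpha=1e-3, beta=1e-3):
    """One TRAIN-STEP: collect rollouts, then update subpolicies and critics.

    theta: dict symbol b -> subpolicy weights; eta: dict task tau -> critic
    weights; rollout(tau, theta) yields (state, action, symbol, return) tuples.
    """
    D = []
    for _ in range(n_rollouts):
        tau = sample_task()
        D.extend((s, a, b, q, tau) for (s, a, b, q) in rollout(tau, theta))

    # Subpolicy update (Eq. 2): advantages use the critic of the sample's task.
    for b in theta:
        d = [(s, a, q, t) for (s, a, bb, q, t) in D if bb == b]
        if d:
            theta[b] += (alpha / len(d)) * sum(
                grad_log_pi(theta[b], s, a) * (q - eta[t] @ s)
                for (s, a, q, t) in d)

    # Critic update (Eq. 3): for a linear critic c_tau(s) = eta_tau . s,
    # the gradient of c_tau at s is simply s.
    for tau in eta:
        d = [(s, q) for (s, a, bb, q, t) in D if t == tau]
        if d:
            eta[tau] += (beta / len(d)) * sum(
                s * (q - eta[tau] @ s) for (s, q) in d)
```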
At each timestep, a subpolicy may select either a low-level action a ∈ A or a special STOP action. We denote the augmented state space A+ := A ∪ {STOP}. At a high level, this framework is agnostic to the implementation of subpolicies: any function that takes a representation of the current state onto a distribution over A+ is suitable.

In this paper, we focus on the case where each π_b is represented as a neural network.¹ These subpolicies may be viewed as options of the kind described by Sutton et al. (1999), with the key distinction that they have no initiation semantics, but are instead invokable everywhere, and have no explicit representation as a function from an initial state to a distribution over final states (instead implicitly using the STOP action to terminate).

Given a fixed sketch (b1, b2, . . .), a task-specific policy Π_τ is formed by concatenating its associated subpolicies in sequence. In particular, the high-level policy maintains a subpolicy index i (initially 0), and executes actions from π_{b_i} until the STOP symbol is emitted, at which point control is passed to π_{b_{i+1}}. We may thus think of Π_τ as inducing a Markov chain over the state space S × B, with transitions:

(s, b_i) → (s′, b_i)   with probability Σ_{a∈A} π_{b_i}(a | s) · P(s′ | s, a)
(s, b_i) → (s, b_{i+1})   with probability π_{b_i}(STOP | s)

Note that Π_τ is semi-Markov with respect to projection of the augmented state space S × B onto the underlying state space S. We denote the complete family of task-specific policies Π := ∪_τ {Π_τ}, and let each π_b be an arbitrary function of the current environment state parameterized by some weight vector θ_b. The learning problem is to optimize over all θ_b to maximize expected discounted reward

J(Π) := Σ_τ J(Π_τ) := Σ_τ E_{s_i∼Π_τ} [Σ_i γ^i R_τ(s_i)]

across all tasks τ ∈ T.

¹ For ease of presentation, this section assumes that these subpolicy networks are independently parameterized. As described in Section 4.2, it is also possible to share parameters between subpolicies, and introduce discrete subtask structure by way of an embedding of each symbol b.

Figure 2: Model overview. Each subpolicy π is uniquely associated with a symbol b implemented as a neural network that maps from a state s_i to distributions over A+, and chooses an action a_i by sampling from this distribution. Whenever the STOP action is sampled, control advances to the next subpolicy in the sketch.
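As an illustration of these execution semantics, here is a minimal Python sketch of a task-specific policy Π_τ; the `subpolicies` mapping from symbols to action-sampling callables and the `env` interface are assumed placeholders.

```python
def run_task_policy(sketch, subpolicies, env, max_steps=1000):
    """Execute the subpolicies named in the sketch, advancing on STOP."""
    s = env.reset()
    i, t = 0, 0                          # subpolicy index and timestep count
    while i < len(sketch) and t < max_steps:
        a = subpolicies[sketch[i]](s)    # sample from pi_b(. | s) over A+
        if a == "STOP":
            i += 1                       # pass control to the next subpolicy
        else:
            s = env.step(a)              # low-level transition
            t += 1
    return s
```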
# 3.2. Policy Optimization

Here that optimization is accomplished via a simple decoupled actor-critic method. In a standard policy gradient approach, with a single policy π with parameters θ, we compute gradient steps of the form (Williams, 1992):

∇_θ J(π) = Σ_i (∇_θ log π(a_i | s_i)) (q_i − c(s_i)),   (1)

where the baseline or "critic" c can be chosen independently of the future without introducing bias into the gradient. Recalling our previous definition of q_i as the empirical return starting from s_i, this form of the gradient corresponds to a generalized advantage estimator (Schulman et al., 2015a) with λ = 1. Here c achieves close to the optimal variance (Greensmith et al., 2004) when it is set exactly equal to the state-value function V_π(s_i) = E_π q_i for the target policy π starting in state s_i.
The situation becomes slightly more complicated when generalizing to modular policies built by sequencing subpolicies. In this case, we will have one subpolicy per symbol but one critic per task. This is because subpolicies π_b might participate in a number of composed policies Π_τ, each associated with its own reward function R_τ. Thus individual subpolicies are not uniquely identified with value functions, and the aforementioned subpolicy-specific state-value estimator is no longer well-defined. We extend the actor-critic method to incorporate the decoupling of policies from value functions by allowing the critic to vary per-sample (that is, per-task-and-timestep) depending on the reward function with which the sample is associated. Noting that ∇_{θ_b} J(Π) = Σ_{τ : b ∈ K_τ} ∇_{θ_b} J(Π_τ), i.e. the sum of gradients of expected rewards across all tasks in which π_b participates, we have:

∇J(Π) = Σ_τ ∇J(Π_τ) = Σ_τ Σ_i (∇_{θ_b} log π_b(a_{τi} | s_{τi})) (q_i − c_τ(s_{τi})),   (2)
where each state-action pair (s_{τi}, a_{τi}) was selected by the subpolicy π_b in the context of the task τ.
Now minimization of the gradient variance requires that each c_τ actually depend on the task identity. (This follows immediately by applying the corresponding argument in Greensmith et al. (2004) individually to each term in the sum over τ in Equation 2.) Because the value function is itself unknown, an approximation must be estimated from data. Here we allow these c_τ to be implemented with an arbitrary function approximator with parameters η_τ. This is trained to minimize a squared error criterion, with gradients given by

∇_{η_τ} [ −(1/2) Σ_i (q_i − c_τ(s_i))² ] = Σ_i (∇_{η_τ} c_τ(s_i)) (q_i − c_τ(s_i)).   (3)

Alternative forms of the advantage estimator (e.g. the TD residual R_τ(s_i) + γ V_τ(s_{i+1}) − V_τ(s_i), or any other member of the generalized advantage estimator family) can be easily substituted by simply maintaining one such estimator per task. Experiments (Section 4.4) show that conditioning on both the state and the task identity results in noticeable performance improvements, suggesting that the variance reduction provided by this objective is important for efficient joint learning of modular policies.

The complete procedure for computing a single gradient step is given in Algorithm 1. (The outer training loop over these steps, which is driven by a curriculum learning procedure, is specified in Algorithm 2.) This is an on-policy algorithm. In each step, the agent samples tasks from a task distribution provided by a curriculum (described in the following subsection). The current family of policies Π is used to perform rollouts in each sampled task, accumulating the resulting tuples of (states, low-level actions, high-level symbols, rewards, and task identities) into a dataset D. Once D reaches a maximum size D, it is used to compute gradients w.r.t. both policy and critic parameters, and the parameter vectors are updated accordingly. The step sizes α and β in Algorithm 1 can be chosen adaptively using any first-order method.

# 3.3. Curriculum Learning

For complex tasks, like the one depicted in Figure 3b, it is difficult for the agent to discover any states with positive reward until many subpolicy behaviors have already been learned. It is thus a better use of the learner's time to focus on "easy" tasks, where many rollouts will result in high reward from which appropriate subpolicy behavior can be inferred. But there is a fundamental tradeoff involved here: if the learner spends too much time on easy tasks before being made aware of the existence of harder ones, it may overfit and learn subpolicies that no longer generalize or exhibit the desired structural properties.

To avoid both of these problems, we use a curriculum learning scheme (Bengio et al., 2009) that allows the model to smoothly scale up from easy tasks to more difficult ones while avoiding overfitting. Initially the model is presented with tasks associated with short sketches. Once average reward on all these tasks reaches a certain threshold, the length limit is incremented. We assume that rewards across tasks are normalized with maximum achievable reward 0 < q_i < 1. Let Êr_τ denote the empirical estimate of the expected reward for the current policy on task τ. Then, at each timestep, tasks are sampled in proportion to 1 − Êr_τ, which by assumption must be positive.

Intuitively, the tasks that provide the strongest learning signal are those in which (1) the agent does not on average achieve reward close to the upper bound, but (2) many episodes result in high reward. The expected reward component of the curriculum addresses condition (1) by ensuring that time is not spent on nearly solved tasks, while the length bound component of the curriculum addresses condition (2) by ensuring that tasks are not attempted until high-reward episodes are likely to be encountered. Experiments show that both components of this curriculum learning scheme improve the rate at which the model converges to a good policy (Section 4.4).

The complete curriculum-based training procedure is specified in Algorithm 2. Initially, the maximum sketch length ℓ_max is set to 1, and the curriculum initialized to sample length-1 tasks uniformly. (Neither of the environments we consider in this paper features any length-1 tasks; in this case, observe that Algorithm 2 will simply advance to length-2 tasks without any parameter updates.) For each setting of ℓ_max, the algorithm uses the current collection of task policies Π to compute and apply the gradient step described in Algorithm 1. The rollouts obtained from the call to TRAIN-STEP can also be used to compute reward estimates Êr_τ; these estimates determine a new task distribution for the curriculum. The inner loop is repeated until the reward threshold r_good is exceeded, at which point ℓ_max is incremented and the process repeated over a (now-expanded) collection of tasks.
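A small Python sketch of the curriculum sampling rule used in this loop; the task inventory, sketch lengths, and reward-estimate dictionary are assumed inputs, and the weights follow the 1 − Êr_τ rule described above.

```python
import numpy as np

def curriculum_distribution(tasks, sketch_len, est_reward, l_max):
    """Eligible tasks and sampling probabilities for Algorithm 2.

    Tasks with |K_tau| <= l_max are sampled in proportion to (1 - Er_tau).
    """
    eligible = [t for t in tasks if sketch_len[t] <= l_max]
    weights = np.array([1.0 - est_reward[t] for t in eligible])
    return eligible, weights / weights.sum()

def sample_task(tasks, sketch_len, est_reward, l_max, rng=np.random):
    eligible, probs = curriculum_distribution(tasks, sketch_len,
                                              est_reward, l_max)
    return eligible[rng.choice(len(eligible), p=probs)]
```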
# 4. Experiments
[Figure 3 panels: (a) τ: get gold, with sketch b1: get wood, b2: get iron, b3: use workbench, b4: get gold; (b) τ: go to goal, with sketch b1: north, b2: east, b3: east.]
We evaluate the performance of our approach in three environments: a crafting environment, a maze navigation environment, and a cliff traversal environment. These environments involve various kinds of challenging low-level control: agents must learn to avoid obstacles, interact with various kinds of objects, and relate fine-grained joint activation to high-level locomotion goals. They also feature hierarchical structure: most rewards are provided only after the agent has completed two to five high-level actions in the appropriate sequence, without any intermediate goals to indicate progress towards completion.
Figure 3: Examples from the crafting and cliff environments used in this paper. An additional maze environment is also investigated. (a) In the crafting environment, an agent seeking to pick up the gold nugget in the top corner must first collect wood (1) and iron (2), use a workbench to turn them into a bridge (3), and use the bridge to cross the water (4). (b) In the cliff environment, the agent must reach a goal position by traversing a winding sequence of tiles without falling off. Control takes place at the level of individual joint angles; high-level behaviors like "move north" must be learned.
# 4.1. Implementation
In all our experiments, we implement each subpolicy as a feedforward neural network with ReLU nonlinearities and a hidden layer with 128 hidden units, and each critic as a linear function of the current state. Each subpolicy network receives as input a set of features describing the current state of the environment, and outputs a distribution over actions. The agent acts at every timestep by sampling from this distribution. The gradient steps given in lines 8 and 9 of Algorithm 1 are implemented using RMSPROP (Tieleman, 2012) with a step size of 0.001 and gradient clipping to a unit norm. We take the batch size D in Algorithm 1 to be 2000, and set γ = 0.9 in both environments. For curriculum learning, the improvement threshold r_good is 0.8.
# 4.2. Environments
The crafting environment (Figure 3a) is inspired by the popular game Minecraft, but is implemented in a discrete 2-D world. The agent may interact with objects in the world by facing them and executing a special USE action. Interacting with raw materials initially scattered around the environment causes them to be added to an inventory. Interacting with different crafting stations causes objects in the agent's inventory to be combined or transformed. Each task
in this game corresponds to some crafted object the agent must produce; the most complicated goals require the agent to also craft intermediate ingredients, and in some cases build tools (like a pickaxe and a bridge) to reach ingredients located in initially inaccessible regions of the environment.
The maze environment (not pictured) corresponds closely to the "light world" described by Konidaris & Barto (2007). The agent is placed in a discrete world consisting of a series of rooms, some of which are connected by doors. Some doors require that the agent first pick up a key to open them. For our experiments, each task corresponds to a goal room (always at the same position relative to the agent's starting position) that the agent must reach by navigating through a sequence of intermediate rooms. The agent has one sensor on each side of its body, which reports the distance to keys, closed doors, and open doors in the corresponding direction. Sketches specify a particular sequence of directions for the agent to traverse between rooms to reach the goal. The sketch always corresponds to a viable traversal from the start to the goal position, but other (possibly shorter) traversals may also exist.
The cliff environment (Figure 3b) is intended to demonstrate the applicability of our approach to problems involving high-dimensional continuous control. In this environment, a quadrupedal robot (Schulman et al., 2015b) is placed on a variable-length winding path, and must navigate to the end without falling off.
Figure 4: Comparing modular learning from sketches with standard RL baselines. Modular is the approach described in this paper, while Independent learns a separate policy for each task, Joint learns a shared policy that conditions on the task identity, Q automaton learns a single network to map from states and action symbols to Q values, and Opt-Crit is an unsupervised option learner. Performance for the best iteration of the (off-policy) Q automaton is plotted. Performance is shown in (a) the crafting environment, (b) the maze environment, and (c) the cliff environment. The modular approach is eventually able to achieve high reward on all tasks, while the baseline models perform considerably worse on average.
This task is designed to provide a substantially more challenging RL problem, due to the fact that the walker must learn the low-level walking skill before it can make any progress, but has simpler hierarchical structure than the crafting environment. The agent receives a small reward for making progress toward the goal, and a large positive reward for reaching the goal square, with a negative reward for falling off the path.
A listing of tasks and sketches is given in Appendix A.
# 4.3. Multitask Learning
The primary experimental question in this paper is whether the extra structure provided by policy sketches alone is enough to enable fast learning of coupled policies across tasks. We aim to explore the differences between the approach described in Section 3 and relevant prior work that performs either unsupervised or weakly supervised multitask learning of hierarchical policy structure. Specifically, we compare our modular approach to:

1. Structured hierarchical reinforcement learners:
   (a) the fully unsupervised option-critic algorithm of Bacon & Precup (2015)
   (b) a Q automaton that attempts to explicitly represent the Q function for each task / subtask combination (essentially a HAM (Andre & Russell, 2002) with a deep state abstraction function)

2. Alternative ways of incorporating sketch data into standard policy gradient methods:
   (c) learning an independent policy for each task
   (d) learning a joint policy across all tasks, conditioning directly on both environment features and a representation of the complete sketch

The joint and independent models performed best when trained with the same curriculum described in Section 3.3, while the option-critic model performed best with a length-weighted curriculum that has access to all tasks from the beginning of training.

Learning curves for baselines and the modular model are shown in Figure 4. It can be seen that in all environments, our approach substantially outperforms the baselines: it induces policies with substantially higher average reward and converges more quickly than the policy gradient baselines. It can further be seen in Figure 4c that after policies have been learned on simple tasks, the model is able to rapidly adapt to more complex ones, even when the longer tasks involve high-level actions not required for any of the short tasks (Appendix A).

Having demonstrated the overall effectiveness of our approach, our remaining experiments explore (1) the importance of various components of the training procedure, and (2) the learned models' ability to generalize or adapt to held-out tasks. For compactness, we restrict our consideration to the crafting domain, which features a larger and more diverse range of tasks and high-level actions.

# 4.4. Ablations

In addition to the overall modular parameter-tying structure induced by our sketches, the key components of our training procedure are the decoupled critic and the curriculum. Our next experiments investigate the extent to which these are necessary for good performance.

To evaluate the critic, we consider three ablations: (1) removing the dependence of the model on the environment state, in which case the baseline is a single scalar per task; (2) removing the dependence of the model on the task, in which case the baseline is a conventional generalized advantage estimator; and (3) removing both, in which case the baseline is a single scalar, as in a vanilla policy gradient approach.
Figure 5: Training details in the crafting domain. (a) Critics: lines labeled "task" include a baseline that varies with task identity, while lines labeled "state" include a baseline that varies with state identity. Estimating a baseline that depends on both the representation of the current state and the identity of the current task is better than either alone or a constant baseline. (b) Curricula: lines labeled "len" use a curriculum with iteratively increasing sketch lengths, while lines labeled "wgt" sample tasks in inverse proportion to their current reward. Adjusting the sampling distribution based on both task length and performance return improves convergence. (c) Individual task performance. Colors correspond to task length. Sharp steps in the learning curve correspond to increases of ℓ_max in the curriculum.
Results are shown in Figure 5a. Introducing both state and task dependence into the baseline leads to faster convergence of the model: the approach with a constant baseline achieves less than half the overall performance of the full critic after 3 million episodes. Introducing task and state dependence independently improves this performance; combining them gives the best result.
Model            Multitask   0-shot   Adaptation
Joint               .49        .01         -
Independent         .44         -         .01
Option-Critic       .47         -         .42
Modular (ours)      .89        .77        .76
Table 1: Accuracy and generalization of learned models in the crafting domain. The table shows the task completion rate for each approach after convergence under various training conditions. Multitask is the multitask training condition described in Section 4.3, while 0-Shot and Adaptation are the generalization experiments described in Section 4.5. Our modular approach consistently achieves the best performance.
We hold out two length-four tasks from the full inventory used in Section 4.3, and train on the remaining tasks. For zero-shot experiments, we simply form the concatenated policy described by the sketches of the held-out tasks, and repeatedly execute this policy (without learning) in order to obtain an estimate of its effectiveness. For adaptation experiments, we consider ordinary RL over high-level actions B, implementing the high-level learner with the same agent architecture as described in Section 3.1. Note that the Independent and Option-Critic models cannot be applied to the zero-shot evaluation, while the Joint model cannot be applied to the adaptation baseline (because it depends on pre-specified sketch features). Results are shown in Table 1. The held-out tasks are sufficiently challenging that the baselines are unable to obtain more than negligible reward: in particular, the joint model overfits to the training tasks and cannot generalize to new sketches, while the independent model cannot discover enough of a reward signal to learn in the adaptation setting. The modular model does comparatively well: individual subpolicies succeed in novel zero-shot configurations (suggesting that they have in fact discovered the behavior suggested by the semantics of the sketch) and provide a suitable basis for adaptive discovery of new high-level policies.
We also investigate two aspects of our curriculum learning scheme: starting with short examples and moving to long ones, and sampling tasks in inverse proportion to their accumulated reward. Experiments are shown in Figure 5b. Both components help; prioritization by both length and weight gives the best results.
# 4.5. Zero-shot and Adaptation Learning
In our final experiments, we consider the model's ability to generalize beyond the standard training condition. We first consider two tests of generalization: a zero-shot setting, in which the model is provided a sketch for the new task and must immediately achieve good performance, and an adaptation setting, in which no sketch is provided and the model must learn the form of a suitable sketch via interaction in the new task.
# 5. Conclusions
We have described an approach for multitask learning of deep multitask policies guided by symbolic policy sketches. By associating each symbol appearing in a sketch with a modular neural subpolicy, we have shown that it is possible to build agents that share behavior across tasks in order to achieve success in tasks with sparse and delayed rewards. This process induces an inventory of reusable and interpretable subpolicies which can be employed for zero-shot generalization when further sketches are available, and hierarchical reinforcement learning when they are not. Our work suggests that these sketches, which are easy to produce and require no grounding in the environment, provide an effective scaffold for learning hierarchical policies from minimal supervision.
# Acknowledgments
JA is supported by a Facebook Graduate Fellowship and a Berkeley AI / Huawei Fellowship.
# References

Andre, David and Russell, Stuart. Programmable reinforcement learning agents. In Advances in Neural Information Processing Systems, 2001.

Andre, David and Russell, Stuart. State abstraction for programmable reinforcement learning agents. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, 2002.

Andreas, Jacob, Rohrbach, Marcus, Darrell, Trevor, and Klein, Dan. Learning to compose neural networks for question answering. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics, 2016.

Artzi, Yoav and Zettlemoyer, Luke. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1(1):49-62, 2013.

Bacon, Pierre-Luc and Precup, Doina. The option-critic architecture. In NIPS Deep Reinforcement Learning Workshop, 2015.

Bakker, Bram and Schmidhuber, Jürgen. Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization. In Proc. of the 8-th Conf. on Intelligent Autonomous Systems, pp. 438-445, 2004.

Bengio, Yoshua, Louradour, Jérôme, Collobert, Ronan, and Weston, Jason. Curriculum learning. In International Conference on Machine Learning, pp. 41-48. ACM, 2009.

Branavan, S.R.K., Chen, Harr, Zettlemoyer, Luke S., and Barzilay, Regina. Reinforcement learning for mapping instructions to actions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 82-90. Association for Computational Linguistics, 2009.

Chen, David L. and Mooney, Raymond J. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, volume 2, pp. 1-2, 2011.

Daniel, Christian, Neumann, Gerhard, and Peters, Jan. Hierarchical relative entropy policy search. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 273-281, 2012.

Devin, Coline, Gupta, Abhishek, Darrell, Trevor, Abbeel, Pieter, and Levine, Sergey. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088, 2016.

Dietterich, Thomas G. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res. (JAIR), 13:227-303, 2000.

Greensmith, Evan, Bartlett, Peter L, and Baxter, Jonathan. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471-1530, 2004.

Hauser, Kris, Bretl, Timothy, Harada, Kensuke, and Latombe, Jean-Claude. Using motion primitives in probabilistic sample-based planning for humanoid robots. In Algorithmic foundation of robotics, pp. 507-522. Springer, 2008.

Iyyer, Mohit, Boyd-Graber, Jordan, Claudino, Leonardo, Socher, Richard, and Daumé III, Hal. A neural network for factoid question answering over paragraphs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2014.

Kearns, Michael and Singh, Satinder. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.

Konidaris, George and Barto, Andrew G. Building portable options: Skill transfer in reinforcement learning. In IJCAI, volume 7, pp. 895-900, 2007.

Konidaris, George, Kuindersma, Scott, Grupen, Roderic, and Barto, Andrew. Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research, pp. 0278364911428653, 2011.

Kulkarni, Tejas D, Narasimhan, Karthik R, Saeedi, Ardavan, and Tenenbaum, Joshua B. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. arXiv preprint arXiv:1604.06057, 2016.

Marthi, Bhaskara, Lantham, David, Guestrin, Carlos, and Russell, Stuart. Concurrent hierarchical reinforcement learning. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, 2004.

Menache, Ishai, Mannor, Shie, and Shimkin, Nahum. Q-cut - dynamic discovery of sub-goals in reinforcement learning. In European Conference on Machine Learning, pp. 295-306. Springer, 2002.

Neelakantan, Arvind, Le, Quoc V, and Sutskever, Ilya. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.

Niekum, Scott, Osentoski, Sarah, Konidaris, George, Chitta, Sachin, Marthi, Bhaskara, and Barto, Andrew G. Learning grounded finite-state representations from unstructured demonstrations. The International Journal of Robotics Research, 34(2):131-157, 2015.

Parr, Ron and Russell, Stuart. Reinforcement learning with hierarchies of machines. In Advances in Neural Information Processing Systems, 1998.

Precup, Doina. Temporal abstraction in reinforcement learning. PhD thesis, 2000.

Reed, Scott and de Freitas, Nando. Neural programmer-interpreters. Proceedings of the International Conference on Learning Representations, 2016.

Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015a.

Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. Trust region policy optimization. In International Conference on Machine Learning, 2015b.

Socher, Richard, Huval, Brody, Manning, Christopher, and Ng, Andrew. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1201-1211, Jeju, Korea, 2012.

Stolle, Martin and Precup, Doina. Learning options in reinforcement learning. In International Symposium on Abstraction, Reformulation, and Approximation, pp. 212-223. Springer, 2002.

Sutton, Richard S, Precup, Doina, and Singh, Satinder. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, 1999.

Tellex, Stefanie, Kollar, Thomas, Dickerson, Steven, Walter, Matthew R., Banerjee, Ashis Gopal, Teller, Seth, and Roy, Nicholas. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the National Conference on Artificial Intelligence, 2011.

Tieleman, Tijmen. RMSProp (unpublished), 2012.

Vezhnevets, Alexander, Mnih, Volodymyr, Agapiou, John, Osindero, Simon, Graves, Alex, Vinyals, Oriol, and Kavukcuoglu, Koray. Strategic attentive writer for learning macro-actions. arXiv preprint arXiv:1606.04695, 2016.

Vogel, Adam and Jurafsky, Dan. Learning to follow navigational directions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 806-814. Association for Computational Linguistics, 2010.

Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
# A. Tasks and Sketches

The complete list of tasks, sketches, and symbols is given below. Tasks marked with an asterisk* are held out for the generalization experiments described in Section 4.5, but included in the multitask training experiments in Sections 4.3 and 4.4.
Crafting environment

Goal          Sketch
make plank    get wood, use toolshed
make stick    get wood, use workbench
make cloth    get grass, use factory
make rope     get grass, use toolshed
make bridge   get iron, get wood, use factory
make bed*     get wood, use toolshed, get grass, use workbench
make axe*     get wood, use workbench, get iron, use toolshed
make shears   get wood, use workbench, get iron, use workbench
get gold      get iron, get wood, use factory, use bridge
get gem       get wood, use workbench, get iron, use toolshed, use axe
# Maze environment

Goal      Sketch
room 1    left, left
room 2    left, down
room 3    right, down
room 4    up, left
room 5    up, right
room 6    up, right, up
room 7    down, right, up
room 8    left, left, down
room 9    right, down, down
room 10   left, up, right
# Cliff environment

Goal      Sketch
path 0    north
path 1    east
path 2    south
path 3    west
path 4    west, south
path 5    west, north
path 6    north, east
path 7    west, north
path 8    east, south
path 9    north, west
path 10   east, north
path 11   south, east
path 12   south, west
path 13   south, south
path 14   south, south
path 15   east, south
path 16   east, east, north
path 17   east, north, north
path 18   north, east, west
path 19   west, west, east
path 20   north, north, west
path 21   north, west, south
path 22   west, west, south
path 23   south, east, south
"id": "1606.04695"
} |
1611.01576 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 |
# QUASI-RECURRENT NEURAL NETWORKS
James Bradbury*, Stephen Merity*, Caiming Xiong & Richard Socher Salesforce Research Palo Alto, California {james.bradbury,smerity,cxiong,rsocher}@salesforce.com
# ABSTRACT
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
# 1 INTRODUCTION
Recurrent neural networks (RNNs), including gated variants such as the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) have become the standard model architecture for deep learning approaches to sequence modeling tasks. RNNs repeatedly apply a function with trainable parameters to a hidden state. Recurrent layers can also be stacked, increasing network depth, representational power and often accuracy. RNN applications in the natural language domain range from sentence classification (Wang et al., 2015) to word- and character-level language modeling (Zaremba et al., 2014). RNNs are also commonly the basic building block for more complex models for tasks such as machine translation (Bahdanau et al., 2015; Luong et al., 2015; Bradbury & Socher, 2016) or question answering (Kumar et al., 2016; Xiong et al., 2016). Unfortunately standard RNNs, including LSTMs, are limited in their capability to handle tasks involving very long sequences, such as document classification or character-level machine translation, as the computation of features or states for different parts of the document cannot occur in parallel.
Convolutional neural networks (CNNs) (Krizhevsky et al., 2012), though more popular on tasks involving image data, have also been applied to sequence encoding tasks (Zhang et al., 2015). Such models apply time-invariant filter functions in parallel to windows along the input sequence. CNNs possess several advantages over recurrent models, including increased parallelism and better scaling to long sequences such as those often seen with character-level language data. Convolutional models for sequence processing have been more successful when combined with RNN layers in a hybrid architecture (Lee et al., 2016), because traditional max- and average-pooling approaches to combining convolutional features across timesteps assume time invariance and hence cannot make full use of large-scale sequence order information.
We present quasi-recurrent neural networks for neural sequence modeling. QRNNs address both drawbacks of standard models: like CNNs, QRNNs allow for parallel computation across both timestep and minibatch dimensions, enabling high throughput and good scaling to long sequences. Like RNNs, QRNNs allow the output to depend on the overall order of elements in the sequence. We describe QRNN variants tailored to several natural language tasks, including document-level sentiment classification, language modeling, and character-level machine translation. These models outperform strong LSTM baselines on all three tasks while dramatically reducing computation time.
*Equal contribution
Figure 1: Block diagrams showing the computation structure of the QRNN compared with typical LSTM and CNN architectures. Red signifies convolutions or matrix multiplications; a continuous block means that those computations can proceed in parallel. Blue signifies parameterless functions that operate in parallel along the channel/feature dimension. LSTMs can be factored into (red) linear blocks and (blue) elementwise blocks, but computation at each timestep still depends on the results from the previous timestep.
# 2 MODEL
Each layer of a quasi-recurrent neural network consists of two kinds of subcomponents, analogous to convolution and pooling layers in CNNs. The convolutional component, like convolutional layers in CNNs, allows fully parallel computation across both minibatches and spatial dimensions, in this case the sequence dimension. The pooling component, like pooling layers in CNNs, lacks trainable parameters and allows fully parallel computation across minibatch and feature dimensions. Given an input sequence X ∈ R^{T×n} of T n-dimensional vectors x_1 . . . x_T, the convolutional subcomponent of a QRNN performs convolutions in the timestep dimension with a bank of m filters, producing a sequence Z ∈ R^{T×m} of m-dimensional candidate vectors z_t. In order to be useful for tasks that include prediction of the next token, the filters must not allow the computation for any given timestep to access information from future timesteps. That is, with filters of width k, each z_t depends only on x_{t−k+1} through x_t. This concept, known as a masked convolution (van den Oord et al., 2016), is implemented by padding the input to the left by the convolution's filter size minus one.
We apply additional convolutions with separate filter banks to obtain sequences of vectors for the elementwise gates that are needed for the pooling function. While the candidate vectors are passed through a tanh nonlinearity, the gates use an elementwise sigmoid. If the pooling function requires a forget gate f_t and an output gate o_t at each timestep, the full set of computations in the convolutional component is then:
Z = tanh(W_z ∗ X)
F = σ(W_f ∗ X)   (1)
O = σ(W_o ∗ X),
where W_z, W_f, and W_o, each in R^{k×n×m}, are the convolutional filter banks and ∗ denotes a masked convolution along the timestep dimension. Note that if the filter width is 2, these equations reduce to the LSTM-like
z_t = tanh(W¹_z x_{t−1} + W²_z x_t)
f_t = σ(W¹_f x_{t−1} + W²_f x_t)   (2)
o_t = σ(W¹_o x_{t−1} + W²_o x_t).
Convolution filters of larger width effectively compute higher n-gram features at each timestep; thus larger widths are especially important for character-level tasks.
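As a concrete illustration, here is a minimal NumPy sketch of the convolutional component with filter width k; the left-padding implements the masked convolution, and the shapes and names are illustrative assumptions rather than a particular implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qrnn_conv(X, Wz, Wf, Wo):
    """Width-k masked convolution producing Z, F, O as in Eq. (1).

    X: (T, n) input sequence; Wz, Wf, Wo: (k, n, m) filter banks.
    Output at timestep t only sees x_{t-k+1} .. x_t.
    """
    k, n, m = Wz.shape
    T = X.shape[0]
    Xpad = np.vstack([np.zeros((k - 1, n)), X])      # pad left by k - 1
    Z = np.empty((T, m)); F = np.empty((T, m)); O = np.empty((T, m))
    for t in range(T):
        window = Xpad[t:t + k]                       # inputs up to time t
        Z[t] = np.tanh(np.einsum('kn,knm->m', window, Wz))
        F[t] = sigmoid(np.einsum('kn,knm->m', window, Wf))
        O[t] = sigmoid(np.einsum('kn,knm->m', window, Wo))
    return Z, F, O
```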
Suitable functions for the pooling subcomponent can be constructed from the familiar elementwise gates of the traditional LSTM cell. We seek a function controlled by gates that can mix states across timesteps, but which acts independently on each channel of the state vector. The simplest option, which Balduzzi & Ghifary (2016) term "dynamic average pooling", uses only a forget gate:
h_t = f_t ⊙ h_{t−1} + (1 − f_t) ⊙ z_t,   (3)
where ⊙ denotes elementwise multiplication. The function may also include an output gate:

c_t = f_t ⊙ c_{t−1} + (1 − f_t) ⊙ z_t
h_t = o_t ⊙ c_t.   (4)
Or the recurrence relation may include an independent input and forget gate:
c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t
h_t = o_t ⊙ c_t.   (5)
We term these three options f-pooling, fo-pooling, and ifo-pooling respectively; in each case we initialize h or c to zero. Although the recurrent parts of these functions must be calculated for each timestep in sequence, their simplicity and parallelism along feature dimensions means that, in practice, evaluating them over even long sequences requires a negligible amount of computation time.
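The three pooling variants in a minimal NumPy sketch; each recurrence is sequential over timesteps but purely elementwise over channels, which is what keeps it cheap and parallel along the feature dimension.

```python
import numpy as np

def f_pool(Z, F):
    """f-pooling (Eq. 3): h_t = f_t * h_{t-1} + (1 - f_t) * z_t."""
    H, h = np.zeros_like(Z), np.zeros(Z.shape[1])
    for t in range(Z.shape[0]):
        h = F[t] * h + (1 - F[t]) * Z[t]
        H[t] = h
    return H

def fo_pool(Z, F, O):
    """fo-pooling (Eq. 4): f-pooled cell state, gated by the output gate."""
    H, c = np.zeros_like(Z), np.zeros(Z.shape[1])
    for t in range(Z.shape[0]):
        c = F[t] * c + (1 - F[t]) * Z[t]
        H[t] = O[t] * c
    return H

def ifo_pool(Z, F, O, I):
    """ifo-pooling (Eq. 5): independent input and forget gates."""
    H, c = np.zeros_like(Z), np.zeros(Z.shape[1])
    for t in range(Z.shape[0]):
        c = F[t] * c + I[t] * Z[t]
        H[t] = O[t] * c
    return H
```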
A single QRNN layer thus performs an input-dependent pooling, followed by a gated linear combination of convolutional features. As with convolutional neural networks, two or more QRNN layers should be stacked to create a model with the capacity to approximate more complex functions.
2.1 VARIANTS
Motivated by several common natural language tasks, and the long history of work on related architectures, we introduce several extensions to the stacked QRNN described above. Notably, many extensions to both recurrent and convolutional models can be applied directly to the QRNN as it combines elements of both model types.
Regularization An important extension to the stacked QRNN is a robust regularization scheme inspired by recent work in regularizing LSTMs.
The need for an effective regularization method for LSTMs, and dropout's relative lack of efficacy when applied to recurrent connections, led to the development of recurrent dropout schemes, including variational inference-based dropout (Gal & Ghahramani, 2016) and zoneout (Krueger et al., 2016). These schemes extend dropout to the recurrent setting by taking advantage of the repeating structure of recurrent networks, providing more powerful and less destructive regularization.

Variational inference-based dropout locks the dropout mask used for the recurrent connections across timesteps, so a single RNN pass uses a single stochastic subset of the recurrent weights. Zoneout stochastically chooses a new subset of channels to "zone out" at each timestep; for these channels the network copies states from one timestep to the next without modification.
As QRNNs lack recurrent weights, the variational inference approach does not apply. Thus we extended zoneout to the QRNN architecture by modifying the pooling function to keep the previous pooling state for a stochastic subset of channels. Conveniently, this is equivalent to stochastically setting a subset of the QRNN's f gate channels to 1, or applying dropout on 1 − f:

F = 1 − dropout(1 − σ(W_f ∗ X))   (6)

Thus the pooling function itself need not be modified at all. We note that when using an off-the-shelf dropout layer in this context, it is important to remove automatic rescaling functionality from the implementation if it is present. In many experiments, we also apply ordinary dropout between layers, including between word embeddings and the first QRNN layer.
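A sketch of the zoneout-style gate modification of Eq. (6), assuming F comes from the convolutional component; the evaluation-time behavior shown (using the gate unmodified) is an assumption, since the text only specifies the training-time mask.

```python
import numpy as np

def zoneout_forget_gate(F, p=0.1, training=True):
    """Randomly pin forget-gate channels to 1 so they copy the previous state.

    Equivalent to dropout (without rescaling) applied to 1 - F: wherever the
    mask zeroes a channel, the resulting gate value is exactly 1.
    """
    if not training:
        return F                         # assumed eval-time behavior
    mask = (np.random.rand(*F.shape) >= p).astype(F.dtype)
    return 1 - (1 - F) * mask
```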
Densely-Connected Layers We can also extend the QRNN architecture using techniques introduced for convolutional networks. For sequence classification tasks, we found it helpful to use skip-connections between every QRNN layer, a technique termed "dense convolution" by Huang et al. (2016). Where traditional feed-forward or convolutional networks have connections only between subsequent layers, a "DenseNet" with L layers has feed-forward or convolutional connections between every pair of layers, for a total of L(L−1). This can improve gradient flow and convergence properties, especially in deeper networks, although it requires a parameter count that is quadratic in the number of layers.
When applying this technique to the QRNN, we include connections between the input embeddings and every QRNN layer and between every pair of QRNN layers. This is equivalent to concatenating
each QRNN layer's input to its output along the channel dimension before feeding the state into the next layer. The output of the last layer alone is then used as the overall encoding result.
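Structurally, this dense connectivity amounts to growing the channel dimension as the stack deepens; a minimal sketch, assuming each entry of `layers` is a callable QRNN layer mapping a (T, channels) array to a (T, m) array:

```python
import numpy as np

def dense_qrnn_stack(X, layers):
    """Stack QRNN layers with DenseNet-style skip connections.

    Each layer sees the channel-wise concatenation of the embeddings and all
    previous layers' outputs; only the last layer's output is returned.
    """
    inputs, H = X, X
    for layer in layers:
        H = layer(inputs)
        inputs = np.concatenate([inputs, H], axis=1)  # grow channel dimension
    return H
```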
Figure 2: The QRNN encoder-decoder architecture used for machine translation experiments.

Encoder-Decoder Models To demonstrate the generality of QRNNs, we extend the model architecture to sequence-to-sequence tasks, such as machine translation, by using a QRNN as encoder and a modified QRNN, enhanced with attention, as decoder. The motivation for modifying the decoder is that simply feeding the last encoder hidden state (the output of the encoder's pooling layer) into the decoder's recurrent pooling layer, analogously to conventional recurrent encoder-decoder architectures, would not allow the encoder state to affect the gate or update values that are provided to the decoder's pooling layer. This would substantially limit the representational power of the decoder.
Instead, the output of each decoder QRNN layer's convolution functions is supplemented at every timestep with the final encoder hidden state. This is accomplished by adding the result of the convolution for layer ℓ (e.g., W^ℓ_z ∗ X^ℓ, in R^{T×m}) with broadcasting to a linearly projected copy of layer ℓ's last encoder state (e.g., V^ℓ_z h̃^ℓ_T, in R^m):

Z^ℓ = tanh(W^ℓ_z ∗ X^ℓ + V^ℓ_z h̃^ℓ_T)
F^ℓ = σ(W^ℓ_f ∗ X^ℓ + V^ℓ_f h̃^ℓ_T)   (7)
O^ℓ = σ(W^ℓ_o ∗ X^ℓ + V^ℓ_o h̃^ℓ_T),
where the tilde denotes that h̃ is an encoder variable. Encoder-decoder models which operate on long sequences are made significantly more powerful with the addition of soft attention (Bahdanau et al., 2015), which removes the need for the entire input representation to fit into a fixed-length encoding vector. In our experiments, we computed an attentional sum of the encoder's last layer's hidden states. We used the dot products of these encoder hidden states with the decoder's last layer's un-gated hidden states, applying a softmax along the encoder timesteps, to weight the encoder states into an attentional sum k_t for each decoder timestep. This context, and the decoder state, are then fed into a linear layer followed by the output gate:
α_{st} = softmax_{all s}(c^L_t · h̃^L_s)
k_t = Σ_s α_{st} h̃^L_s   (8)
h^L_t = o_t ⊙ (W_k k_t + W_c c^L_t),
where L is the last layer.
While the first step of this attention procedure is quadratic in the sequence length, in practice it takes significantly less computation time than the model's linear and convolutional layers due to the simple and highly parallel dot-product scoring function.
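For reference, a NumPy sketch of the attention step in Eq. (8); the projection matrices and state arrays are placeholders with assumed shapes.

```python
import numpy as np

def qrnn_attention(C_dec, H_enc, O_dec, Wk, Wc):
    """Attentional output layer of the decoder (Eq. 8).

    C_dec: (T_dec, m) un-gated decoder states c^L_t.
    H_enc: (T_enc, m) encoder last-layer hidden states.
    O_dec: (T_dec, m) decoder output gates; Wk, Wc: (m, m) projections.
    """
    scores = C_dec @ H_enc.T                      # dot-product scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over encoder steps
    K = alpha @ H_enc                             # attentional sums k_t
    return O_dec * (K @ Wk.T + C_dec @ Wc.T)      # gated combination h^L_t
```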
Model                                                 Time / Epoch (s)   Test Acc (%)
NBSVM-bi (Wang & Manning, 2012)                              -               91.2
2 layer sequential BoW CNN (Johnson & Zhang, 2014)           -               92.3
Ensemble of RNNs and NB-SVM (Mesnil et al., 2014)            -               92.6
2-layer LSTM (Longpre et al., 2016)                          -               87.6
Residual 2-layer bi-LSTM (Longpre et al., 2016)              -               90.1
Our models
Densely-connected 4-layer LSTM (cuDNN optimized)            480              90.9
Densely-connected 4-layer QRNN                              150              91.4
Densely-connected 4-layer QRNN with k = 4                   160              91.1
Table 1: Accuracy comparison on the IMDb binary sentiment classification task. All of our models use 256 units per layer; all layers other than the first layer, whose filter width may vary, use filter width k = 2. Train times are reported on a single NVIDIA K40 GPU. We exclude semi-supervised models that conduct additional training on the unlabeled portion of the dataset.
# 3 EXPERIMENTS
We evaluate the performance of the QRNN on three different natural language tasks: document-level sentiment classification, language modeling, and character-based neural machine translation. Our QRNN models outperform LSTM-based models of equal hidden size on all three tasks while dramatically improving computation speed. Experiments were implemented in Chainer (Tokui et al.).
3.1 SENTIMENT CLASSIFICATION
We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset (Maas et al., 2011). The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words (Wang & Manning, 2012). We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., Miyato et al. (2016)).
Our best performance on a held-out development set was achieved using a four-layer densely-connected QRNN with 256 units per layer and word vectors initialized using 300-dimensional cased GloVe embeddings (Pennington et al., 2014). Dropout of 0.3 was applied between layers, and we used L2 regularization of 4 × 10⁻⁶. Optimization was performed on minibatches of 24 examples using RMSprop (Tieleman & Hinton, 2012) with learning rate of 0.001, α = 0.9, and ε = 10⁻⁸.
Small batch sizes and long sequence lengths provide an ideal situation for demonstrating the QRNN's performance advantages over traditional recurrent architectures. We observed a speedup of 3.2x on IMDb train time per epoch compared to the optimized LSTM implementation provided in NVIDIA's cuDNN library. For specific batch sizes and sequence lengths, a 16x speed gain is possible. Figure 4 provides extensive speed comparisons. In Figure 3, we visualize the hidden state vectors c^L_t of the final QRNN layer on part of an example from the IMDb dataset. Even without any post-processing, changes in the hidden state are visible and interpretable in regards to the input. This is a consequence of the elementwise nature of the recurrent pooling function, which delays direct interaction between different channels of the hidden state until the computation of the next QRNN layer.
3.2 LANGUAGE MODELING
We replicate the language modeling experiment of Zaremba et al. (2014) and Gal & Ghahramani (2016) to benchmark the QRNN architecture for natural language sequence prediction. The experiment uses a standard preprocessed version of the Penn Treebank (PTB) by Mikolov et al. (2010).
We implemented a gated QRNN model with medium hidden size: 2 layers with 640 units in each layer. Both QRNN layers use a convolutional filter width k of two timesteps. While the "medium" models used in other work (Zaremba et al., 2014; Gal & Ghahramani, 2016) consist of 650 units in each layer, it was more computationally convenient to use a multiple of 32.
[Figure 3 plot: neuron activations of the final QRNN layer, with hidden units along the horizontal axis and timesteps (words) along the vertical axis.]
Figure 3: Visualization of the final QRNN layer's hidden state vectors c^L_t in the IMDb task, with timesteps along the vertical axis. Colors denote neuron activations. After an initial positive statement "This movie is simply gorgeous" (off graph at timestep 9), timestep 117 triggers a reset of most hidden states due to the phrase "not exactly a bad story" (soon after "main weakness is its story"). Only at timestep 158, after "I recommend this movie to everyone, even if you've never played the game", do the hidden units recover.
As the Penn Treebank is a relatively small dataset, preventing overfitting is of considerable importance and a major focus of recent research. It is not obvious in advance which of the many RNN regularization schemes would perform well when applied to the QRNN. Our tests showed encouraging results from zoneout applied to the QRNN's recurrent pooling layer, implemented as described in Section 2.1.
The experimental settings largely followed the "medium" setup of Zaremba et al. (2014). Optimization was performed by stochastic gradient descent (SGD) without momentum. The learning rate was set at 1 for six epochs, then decayed by 0.95 for each subsequent epoch, for a total of 72 epochs. We additionally used L2 regularization of 2 × 10⁻⁴ and rescaled gradients with norm above 10. Zoneout was applied by performing dropout with ratio 0.1 on the forget gates of the QRNN, without rescaling the output of the dropout function. Batches consist of 20 examples, each 105 timesteps.
Comparing our results on the gated QRNN with zoneout to the results of LSTMs with both ordinary and variational dropout in Table 2, we see that the QRNN is highly competitive. The QRNN without zoneout strongly outperforms both our medium LSTM and the medium LSTM of Zaremba et al. (2014) which do not use recurrent dropout and is even competitive with variational LSTMs. This may be due to the limited computational capacity that the QRNN's pooling layer has relative to the LSTM's recurrent weights, providing structural regularization over the recurrence.
Without zoneout, early stopping based upon validation loss was required as the QRNN would begin overfitting. By applying a small amount of zoneout (p = 0.1), no early stopping is required and the QRNN achieves competitive levels of perplexity to the variational LSTM of Gal & Ghahramani (2016), which had variational-inference-based dropout of 0.2 applied recurrently. Their best performing variation also used Monte Carlo (MC) dropout averaging at test time over 1000 different masks, making it computationally more expensive to run.
Model | Parameters | Validation | Test
LSTM (medium) (Zaremba et al., 2014) | 20M | 86.2 | 82.7
Variational LSTM (medium, MC) (Gal & Ghahramani, 2016) | 20M | 81.9 | 79.7
LSTM with CharCNN embeddings (Kim et al., 2016) | 19M | – | 78.9
Zoneout + Variational LSTM (medium) (Merity et al., 2016) | 20M | 84.4 | 80.6
Our models:
LSTM (medium) | 20M | 85.7 | 82.0
QRNN (medium) | 18M | 82.9 | 79.9
QRNN + zoneout (p = 0.1) (medium) | 18M | 82.1 | 78.3
Table 2: Single model perplexity on validation and test sets for the Penn Treebank language modeling task. Lower is better. "Medium" refers to a two-layer network with 640 or 650 hidden units per layer. All QRNN models include dropout of 0.5 on embeddings and between layers. MC refers to Monte Carlo dropout averaging at test time.
[Figure 4, left panel: bar chart of per-batch training time for Chainer's LSTM, the cuDNN LSTM, and the QRNN, broken into RNN, softmax, and optimization overhead.]

Right panel (inference speed advantage of the QRNN layer over an equal-sized cuDNN LSTM layer):
Batch size \ Sequence length | 32 | 64 | 128 | 256 | 512
8 | 5.5x | 8.8x | 11.0x | 12.4x | 16.9x
16 | 5.5x | 6.7x | 7.8x | 8.3x | 10.8x
32 | 4.2x | 4.5x | 4.9x | 4.9x | 6.4x
64 | 3.0x | 3.0x | 3.0x | 3.0x | 3.7x
128 | 2.1x | 1.9x | 2.0x | 2.0x | 2.4x
256 | 1.4x | 1.4x | 1.3x | 1.3x | 1.3x
Figure 4: Left: Training speed for two-layer 640-unit PTB LM on a batch of 20 examples of 105 timesteps. "RNN" and "softmax" include the forward and backward times, while "optimization overhead" includes gradient clipping, L2 regularization, and SGD computations. Right: Inference speed advantage of a 320-unit QRNN layer alone over an equal-sized cuDNN LSTM layer for data with the given batch size and sequence length. Training results are similar.
When training on the PTB dataset with an NVIDIA K40 GPU, we found that the QRNN is substantially faster than a standard LSTM, even when comparing against the optimized cuDNN LSTM. In Figure 4 we provide a breakdown of the time taken for Chainer's default LSTM, the cuDNN LSTM, and QRNN to perform a full forward and backward pass on a single batch during training of the RNN LM on PTB. For both LSTM implementations, running time was dominated by the RNN computations, even with the highly optimized cuDNN implementation. For the QRNN implementation, however, the "RNN" layers are no longer the bottleneck. Indeed, there are diminishing returns from further optimization of the QRNN itself, as the softmax and optimization overhead take equal or greater time. Note that the softmax, over a vocabulary size of only 10,000 words, is relatively small; for tasks with larger vocabularies, the softmax would likely dominate computation time.
It is also important to note that the cuDNN library's RNN primitives do not natively support any form of recurrent dropout. That is, running an LSTM that uses a state-of-the-art regularization scheme at cuDNN-like speeds would likely require an entirely custom kernel.
3.3 CHARACTER-LEVEL NEURAL MACHINE TRANSLATION
We evaluate the sequence-to-sequence QRNN architecture described in 2.1 on a challenging neural machine translation task, IWSLT German–English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a unified vocabulary of 187 Unicode code points.
Our best performance on a development set (TED.tst2013) was achieved using a four-layer encoder–decoder QRNN with 320 units per layer, no dropout or L2 regularization, and gradient rescaling to a maximum magnitude of 5. Inputs were supplied to the encoder reversed, while the encoder convolutions were not masked. The first encoder layer used convolutional filter width k = 6, while the other encoder layers used k = 2. Optimization was performed for 10 epochs on minibatches of 16 examples using Adam (Kingma & Ba, 2014) with α = 0.001, β1 = 0.9, β2 = 0.999, and ε = 10−8. Decoding was performed using beam search with beam width 8 and length normalization α = 0.6. The modified log-probability ranking criterion is provided in the appendix.
Results using this architecture were compared to an equal-sized four-layer encoder–decoder LSTM with attention, applying dropout of 0.2. We again optimized using Adam; other hyperparameters were equal to their values for the QRNN and the same beam search procedure was applied. Table 3 shows that the QRNN outperformed the character-level LSTM, almost matching the performance of a word-level attentional baseline.
Model | Train Time | BLEU (TED.tst2014)
Word-level LSTM w/attn (Ranzato et al., 2016) | – | 20.2
Word-level CNN w/attn, input feeding (Wiseman & Rush, 2016) | – | 24.0
Our models:
Char-level 4-layer LSTM | 4.2 hrs/epoch | 16.53
Char-level 4-layer QRNN with k = 6 | 1.0 hrs/epoch | 19.41
Table 3: Translation performance, measured by BLEU, and train speed in hours per epoch, for the IWSLT German-English spoken language translation task. All models were trained on in-domain data only, and use negative log-likelihood as the training criterion. Our models were trained for 10 epochs. The QRNN model uses k = 2 for all layers other than the first encoder layer.
# 4 RELATED WORK
Exploring alternatives to traditional RNNs for sequence tasks is a major area of current research. Quasi-recurrent neural networks are related to several such recently described models, especially the strongly-typed recurrent neural networks (T-RNN) introduced by Balduzzi & Ghifary (2016). While the motivation and constraints described in that work are different, Balduzzi & Ghifary (2016)'s concepts of "learnware" and "firmware" parallel our discussion of convolution-like and pooling-like subcomponents. As the use of a fully connected layer for recurrent connections violates the constraint of "strong typing", all strongly-typed RNN architectures (including the T-RNN, T-GRU, and T-LSTM) are also quasi-recurrent. However, some QRNN models (including those with attention or skip-connections) are not "strongly typed". In particular, a T-RNN differs from a QRNN as described in this paper with filter size 1 and f-pooling only in the absence of an activation function on z. Similarly, T-GRUs and T-LSTMs differ from QRNNs with filter size 2 and fo- or ifo-pooling respectively in that they lack tanh on z and use tanh rather than sigmoid on o.
The QRNN is also related to work in hybrid convolutional–recurrent models. Zhou et al. (2015) apply CNNs at the word level to generate n-gram features used by an LSTM for text classification. Xiao & Cho (2016) also tackle text classification by applying convolutions at the character level, with a stride to reduce sequence length, then feeding these features into a bidirectional LSTM. A similar approach was taken by Lee et al. (2016) for character-level machine translation. Their model's encoder uses a convolutional layer followed by max-pooling to reduce sequence length, a four-layer highway network, and a bidirectional GRU. The parallelism of the convolutional, pooling, and highway layers allows training speed comparable to subword-level models without hard-coded text segmentation.
The QRNN encoder–decoder model shares the favorable parallelism and path-length properties exhibited by the ByteNet (Kalchbrenner et al., 2016), an architecture for character-level machine translation based on residual convolutions over binary trees. Their model was constructed to achieve three desired properties: parallelism, linear-time computational complexity, and short paths between any pair of words in order to better propagate gradient signals.
# 5 CONCLUSION
Intuitively, many aspects of the semantics of long sequences are context-invariant and can be computed in parallel (e.g., convolutionally), but some aspects require long-distance context and must be computed recurrently. Many existing neural network architectures either fail to take advantage of the contextual information or fail to take advantage of the parallelism. QRNNs exploit both parallelism and context, exhibiting advantages from both convolutional and recurrent neural networks. QRNNs have better predictive accuracy than LSTM-based models of equal hidden size, even though they use fewer parameters and run substantially faster. Our experiments show that the speed and accuracy advantages remain consistent across tasks and at both word and character levels.
Extensions to both CNNs and RNNs are often directly applicable to the QRNN, while the model's hidden states are more interpretable than those of other recurrent architectures, as its channels maintain their independence across timesteps. We believe that QRNNs can serve as a building block for long-sequence tasks that were previously impractical with traditional RNNs.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
David Balduzzi and Muhammad Ghifary. Strongly-typed recurrent neural networks. In ICML, 2016.
James Bradbury and Richard Socher. MetaMind neural machine translation system for WMT 2016. In Proceedings of the First Conference on Machine Translation, Berlin, Germany. Association for Computational Linguistics, 2016.
Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735–1780, Nov 1997. ISSN 0899-7667.
Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058, 2014.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In ICML, 2016.
Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.
Shayne Longpre, Sabeek Pradhan, Caiming Xiong, and Richard Socher. A way out of the odyssey: Analyzing and combining recent insights for LSTMs. Submitted to ICLR, 2016.
M. T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015.
Andrew L Maas, Andrew Y Ng, and Christopher Potts. Multi-dimensional sentiment analysis with learned representations. Technical report, 2011.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335, 2014.

Tomas Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In ICLR, 2016.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
Seiya Tokui, Kenta Oono, and Shohei Hido. Chainer: A next-generation open source framework for deep learning.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012.
Xin Wang, Yuanchao Liu, Chengjie Sun, Baoxun Wang, and Xiaolong Wang. Predicting polarities of tweets by composing word embeddings with long short-term memory. In ACL, 2015.
Sam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960, 2016.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016.
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.

Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis Lau. A C-LSTM neural network for text classification. arXiv preprint arXiv:1511.08630, 2015.
# APPENDIX
BEAM SEARCH RANKING CRITERION
The modified log-probability ranking criterion we used in beam search for translation experiments is:
$$\log(P_{\mathrm{cand}}) = \frac{T_{\mathrm{trg}} + \alpha}{T_{\mathrm{trg}} + \alpha T} \sum_{i=1}^{T} \log p(w_i \mid w_1 \ldots w_{i-1}), \qquad (9)$$
where $\alpha$ is a length normalization parameter (Wu et al., 2016), $w_i$ is the $i$th output character, and $T_{\mathrm{trg}}$ is a "target length" equal to the source sentence length plus five characters. This reduces at $\alpha = 0$ to ordinary beam search with probabilities:
$$\log(P_{\mathrm{cand}}) = \sum_{i=1}^{T} \log p(w_i \mid w_1 \ldots w_{i-1}), \qquad (10)$$
and at α = 1 to beam search with probabilities normalized by length (up to the target length):
$$\log(P_{\mathrm{cand}}) \propto \frac{1}{T} \sum_{i=1}^{T} \log p(w_i \mid w_1 \ldots w_{i-1}). \qquad (11)$$
Conveniently, this ranking criterion can be computed at intermediate beam-search timesteps, obviating the need to apply a separate reranking on complete hypotheses.
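A small sketch of this criterion as reconstructed in Eq. (9) above (the exact form of the scaling factor is our reconstruction of the garbled source and should be treated as an assumption); the target length $T_{\mathrm{trg}}$ is the source length plus five characters, as stated in the text:

```python
import numpy as np

def ranking_score(log_probs, src_len, alpha=0.6):
    """Modified log-probability ranking score of a (possibly partial)
    beam hypothesis, per Eq. (9). At alpha = 0 the scale is 1 and this
    reduces to the ordinary sum of log-probabilities.
    """
    T = len(log_probs)
    T_trg = src_len + 5
    scale = (T_trg + alpha) / (T_trg + alpha * T)
    return scale * np.sum(log_probs)
```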
arXiv:1611.01600v3 [cs.NE] 10 May 2018
Published as a conference paper at ICLR 2017
# LOSS-AWARE BINARIZATION OF DEEP NETWORKS
Lu Hou, Quanming Yao, James T. Kwok Department of Computer Science and Engineering Hong Kong University of Science and Technology Clear Water Bay, Hong Kong {lhouab,qyaoaa,jamesk}@cse.ust.hk
# ABSTRACT
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications with additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximations and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
# 1 INTRODUCTION
Recently, deep neural networks have achieved state-of-the-art performance in various tasks such as speech recognition, visual object recognition, and image classification (LeCun et al., 2015). Though powerful, the large number of network weights leads to space and time inefficiencies in both training and storage. For instance, the popular AlexNet, VGG-16 and ResNet-18 all require hundreds of megabytes to store, and billions of high-precision operations for classification. This limits their use in embedded systems, smart phones and other portable devices that are now everywhere.
To alleviate this problem, a number of approaches have been recently proposed. One attempt first trains a neural network and then compresses it (Han et al., 2016; Kim et al., 2016). Instead of this two-step approach, it is more desirable to train and compress the network simultaneously. Example approaches include tensorizing (Novikov et al., 2015), parameter quantization (Gong et al., 2014), and binarization (Courbariaux et al., 2015; Hubara et al., 2016; Rastegari et al., 2016). In particular, binarization only requires one bit for each weight value. This can significantly reduce storage, and also eliminates most multiplications during the forward pass.
Courbariaux et al. (2015) pioneered neural network binarization with the BinaryConnect algorithm, which achieves state-of-the-art results on many classification tasks. Besides binarizing the weights, Hubara et al. (2016) further binarized the activations. Rastegari et al. (2016) also learned to scale the binarized weights, and obtained better results. Besides, they proposed the XNOR-network with both weights and activations binarized as in (Hubara et al., 2016). Instead of binarization, ternary-connect quantizes each weight to {−1, 0, 1} (Lin et al., 2016). Similarly, the ternary weight network (Li & Liu, 2016) and DoReFa-net (Zhou et al., 2016) quantize weights to three levels or more. However, though using more bits allows more accurate weight approximations, specialized hardware is needed for the underlying non-binary operations.
Besides the huge amount of computation and storage involved, deep networks are difficult to train because of the highly nonconvex objective and inhomogeneous curvature. To alleviate this problem, Hessian-free methods (Martens & Sutskever, 2012) use the second-order information by conjugate gradient. A related method is natural gradient descent (Pascanu & Bengio, 2014), which utilizes geometry of the underlying parameter manifold.
Another approach uses element-wise adaptive learning rates, as in Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2015). This can also be considered as preconditioning that rescales the gradient so that all dimensions have similar curvatures.
In this paper, instead of directly approximating the weights, we propose to consider the effect of binarization on the loss during binarization. We formulate this as an optimization problem using the proximal Newton algorithm (Lee et al., 2014) with a diagonal Hessian. The crux of proximal algorithms is the proximal step. We show that this step has a closed-form solution, whose form is similar to the use of element-wise adaptive learning rate. The proposed method also reduces to BinaryConnect (Courbariaux et al., 2015) and the Binary-Weight-Network (Rastegari et al., 2016) when curvature information is dropped. Experiments on both feedforward and recurrent neural network models show that it outperforms existing binarization algorithms. In particular, BinaryConnect fails on deep recurrent networks because of the exploding gradient problem, while the proposed method still demonstrates robust performance.
Notations: For a vector $x$, $\sqrt{x}$ denotes the element-wise square root, $|x|$ denotes the element-wise absolute value, $\|x\|_p = (\sum_i |x_i|^p)^{1/p}$ is the p-norm of $x$, $x \succ 0$ denotes that all entries of $x$ are positive, $\mathrm{sign}(x)$ is the vector with $[\mathrm{sign}(x)]_i = 1$ if $x_i > 0$ and $-1$ otherwise, and $\mathrm{Diag}(x)$ returns a diagonal matrix with $x$ on the diagonal. For two vectors $x$ and $y$, $x \odot y$ denotes the element-wise multiplication and $x \oslash y$ denotes the element-wise division. For a matrix $X$, $\mathrm{vec}(X)$ returns the vector obtained by stacking the columns of $X$, and $\mathrm{diag}(X)$ returns a diagonal matrix whose diagonal elements are extracted from the diagonal of $X$.
# 2 RELATED WORK
2.1 WEIGHT BINARIZATION IN DEEP NETWORKS
In a feedforward neural network with $L$ layers, let the weight matrix (or tensor in the case of a convolutional layer) at layer $l$ be $W_l$. We combine the (full-precision) weights from all layers as $w = [w_1^\top, w_2^\top, \dots, w_L^\top]^\top$, where $w_l = \mathrm{vec}(W_l)$. Analogously, the binarized weights are denoted as $\hat{w} = [\hat{w}_1^\top, \hat{w}_2^\top, \dots, \hat{w}_L^\top]^\top$. As it is essential to use full-precision weights during updates (Courbariaux et al., 2015), typically binarized weights are only used during the forward and backward propagations, but not on parameter update. At the $t$th iteration, the (full-precision) weight $w_l^t$ is updated by using the backpropagated gradient $\nabla_l \ell(\hat{w}^{t-1})$ (where $\ell$ is the loss and $\nabla_l \ell(\hat{w}^{t-1})$ is the partial derivative of $\ell$ w.r.t. the weights of the $l$th layer). In the next forward propagation, it is then binarized as $\hat{w}_l^t = \mathrm{Binarize}(w_l^t)$, where $\mathrm{Binarize}(\cdot)$ is some binarization scheme.
The two most popular binarization schemes are BinaryConnect (Courbariaux et al., 2015) and Binary-Weight-Network (BWN) (Rastegari et al., 2016). In BinaryConnect, binarization is performed by transforming each element of $w_l^t$ to $-1$ or $+1$ using the sign function:1

$$\mathrm{Binarize}(w_l^t) = \mathrm{sign}(w_l^t). \qquad (1)$$

Besides the binarized weight matrix, a scaling parameter is also learned in BWN. In other words, $\mathrm{Binarize}(w_l^t) = \alpha_l^t b_l^t$, where $\alpha_l^t > 0$ and $b_l^t$ is binary. They are obtained by minimizing the difference between $w_l^t$ and $\alpha_l^t b_l^t$:

$$\alpha_l^t = \frac{\|w_l^t\|_1}{n_l}, \quad b_l^t = \mathrm{sign}(w_l^t), \qquad (2)$$

where $n_l$ is the number of weights in layer $l$. Hubara et al. (2016) further binarized the activations as $\hat{x}_l^t = \mathrm{sign}(x_l^t)$, where $x_l^t$ is the activation of the $l$th layer at iteration $t$.
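A minimal NumPy sketch of these two schemes, a direct transcription of Eqs. (1) and (2) using the sign convention from the Notations paragraph:

```python
import numpy as np

def sign_pm1(w):
    """sign(.) as defined in the Notations: +1 for positive entries, -1 otherwise."""
    return np.where(w > 0, 1.0, -1.0)

def binarize_binaryconnect(w):
    """BinaryConnect, Eq. (1): elementwise sign of the full-precision weights."""
    return sign_pm1(w)

def binarize_bwn(w):
    """BWN, Eq. (2): scale alpha = ||w||_1 / n_l and binary b = sign(w),
    minimizing the approximation error ||w - alpha * b||."""
    return np.abs(w).mean(), sign_pm1(w)
```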
2.2 PROXIMAL NEWTON ALGORITHM
The proximal Newton algorithm (Lee et al., 2014) has been popularly used for solving composite optimization problems of the form
$$\min_x \ f(x) + g(x),$$
1A stochastic binarization scheme is also proposed in (Courbariaux et al., 2015). However, it is much more computationally expensive than (1) and so will not be considered here.
where f is convex and smooth, and g is convex but possibly nonsmooth. At iteration t, it generates the next iterate as
$$x_{t+1} = \arg\min_x \ \nabla f(x_t)^\top (x - x_t) + \frac{1}{2}(x - x_t)^\top H (x - x_t) + g(x),$$
where $H$ is an approximate Hessian matrix of $f$ at $x_t$. With the use of second-order information, the proximal Newton algorithm converges faster than the proximal gradient algorithm (Lee et al., 2014). Recently, by assuming that $f$ and $g$ have difference-of-convex decompositions (Yuille & Rangarajan, 2002), the proximal Newton algorithm is also extended to the case where $g$ is nonconvex (Rakotomamonjy et al., 2016).
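As an illustrative aside (not the binarization step used later in this paper): with a diagonal $H = \mathrm{Diag}(d)$, $d \succ 0$, and the common choice $g(x) = \lambda\|x\|_1$, the iterate above separates per coordinate and reduces to soft-thresholding around a scaled gradient step. A sketch under those assumptions:

```python
import numpy as np

def prox_newton_step_l1(x, grad, d, lam):
    """One proximal Newton step for f + lam*||.||_1 with H = Diag(d), d > 0.

    Minimizing grad^T(y - x) + 0.5*(y - x)^T Diag(d) (y - x) + lam*||y||_1
    coordinate-wise gives soft-thresholding around u = x - grad / d,
    with per-coordinate threshold lam / d.
    """
    u = x - grad / d
    return np.sign(u) * np.maximum(np.abs(u) - lam / d, 0.0)
```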
# 3 LOSS-AWARE BINARIZATION
As can be seen, existing weight binarization methods (Courbariaux et al., 2015; Rastegari et al., 2016) simply find the closest binary approximation of $w$, and ignore its effect on the loss. In this paper, we consider the loss directly during binarization. As in (Rastegari et al., 2016), we also binarize the weight $w_l$ in each layer as $\hat{w}_l = \alpha_l b_l$, where $\alpha_l > 0$ and $b_l$ is binary.
In the following, we make two assumptions on $\ell$: (A1) $\ell$ is continuously differentiable with Lipschitz-continuous gradient, i.e., there exists $\beta > 0$ such that $\|\nabla\ell(u) - \nabla\ell(v)\|_2 \le \beta \|u - v\|_2$ for any $u, v$; (A2) $\ell$ is bounded from below.
3.1 BINARIZATION USING PROXIMAL NEWTON ALGORITHM
We formulate weight binarization as the following optimization problem:
$$\min_{\hat{w}} \ \ell(\hat{w}) \qquad (3)$$
$$\text{s.t.} \ \hat{w}_l = \alpha_l b_l, \ \alpha_l > 0, \ b_l \in \{\pm 1\}^{n_l}, \ l = 1, \dots, L, \qquad (4)$$

where $\ell$ is the loss. Let $C$ be the feasible region in (4), and define its indicator function: $I_C(\hat{w}) = 0$ if $\hat{w} \in C$, and $\infty$ otherwise. Problem (3) can then be rewritten as

$$\min_{\hat{w}} \ \ell(\hat{w}) + I_C(\hat{w}). \qquad (5)$$
We solve (5) using the proximal Newton method (Section 2.2). At iteration $t$, the smooth term $\ell(\hat{w})$ is replaced by the second-order expansion

$$\ell(\hat{w}^{t-1}) + \nabla\ell(\hat{w}^{t-1})^\top (\hat{w} - \hat{w}^{t-1}) + \frac{1}{2}(\hat{w} - \hat{w}^{t-1})^\top H^{t-1} (\hat{w} - \hat{w}^{t-1}),$$
where $H^{t-1}$ is an estimate of the Hessian of $\ell$ at $\hat{w}^{t-1}$. Note that using the Hessian to capture second-order information is essential for efficient neural network training, as $\ell$ is often flat in some directions but highly curved in others. By rescaling the gradient, the loss has similar curvatures along all directions. This is also called preconditioning in the literature (Dauphin et al., 2015a).
For neural networks, the exact Hessian is rarely positive semi-definite. This can be problematic as the nonconvex objective leads to indefinite quadratic optimization. Moreover, computing the exact Hessian is both time- and space-inefficient on large networks. To alleviate these problems, a popular approach is to approximate the Hessian by a diagonal positive definite matrix $D$. One popular choice is the efficient Jacobi preconditioner. Though an efficient approximation of the Hessian under certain conditions, it is not competitive for indefinite matrices (Dauphin et al., 2015a). More recently, it is shown that equilibration provides a more robust preconditioner in the presence of saddle points (Dauphin et al., 2015a). This is also adopted by popular stochastic optimization algorithms such as RMSprop (Tieleman & Hinton, 2012) and Adam (Kingma & Ba, 2015). Specifically, the second moment $v$ in these algorithms is an estimator of $\mathrm{diag}(H^2)$ (Dauphin et al., 2015b). Here, we use the square root of this $v$, which is readily available in Adam, to construct $D = \mathrm{Diag}([\mathrm{diag}(D_1)^\top, \dots, \mathrm{diag}(D_L)^\top]^\top)$, where $D_l$ is the approximate diagonal Hessian at layer $l$. In general, other estimators of $\mathrm{diag}(H)$ can also be used.
At the tth iteration of the proximal Newton algorithm, the following subproblem is solved:
$$\min_{\hat{w}^t} \ \nabla\ell(\hat{w}^{t-1})^\top (\hat{w}^t - \hat{w}^{t-1}) + \frac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top D^{t-1} (\hat{w}^t - \hat{w}^{t-1}) \qquad (6)$$
$$\text{s.t.} \ \hat{w}_l^t = \alpha_l^t b_l^t, \ \alpha_l^t > 0, \ b_l^t \in \{\pm 1\}^{n_l}, \ l = 1, \dots, L.$$
Proposition 3.1 Let $d_l^{t-1} \equiv \mathrm{diag}(D_l^{t-1})$, and

$$w_l^t \equiv \hat{w}_l^{t-1} - \nabla_l \ell(\hat{w}^{t-1}) \oslash d_l^{t-1}. \qquad (7)$$

The optimal solution of (6) can be obtained in closed form as

$$\alpha_l^t = \frac{\|d_l^{t-1} \odot w_l^t\|_1}{\|d_l^{t-1}\|_1}, \quad b_l^t = \mathrm{sign}(w_l^t). \qquad (8)$$
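A minimal NumPy sketch of this closed-form proximal step for one layer, a direct transcription of Eqs. (7) and (8); the name `w_prev` stands for the weights carried between iterations (the full-precision variant is discussed after Remark 3.2 below):

```python
import numpy as np

def lab_proximal_step(w_prev, grad_l, d_prev):
    """Closed-form solution of subproblem (6) for one layer (Proposition 3.1).

    w_prev: previous-iteration weights; grad_l: gradient of the loss w.r.t.
    this layer's weights; d_prev: diagonal curvature estimate d_l (all > 0).
    """
    w = w_prev - grad_l / d_prev                          # Eq. (7)
    alpha = np.sum(d_prev * np.abs(w)) / np.sum(d_prev)   # Eq. (8)
    b = np.where(w > 0, 1.0, -1.0)                        # Eq. (8)
    return alpha, b, w
```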
Theorem 3.1 Assume that $[d_l^t]_k > \beta$ for all $l, k, t$. Then the objective of (5) produced by the proximal Newton algorithm (with the closed-form update of $\hat{w}^t$ in Proposition 3.1) converges.
Note that both the loss $\ell$ and the indicator function $I_C(\cdot)$ in (5) are not convex. Hence, the convergence analysis of the proximal Newton algorithm in (Lee et al., 2014), which is only for convex problems, cannot be applied. Recently, Rakotomamonjy et al. (2016) proposed a nonconvex proximal Newton extension. However, it assumes a difference-of-convex decomposition which does not hold here.
Remark 3.1 When $D_l^{t-1} = \lambda I$, i.e., the curvature is the same for all dimensions in the $l$th layer, (8) reduces to the BWN solution in (2). In other words, BWN corresponds to using the proximal gradient algorithm, while the proposed method corresponds to the proximal Newton algorithm with diagonal Hessian. In composite optimization, it is known that the proximal Newton method is more efficient than the proximal gradient algorithm (Lee et al., 2014; Rakotomamonjy et al., 2016).
Remark 3.2 When $\alpha_l^t = 1$, (8) reduces to $\mathrm{sign}(w_l^t)$, which is the BinaryConnect solution in (1).
From (7) and (8), each iteration first performs gradient descent along $\nabla\ell(\hat{w}^{t-1})$ with an adaptive learning rate $1 \oslash d_l^{t-1}$, and then projects it to a binary solution. As discussed in (Courbariaux et al., 2015), it is important to keep a full-precision weight during training. Hence, we replace (7) by $w_l^t \leftarrow w_l^{t-1} - \nabla_l \ell(\hat{w}^{t-1}) \oslash d_l^{t-1}$. The whole procedure, which will be called Loss-Aware Binarization (LAB), is shown in Algorithm 1. In steps 5 and 6, following (Li & Liu, 2016), we first rescale the input $x_{l-1}$ to the $l$th layer with $\alpha_l$, so that multiplications in dot products and convolutions become additions.
While binarizing weights changes most multiplications to additions, binarizing both weights and activations saves even more computations as additions are further changed to XNOR bit operations (Hubara et al., 2016). Our Algorithm 1 can also be easily extended by binarizing the activations with the simple sign function.
3.2 EXTENSION TO RECURRENT NEURAL NETWORKS
The proposed method can be easily extended to recurrent neural networks. Let $x_l$ and $h_l$ be the input and hidden states, respectively, at time step (or depth) $l$. A typical recurrent neural network has a recurrence of the form $h_l = W_x x_l + W_h \sigma(h_{l-1}) + b$ (equivalent to the more widely known $h_l = \sigma(W_x x_l + W_h h_{l-1} + b)$ (Pascanu et al., 2013)). We binarize both the input-to-hidden weight $W_x$ and the hidden-to-hidden weight $W_h$. Since weights are shared across time in a recurrent network, we only need to binarize $W_x$ and $W_h$ once in each forward propagation. Besides weights, one can also binarize the activations (of the inputs and hidden states) as in the previous section.
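A sketch of this weight sharing: both matrices are binarized once per forward pass and then reused at every time step. The `binarize` argument stands for whichever scheme returns a scale and a binary matrix (e.g., BWN or LAB); the shapes and the tanh nonlinearity are illustrative assumptions:

```python
import numpy as np

def rnn_forward_binarized(X, Wx, Wh, b, binarize, sigma=np.tanh):
    """Forward pass of h_l = Wx x_l + Wh sigma(h_{l-1}) + b with binarized,
    time-shared weights. `binarize` returns (alpha, B) with B in {-1, +1}.
    """
    ax, Bx = binarize(Wx)          # binarize once ...
    ah, Bh = binarize(Wh)
    h, H = np.zeros(Wh.shape[0]), []
    for x in X:                    # ... then reuse at every time step
        h = ax * (Bx @ x) + ah * (Bh @ sigma(h)) + b
        H.append(h)
    return np.stack(H)
```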
In deep networks, the backpropagated gradient takes the form of a product of Jacobian matrices (Pascanu et al., 2013). In a vanilla recurrent neural network,2 for activations $h_p$ and $h_q$ at depths $p$ and $q$, respectively (where $p > q$), $\frac{\partial h_p}{\partial h_q} = \prod_{q < l \le p} \frac{\partial h_l}{\partial h_{l-1}} = \prod_{q < l \le p} W_h^\top \mathrm{diag}(\sigma'(h_{l-1}))$. The necessary condition for exploding gradients is that the largest singular value $\lambda_1(W_h)$ of $W_h$ is larger than some given constant (Pascanu et al., 2013). The following Proposition shows that for any binary $W_h$, its largest singular value is lower-bounded by the square root of its dimension.
Proposition 3.2 For any $W \in \{-1, +1\}^{m \times n}$ ($m \le n$), $\lambda_1(W) \ge \sqrt{n}$.
2Here, we consider the vanilla recurrent neural network for simplicity. It can be shown that a similar behavior holds for the more commonly used LSTM.
Algorithm 1 Loss-Aware Binarization (LAB) for training a feedforward neural network.
Input: Minibatch $\{(x^t, y^t)\}$, current full-precision weights $\{w_l^t\}$, first moment $\{m_l^{t-1}\}$, second moment $\{v_l^{t-1}\}$, and learning rate $\eta^t$.
1: Forward Propagation
2: for $l = 1$ to $L$ do
3: $\alpha_l^t = \|d_l^{t-1} \odot w_l^t\|_1 / \|d_l^{t-1}\|_1$;
4: $b_l^t = \mathrm{sign}(w_l^t)$;
5: rescale the layer-$l$ input: $\tilde{x}_{l-1}^t = \alpha_l^t x_{l-1}^t$;
6: compute $z_l^t$ with input $\tilde{x}_{l-1}^t$ and binary weight $b_l^t$;
7: apply batch normalization and nonlinear activation to $z_l^t$ to obtain $x_l^t$;
8: end for
9: compute the loss $\ell^t$ using $x_L^t$ and $y^t$;
10: Backward Propagation
11: initialize the output layer's activation gradient $\frac{\partial \ell}{\partial x_L}$;
12: for $l = L$ to $2$ do
13: compute $\frac{\partial \ell}{\partial x_{l-1}}$ using $\frac{\partial \ell}{\partial x_l}$, $\alpha_l^t$ and $b_l^t$;
14: end for
15: Update parameters using Adam
16: for $l = 1$ to $L$ do
17: compute gradient $\nabla_l \ell(\hat{w}^t)$ using $\frac{\partial \ell}{\partial x_l}$ and $\tilde{x}_{l-1}^t$;
18: update first moment $m_l^t = \beta_1 m_l^{t-1} + (1 - \beta_1)\nabla_l \ell(\hat{w}^t)$;
19: update second moment $v_l^t = \beta_2 v_l^{t-1} + (1 - \beta_2)(\nabla_l \ell(\hat{w}^t) \odot \nabla_l \ell(\hat{w}^t))$;
20: compute unbiased first moment $\hat{m}_l^t = m_l^t / (1 - \beta_1^t)$;
21: compute unbiased second moment $\hat{v}_l^t = v_l^t / (1 - \beta_2^t)$;
22: compute the current curvature matrix $d_l^t = \frac{1}{\eta^t}(\epsilon + \sqrt{\hat{v}_l^t})$;
23: update full-precision weights $w_l^{t+1} = w_l^t - \hat{m}_l^t \oslash d_l^t$;
24: update learning rate $\eta^{t+1} = \mathrm{UpdateRule}(\eta^t, t + 1)$;
25: end for
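A sketch of the per-layer parameter update (steps 18–23 of Algorithm 1), showing how Adam's second moment yields the curvature estimate $d$ used both here and in the proximal step; the hyperparameter defaults are the usual Adam choices and are assumptions:

```python
import numpy as np

def lab_adam_update(w, grad, m, v, t, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One full-precision weight update for a single layer (steps 18-23)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    d = (eps + np.sqrt(v_hat)) / lr     # curvature estimate d_l^t (step 22)
    w = w - m_hat / d                   # full-precision update (step 23)
    return w, m, v, d                   # d also feeds the proximal step (8)
```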
Thus, with weight binarization as in BinaryConnect, the exploding gradient problem becomes more severe as the weight matrices are often large. On the other hand, recall that $\lambda_1(c\hat{W}_h) = c\lambda_1(\hat{W}_h)$ for any non-negative $c$. The proposed method alleviates this exploding gradient problem by adaptively learning the scaling parameter $\alpha_h$.
# 4 EXPERIMENTS
In this section, we perform experiments on the proposed binarization scheme with both feedforward networks (Sections 4.1 and 4.2) and recurrent neural networks (Sections 4.3 and 4.4).
4.1 FEEDFORWARD NEURAL NETWORKS
We compare the original full-precision network (without binarization) with the following weight-binarized networks: (i) BinaryConnect; (ii) Binary-Weight-Network (BWN); and (iii) the proposed Loss-Aware Binarized network (LAB). We also compare with networks having both weights and activations binarized:3 (i) BinaryNeuralNetwork (BNN) (Hubara et al., 2016), the weight-and-activation binarized counterpart of BinaryConnect; (ii) XNOR-Network (XNOR) (Rastegari et al., 2016), the counterpart of BWN; (iii) LAB2, the counterpart of the proposed method, which binarizes weights using the proximal Newton method and binarizes activations using a simple sign function.
The setup is similar to that in Courbariaux et al. (2015). We do not perform data augmentation or unsupervised pretraining. Experiments are performed on three commonly used data sets:
3We use the straight-through-estimator (Hubara et al., 2016) to compute the gradient involving the sign function.
1. MNIST: This contains 28 × 28 gray images from ten digit classes. We use 50000 images for training, another 10000 for validation, and the remaining 10000 for testing. We use the 4-layer model:

784FC − 2048FC − 2048FC − 2048FC − 10SVM,

where FC is a fully-connected layer, and SVM is an L2-SVM output layer using the square hinge loss. Batch normalization, with a minibatch size 100, is used to accelerate learning. The maximum number of epochs is 50. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.01 (resp. 0.005), and decays by a factor of 0.1 at epochs 15 and 25.
2. CIFAR-10: This contains 32 × 32 color images from ten object classes. We use 45000 images for training, another 5000 for validation, and the remaining 10000 for testing. The images are preprocessed with global contrast normalization and ZCA whitening. We use the VGG-like architecture:

(2×128C3) − MP2 − (2×256C3) − MP2 − (2×512C3) − MP2 − (2×1024FC) − 10SVM,

where C3 is a 3 × 3 ReLU convolution layer, and MP2 is a 2 × 2 max-pooling layer. Batch normalization, with a minibatch size of 50, is used. The maximum number of epochs is 200. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.03 (resp. 0.02), and decays by a factor of 0.5 after every 15 epochs.

3. SVHN: This contains 32 × 32 color images from ten digit classes. We use 598388 images for training, another 6000 for validation, and the remaining 26032 for testing. The images are preprocessed with global and local contrast normalization. The model used is:

(2×64C3) − MP2 − (2×128C3) − MP2 − (2×256C3) − MP2 − (2×1024FC) − 10SVM.
Batch normalization, with a minibatch size of 50, is used. The maximum number of epochs is 50. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.001 (resp. 0.0005), and decays by a factor of 0.1 at epochs 15 and 25.
Since binarization is a form of regularization (Courbariaux et al., 2015), we do not use other regularization methods (like Dropout). All the weights are initialized as in (Glorot & Bengio, 2010). Adam (Kingma & Ba, 2015) is used as the optimization solver.
Table 1 shows the test classification error rates, and Figure 1 shows the convergence of LAB. As can be seen, the proposed LAB achieves the lowest error on MNIST and SVHN. It even outperforms the full-precision network on MNIST, as weight binarization serves as a regularizer. With the use of curvature information, LAB outperforms BinaryConnect and BWN. On CIFAR-10, LAB is slightly outperformed by BinaryConnect, but is still better than the full-precision network. Among the schemes that binarize both weights and activations, LAB2 also outperforms BNN and the XNOR-Network.
# Table 1: Test error rates (%) for feedforward neural network models.
Model | MNIST | CIFAR-10 | SVHN
full-precision (no binarization) | 1.190 | 11.900 | 2.277
BinaryConnect (binarize weights) | 1.280 | 9.860 | 2.450
BWN (binarize weights) | 1.310 | 10.510 | 2.535
LAB (binarize weights) | 1.180 | 10.500 | 2.354
BNN (binarize weights and activations) | 1.470 | 12.870 | 3.500
XNOR (binarize weights and activations) | 1.530 | 12.620 | 3.435
LAB2 (binarize weights and activations) | 1.380 | 12.280 | 3.362
4.2 VARYING THE NUMBER OF FILTERS IN CNN
As in Zhou et al. (2016), we study sensitivity to network width by varying the number of filters K on the SVHN data set. As in Section 4.1, we use the model
(2×KC3) − MP2 − (2×2KC3) − MP2 − (2×4KC3) − MP2 − (2×1024FC) − 10SVM.
Results are shown in Table 2. Again, the proposed LAB has the best performance. Moreover, as the number of filters increases, degradation due to binarization becomes less severe. This suggests
(a) MNIST. (b) CIFAR-10. (c) SVHN.
Figure 1: Convergence of LAB with feedforward neural networks.
that more powerful models (e.g., CNN with more filters, standard feedforward networks with more hidden units) are less susceptible to performance degradation due to binarization. We speculate that this is because large networks often have larger-than-needed capacities, and so are less affected by the limited expressiveness of binary weights. Another related reason is that binarization acts as regularization, and so contributes positively to the performance.
Table 2: Test error rates (%) on SVHN, for CNNs with different numbers of filters. The number in brackets is the difference between the errors of the binarized scheme and the full-precision network.

Model | K = 16 | K = 32 | K = 64 | K = 128
full-precision | 2.738 | 2.585 | 2.277 | 2.146
BinaryConnect | 3.200 (0.462) | 2.777 (0.192) | 2.450 (0.173) | 2.315 (0.169)
BWN | 3.119 (0.381) | 2.743 (0.158) | 2.535 (0.258) | 2.319 (0.173)
LAB | 3.050 (0.312) | 2.742 (0.157) | 2.354 (0.077) | 2.200 (0.054)
4.3 RECURRENT NEURAL NETWORKS
In this section, we perform experiments on the popular long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997). Performance is evaluated in the context of character-level language modeling. The LSTM takes as input a sequence of characters, and predicts the next character at each time step. The training objective is the cross-entropy loss over all target sequences. Following Karpathy et al. (2016), we use two data sets (with the same training/validation/test set splitting): (i) Leo Tolstoy's War and Peace, which consists of 3258246 characters of almost entirely English text with minimal markup and has a vocabulary size of 87; and (ii) the source code of the Linux Kernel, which consists of 6206996 characters and has a vocabulary size of 101.
We use a one-layer LSTM with 512 cells. The maximum number of epochs is 200, and the number of time steps is 100. The initial learning rate is 0.002. After 10 epochs, it is decayed by a factor of 0.98 after each epoch. The weights are initialized uniformly in [−0.08, 0.08]. After each iteration, the gradients are clipped to the range [−5, 5], and all the updated weights are clipped to [−1, 1]. For the weight-and-activation-binarized networks, we do not binarize the inputs, as they are one-hot vectors in this language modeling task.
Table 3 shows the testing cross-entropy values. As in Section 4.1, the proposed LAB outperforms other weight binarization schemes, and is even better than the full-precision network on the Linux Kernel data set. BinaryConnect does not work well here because of the problem of exploding gradients (see Section 3.2 and more results in Section 4.4). On the other hand, BWN and the proposed LAB scale the binary weight matrix and perform better. LAB also performs better than BWN as curvature information is considered. Similarly, among schemes that binarize both weights and activations, the proposed LAB2 also outperforms BNN and XNOR-Network.
4.4 VARYING THE NUMBER OF TIME STEPS IN LSTM
In this experiment, we study the sensitivity of the binarization schemes with varying numbers of unrolled time steps (TS) in LSTM. Results are shown in Table 4. Again, the proposed LAB has the best performance. When TS = 10, the LSTM is relatively shallow, and all binarization schemes have similar performance as the full-precision network. When TS ≥ 50, BinaryConnect fails,
Table 3: Testing cross-entropy values of LSTM.
Model | War and Peace | Linux Kernel
full-precision (no binarization) | 1.268 | 1.329
BinaryConnect (binarize weights) | 2.942 | 3.532
BWN (binarize weights) | 1.313 | 1.307
LAB (binarize weights) | 1.291 | 1.305
BNN (binarize weights and activations) | 3.050 | 3.624
XNOR (binarize weights and activations) | 1.424 | 1.426
LAB2 (binarize weights and activations) | 1.376 | 1.409
while BWN and the proposed LAB perform better (as discussed in Section 3.2). Figure 2 shows the distributions of the hidden-to-hidden weight gradients for TS = 10 and 100. As can be seen, while all models have similar gradient distributions at TS = 10, the gradient values in BinaryConnect are much higher than those of the other algorithms for the deeper network (TS = 100).
Table 4: Testing cross-entropy on War and Peace, for LSTMs with different time steps (TS). The difference between the cross-entropies of the binarized scheme and the full-precision network is shown in brackets.

Model | TS = 10 | TS = 50 | TS = 100 | TS = 150
full-precision | 1.527 | 1.310 | 1.268 | 1.249
BinaryConnect | 1.528 (0.001) | 2.980 (1.670) | 2.942 (1.674) | 2.872 (1.623)
BWN | 1.532 (0.005) | 1.325 (0.015) | 1.313 (0.045) | 1.311 (0.062)
LAB | 1.527 (0.000) | 1.324 (0.014) | 1.291 (0.023) | 1.285 (0.036)

(a) TS = 10. (b) TS = 100.
[Figure 2 panels: histograms of gradient magnitude (horizontal axis, log scale) versus percentage of elements (vertical axis), for full-precision, BinaryConnect, BWN and LAB.]
Figure 2: Distribution of weight gradients on War and Peace, for LSTMs with different time steps.
Note from Table 4 that as the time step increases, all except BinaryConnect show better performance. However, degradation due to binarization also becomes more severe. This is because the weights are shared across time steps. Hence, error due to binarization also propagates across time.
# 5 CONCLUSION
In this paper, we propose a binarization algorithm that directly considers its effect on the loss during binarization. The binarized weights are obtained using a proximal Newton algorithm with diagonal Hessian approximation. The proximal step has an efficient closed-form solution, and the second-order information in the Hessian can be readily obtained from the Adam optimizer. Experiments show that the proposed algorithm outperforms existing binarization schemes, has comparable performance as the original full-precision network, and is also robust for wide and deep networks.
ACKNOWLEDGMENTS
This research was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (Grant 614513). We thank Yongqi Zhang for helping with the experiments, and developers of Theano (Theano Development Team, 2016), Pylearn2 (Goodfellow et al., 2013) and Lasagne. We also thank NVIDIA for the support of Titan X GPU.
# REFERENCES
M. Courbariaux, Y. Bengio, and J.P. David. BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3105–3113, 2015.

Y. Dauphin, H. de Vries, and Y. Bengio. Equilibrated adaptive learning rates for non-convex optimization. In NIPS, pp. 1504–1512, 2015a.

Y. Dauphin, H. de Vries, J. Chung, and Y. Bengio. RMSprop and equilibrated adaptive learning rates for non-convex optimization. Technical Report arXiv:1502.04390, 2015b.

J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.

X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTAT, pp. 249–256, 2010.
Y. Gong, L. Liu, M. Yang, and L. Bourdev. Compressing deep convolutional networks using vector quantization. Technical Report arXiv:1412.6115, 2014.
I.J. Goodfellow, D. Warde-Farley, P. Lamblin, V. Dumoulin, M. Mirza, R. Pascanu, J. Bergstra, F. Bastien, and Y. Bengio. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013.
S. Han, H. Mao, and W.J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. In ICLR, 2016.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, pp. 1735–1780, 1997.

I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks. In NIPS, pp. 4107–4115, 2016.
A. Karpathy, J. Johnson, and F.-F. Li. Visualizing and understanding recurrent networks. In ICLR, 2016.
Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In ICLR, 2016.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

J.D. Lee, Y. Sun, and M.A. Saunders. Proximal Newton-type methods for minimizing composite functions. SIAM Journal on Optimization, 24(3):1420–1443, 2014.
F. Li and B. Liu. Ternary weight networks. Technical Report arXiv:1605.04711, 2016.
Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio. Neural networks with few multiplications. In ICLR, 2016.
J. Martens and I. Sutskever. Training deep and recurrent networks with Hessian-free optimization. In Neural Networks: Tricks of the Trade, pp. 479–535. Springer, 2012.

A. Novikov, D. Podoprikhin, A. Osokin, and D.P. Vetrov. Tensorizing neural networks. In NIPS, pp. 442–450, 2015.
R. Pascanu and Y. Bengio. Revisiting natural gradient for deep networks. In ICLR, 2014.
R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICLR, pp. 1310–1318, 2013.

A. Rakotomamonjy, R. Flamary, and G. Gasso. DC proximal Newton for nonconvex optimization problems. IEEE Transactions on Neural Networks and Learning Systems, 27(3):636–647, 2016.
M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, 2012.
A.L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). NIPS, 2:1033–1040, 2002.
M.D. Zeiler. ADADELTA: An adaptive learning rate method. Technical Report arXiv:1212.5701, 2012.
S. Zhou, Z. Ni, X. Zhou, H. Wen, Y. Wu, and Y. Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. Technical Report arXiv:1606.06160, 2016.
# A PROOF OF PROPOSITION 3.1
Denote $\|x\|_Q^2 = x^\top Q x$. The objective in (6) can be rewritten as

$$\nabla\ell(\hat{w}^{t-1})^\top (\hat{w}^t - \hat{w}^{t-1}) + \frac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top D^{t-1} (\hat{w}^t - \hat{w}^{t-1})$$
$$= \frac{1}{2}\sum_{l=1}^{L} \left\| \hat{w}_l^t - \hat{w}_l^{t-1} + \nabla_l\ell(\hat{w}^{t-1}) \oslash d_l^{t-1} \right\|_{D_l^{t-1}}^2 + c_1$$
$$= \frac{1}{2}\sum_{l=1}^{L} \left\| \hat{w}_l^t - w_l^t \right\|_{D_l^{t-1}}^2 + c_1 = \sum_{l=1}^{L}\sum_{i=1}^{n_l} \frac{1}{2}[d_l^{t-1}]_i \left( \alpha_l^t [b_l^t]_i - [w_l^t]_i \right)^2 + c_1,$$

where $c_1 = -\frac{1}{2}\sum_{l=1}^{L} \left\| \nabla_l\ell(\hat{w}^{t-1}) \oslash d_l^{t-1} \right\|_{D_l^{t-1}}^2$ is independent of $\alpha_l^t$ and $b_l^t$. Since $\alpha_l^t > 0$ and $d_l^{t-1} \succ 0$ for all $l = 1, 2, \dots, L$, we have $b_l^t = \mathrm{sign}(w_l^t)$. Moreover,

$$\sum_{l=1}^{L}\sum_{i=1}^{n_l} \frac{1}{2}[d_l^{t-1}]_i \left( \alpha_l^t [b_l^t]_i - [w_l^t]_i \right)^2 + c_1 = \frac{1}{2}\sum_{l=1}^{L} \left( \|d_l^{t-1}\|_1 (\alpha_l^t)^2 - 2\|d_l^{t-1} \odot w_l^t\|_1 \alpha_l^t \right) + c_2,$$

where $c_2 = c_1 + \frac{1}{2}\sum_{l=1}^{L}\|d_l^{t-1} \odot w_l^t \odot w_l^t\|_1$. Thus, the optimal $\alpha_l^t$ is $\|d_l^{t-1} \odot w_l^t\|_1 / \|d_l^{t-1}\|_1$.
# B PROOF OF THEOREM 3.1
Let $\alpha^t = [\alpha_1^t, \dots, \alpha_L^t]^\top$, and denote the objective in (5) by $F(\hat{w}, \alpha)$. As $\hat{w}^t$ is the minimizer in (6), we have

$$\ell(\hat{w}^{t-1}) + \nabla\ell(\hat{w}^{t-1})^\top (\hat{w}^t - \hat{w}^{t-1}) + \frac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top D^{t-1} (\hat{w}^t - \hat{w}^{t-1}) \le \ell(\hat{w}^{t-1}). \qquad (9)$$

From Assumption A1, we have

$$\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) + \nabla\ell(\hat{w}^{t-1})^\top (\hat{w}^t - \hat{w}^{t-1}) + \frac{\beta}{2}\|\hat{w}^t - \hat{w}^{t-1}\|_2^2. \qquad (10)$$

Using (9) and (10), we obtain

$$\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) - \frac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top (D^{t-1} - \beta I)(\hat{w}^t - \hat{w}^{t-1}) \le \ell(\hat{w}^{t-1}) - \frac{\min_{k,l}([d_l^{t-1}]_k - \beta)}{2}\|\hat{w}^t - \hat{w}^{t-1}\|_2^2.$$

Let $c_3 = \min_{k,l,t}([d_l^{t-1}]_k - \beta) > 0$. Then,

$$\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) - \frac{c_3}{2}\|\hat{w}^t - \hat{w}^{t-1}\|_2^2. \qquad (11)$$

From Assumption A2, $\ell$ is bounded from below. Together with the fact that $\{\ell(\hat{w}^t)\}$ is monotonically decreasing from (11), the sequence $\{\ell(\hat{w}^t)\}$ converges, and thus the sequence $\{F(\hat{w}^t, \alpha^t)\}$ also converges.
# C PROOF OF PROPOSITION 3.2
Let the singular values of $W$ be $\lambda_1(W) \ge \lambda_2(W) \ge \cdots \ge \lambda_m(W)$. Since $\sum_{i=1}^{m}\lambda_i^2(W) = \mathrm{tr}(W W^\top) = mn$, we have $\lambda_1^2(W) \ge \frac{1}{m}\sum_{i=1}^{m}\lambda_i^2(W) = n$. Thus, $\lambda_1(W) \ge \sqrt{n}$.
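A quick numerical sanity check of this bound (not from the paper): sample random ±1 matrices and verify that the largest singular value is at least √n:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 8  # m <= n, as in Proposition 3.2
for _ in range(1000):
    W = rng.choice([-1.0, 1.0], size=(m, n))
    # sum_i lambda_i^2 = tr(W W^T) = m*n, hence lambda_1^2 >= n.
    assert np.linalg.svd(W, compute_uv=False)[0] >= np.sqrt(n) - 1e-9
print("Proposition 3.2 holds on all sampled binary matrices.")
```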
arXiv:1611.01578v2 [cs.LG] 15 Feb 2017
Under review as a conference paper at ICLR 2017
# NEURAL ARCHITECTURE SEARCH WITH REINFORCEMENT LEARNING
# Barret Zoph*, Quoc V. Le Google Brain {barretzoph,qvl}@google.com
# ABSTRACT
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
# 1 INTRODUCTION
The last few years have seen much success of deep neural networks in many challenging applications, such as speech recognition (Hinton et al., 2012), image recognition (LeCun et al., 1998; Krizhevsky et al., 2012) and machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016). Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a). Although it has become easier, designing architectures still requires a lot of expert knowledge and takes ample time.
[Figure 1 diagram: the controller (RNN) samples an architecture A with probability p; a child network with architecture A is trained to obtain accuracy R; the gradient of p is then scaled by R to update the controller.]
Figure 1: An overview of Neural Architecture Search.
This paper presents Neural Architecture Search, a gradient-based method for finding good architectures (see Figure 1). Our work is based on the observation that the structure and connectivity of a
*Work done as a member of the Google Brain Residency program (g.co/brainresidency).
neural network can be typically specified by a variable-length string. It is therefore possible to use a recurrent network - the controller - to generate such a string. Training the network specified by the string - the "child network" - on the real data will result in an accuracy on a validation set. Using this accuracy as the reward signal, we can compute the policy gradient to update the controller. As a result, in the next iteration, the controller will give higher probabilities to architectures that receive high accuracies. In other words, the controller will learn to improve its search over time.
Our experiments show that Neural Architecture Search can design good models from scratch, an achievement considered not possible with other methods. On image recognition with CIFAR-10, Neural Architecture Search can find a novel ConvNet model that is better than most human-invented architectures. Our CIFAR-10 model achieves a 3.65 test set error, while being 1.05x faster than the current best model. On language modeling with Penn Treebank, Neural Architecture Search can design a novel recurrent cell that is also better than previous RNN and LSTM architectures. The cell that our model found achieves a test set perplexity of 62.4 on the Penn Treebank dataset, which is 3.6 perplexity better than the previous state-of-the-art.
# 2 RELATED WORK
Hyperparameter optimization is an important research topic in machine learning, and is widely used in practice (Bergstra et al., 2011; Bergstra & Bengio, 2012; Snoek et al., 2012; 2015; Saxena & Verbeek, 2016). Despite their success, these methods are still limited in that they only search models from a fixed-length space. In other words, it is difficult to ask them to generate a variable-length configuration that specifies the structure and connectivity of a network. In practice, these methods often work better if they are supplied with a good initial model (Bergstra & Bengio, 2012; Snoek et al., 2012; 2015). There are Bayesian optimization methods that allow searching non-fixed-length architectures (Bergstra et al., 2013; Mendoza et al., 2016), but they are less general and less flexible than the method proposed in this paper.
Modern neuro-evolution algorithms, e.g., Wierstra et al. (2005); Floreano et al. (2008); Stanley et al. (2009), on the other hand, are much more flexible for composing novel models, yet they are usually less practical at a large scale. Their limitations lie in the fact that they are search-based methods, thus they are slow or require many heuristics to work well.
Neural Architecture Search has some parallels to program synthesis and inductive programming, the idea of searching a program from examples (Summers, 1977; Biermann, 1978). In machine learning, probabilistic program induction has been used successfully in many settings, such as learning to solve simple Q&A (Liang et al., 2010; Neelakantan et al., 2015; Andreas et al., 2016), sort a list of numbers (Reed & de Freitas, 2015), and learning with very few examples (Lake et al., 2015).
The controller in Neural Architecture Search is auto-regressive, which means it predicts hyperparameters one at a time, conditioned on previous predictions. This idea is borrowed from the decoder in end-to-end sequence to sequence learning (Sutskever et al., 2014). Unlike sequence to sequence learning, our method optimizes a non-differentiable metric, which is the accuracy of the child network. It is therefore similar to the work on BLEU optimization in Neural Machine Translation (Ranzato et al., 2015; Shen et al., 2016). Unlike these approaches, our method learns directly from the reward signal without any supervised bootstrapping.
Also related to our work is the idea of learning to learn or meta-learning (Thrun & Pratt, 2012), a general framework of using information learned in one task to improve a future task. More closely related is the idea of using a neural network to learn the gradient descent updates for another network (Andrychowicz et al., 2016) and the idea of using reinforcement learning to find update policies for another network (Li & Malik, 2016).
# 3 METHODS
In the following section, we will first describe a simple method of using a recurrent network to generate convolutional architectures. We will show how the recurrent network can be trained with a policy gradient method to maximize the expected accuracy of the sampled architectures. We will present several improvements of our core approach such as forming skip connections to increase model complexity and using a parameter server approach to speed up training. In the last part of the section, we will focus on generating recurrent architectures, which is another key contribution of our paper.
3.1 GENERATE MODEL DESCRIPTIONS WITH A CONTROLLER RECURRENT NEURAL NETWORK
In Neural Architecture Search, we use a controller to generate architectural hyperparameters of neural networks. To be flexible, the controller is implemented as a recurrent neural network. Suppose we would like to predict feedforward neural networks with only convolutional layers; we can then use the controller to generate their hyperparameters as a sequence of tokens:
[Figure: the controller emits, one softmax per time step, the filter height, filter width, stride height, stride width, and number of filters for layer N-1, then layer N, then layer N+1, and so on.]
Figure 2: How our controller recurrent neural network samples a simple convolutional network. It predicts filter height, filter width, stride height, stride width, and number of filters for one layer and repeats. Every prediction is carried out by a softmax classifier and then fed into the next time step as input.
In our experiments, the process of generating an architecture stops if the number of layers exceeds a certain value. This value follows a schedule where we increase it as training progresses. Once the controller RNN finishes generating an architecture, a neural network with this architecture is built and trained. At convergence, the accuracy of the network on a held-out validation set is recorded. The parameters of the controller RNN, θc, are then optimized in order to maximize the expected validation accuracy of the proposed architectures. In the next section, we will describe a policy gradient method which we use to update the parameters θc so that the controller RNN generates better architectures over time.
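A minimal PyTorch sketch of such a controller is given below; the choice counts, hidden size, and the embedding feedback loop are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a controller RNN that samples per-layer hyperparameters one token
# at a time; each softmax prediction is fed back as the next input.
import torch
import torch.nn as nn

class ConvController(nn.Module):
    def __init__(self, choices=(4, 4, 3, 3, 4), hidden=35, embed=32):
        super().__init__()
        # one softmax head per hyperparameter: filter height/width, stride
        # height/width, number of filters (sizes here are illustrative)
        self.cell = nn.LSTMCell(embed, hidden)
        self.embeds = nn.ModuleList([nn.Embedding(n, embed) for n in choices])
        self.heads = nn.ModuleList([nn.Linear(hidden, n) for n in choices])

    def sample(self, num_layers):
        h = c = torch.zeros(1, self.cell.hidden_size)
        x = torch.zeros(1, self.embeds[0].embedding_dim)
        tokens, log_prob = [], 0.0
        for _ in range(num_layers):
            for embed, head in zip(self.embeds, self.heads):
                h, c = self.cell(x, (h, c))
                dist = torch.distributions.Categorical(logits=head(h))
                tok = dist.sample()
                log_prob = log_prob + dist.log_prob(tok)
                tokens.append(tok.item())
                x = embed(tok)  # the sampled token becomes the next input
        return tokens, log_prob
```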
# 3.2 TRAINING WITH REINFORCE
The list of tokens that the controller predicts can be viewed as a list of actions a1:T to design an architecture for a child network. At convergence, this child network will achieve an accuracy R on a held-out dataset. We can use this accuracy R as the reward signal and use reinforcement learning to train the controller. More concretely, to find the optimal architecture, we ask our controller to maximize its expected reward, represented by J(θc):
J(\theta_c) = E_{P(a_{1:T};\,\theta_c)}[R]
Since the reward signal R is non-differentiable, we need to use a policy gradient method to iteratively update θc. In this work, we use the REINFORCE rule from Williams (1992):
\nabla_{\theta_c} J(\theta_c) = \sum_{t=1}^{T} E_{P(a_{1:T};\,\theta_c)} \left[ \nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1};\, \theta_c) \, R \right]
An empirical approximation of the above quantity is:
\frac{1}{m} \sum_{k=1}^{m} \sum_{t=1}^{T} \nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1};\, \theta_c) \, R_k
where m is the number of different architectures that the controller samples in one batch and T is the number of hyperparameters our controller has to predict to design a neural network architecture.
The validation accuracy that the k-th neural network architecture achieves after being trained on a training dataset is Rk.
The above update is an unbiased estimate for our gradient, but has a very high variance. In order to reduce the variance of this estimate we employ a baseline function:
\frac{1}{m} \sum_{k=1}^{m} \sum_{t=1}^{T} \nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1};\, \theta_c) \, (R_k - b)
As long as the baseline function b does not depend on the current action, this is still an unbiased gradient estimate. In this work, our baseline b is an exponential moving average of the previous architecture accuracies.
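A sketch of this baselined update follows; it assumes each sampled architecture comes with the summed log-probability Σ_t log P(a_t | a_(t-1):1; θ_c) attached to the autograd graph, and the function names are illustrative.

```python
# Sketch of the baselined REINFORCE update over a batch of m architectures:
# grad ~ (1/m) * sum_k sum_t grad log P(a_t | a_(t-1):1; theta_c) * (R_k - b).

def reinforce_update(optimizer, rewards, log_probs, baseline, ema=0.95):
    # rewards: m validation accuracies; log_probs: m differentiable tensors,
    # each the summed log-probability of one sampled architecture.
    loss = 0.0
    for R, lp in zip(rewards, log_probs):
        loss = loss - (R - baseline) * lp    # minimize -J, i.e. maximize J
    loss = loss / len(rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for R in rewards:                        # moving-average baseline b
        baseline = ema * baseline + (1 - ema) * R
    return baseline
```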
Accelerate Training with Parallelism and Asynchronous Updates: In Neural Architecture Search, each gradient update to the controller parameters θc corresponds to training one child network to convergence. As training a child network can take hours, we use distributed training and asynchronous parameter updates in order to speed up the learning process of the controller (Dean et al., 2012). We use a parameter-server scheme where we have a parameter server of S shards that store the shared parameters for K controller replicas. Each controller replica samples m different child architectures that are trained in parallel. The controller then collects gradients according to the results of that minibatch of m architectures at convergence and sends them to the parameter server in order to update the weights across all controller replicas. In our implementation, convergence of each child network is reached when its training exceeds a certain number of epochs. This scheme of parallelism is summarized in Figure 3.
[Figure: S parameter servers at the top exchange parameters with K controller replicas; each controller replica spawns m child replicas and collects their accuracies R.]
Figure 3: Distributed training for Neural Architecture Search. We use a set of S parameter servers to store and send parameters to K controller replicas. Each controller replica then samples m architectures and runs the m child models in parallel. The accuracy of each child model is recorded to compute the gradients with respect to θc, which are then sent back to the parameter servers.
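The per-replica logic can be sketched as follows; the thread pool and the `policy_gradient` and `send_grads` callables are illustrative placeholders for the asynchronous parameter-server machinery, not the actual distributed implementation.

```python
# Conceptual sketch of one controller replica's step: sample m children,
# train them in parallel, and ship the resulting policy gradient to the
# parameter servers.
from concurrent.futures import ThreadPoolExecutor

def replica_step(controller, build, train_and_eval, policy_gradient, send_grads, m=8):
    samples = [controller.sample() for _ in range(m)]       # m architectures
    with ThreadPoolExecutor(max_workers=m) as pool:         # m children in parallel
        rewards = list(pool.map(lambda s: train_and_eval(build(s[0])), samples))
    send_grads(policy_gradient(samples, rewards))           # async update of theta_c
```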
3.3 INCREASE ARCHITECTURE COMPLEXITY WITH SKIP CONNECTIONS AND OTHER LAYER TYPES
In Section 3.1, the search space does not have skip connections, or branching layers used in modern architectures such as GoogleNet (Szegedy et al., 2015), and Residual Net (He et al., 2016a). In this section we introduce a method that allows our controller to propose skip connections or branching layers, thereby widening the search space.
To enable the controller to predict such connections, we use a set-selection type attention (Neelakantan et al., 2015) which was built upon the attention mechanism (Bahdanau et al., 2015; Vinyals et al., 2015). At layer N, we add an anchor point which has N-1 content-based sigmoids to indicate the previous layers that need to be connected. Each sigmoid is a function of the current hidden state of the controller and the hidden states of the previous N-1 anchor points:

P(\text{Layer } j \text{ is an input to layer } i) = \mathrm{sigmoid}(v^{\top} \tanh(W_{prev} \cdot h_j + W_{curr} \cdot h_i)),

where h_j represents the hidden state of the controller at the anchor point for the j-th layer, and j ranges from 0 to N-1. We then sample from these sigmoids to decide which previous layers should be used as inputs to the current layer. The matrices W_{prev}, W_{curr} and v are trainable parameters. As these connections are also defined by probability distributions, the REINFORCE method still applies without any significant modifications. Figure 4 shows how the controller uses skip connections to decide what layers it wants as inputs to the current layer.
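A PyTorch sketch of this set-selection attention is given below; the module and method names are illustrative, and the sampled Bernoulli mask contributes its log-probability to the REINFORCE objective just like the softmax predictions.

```python
# Sketch of the anchor-point mechanism: for layer N, each previous anchor state
# h_j yields P(layer j feeds layer N) = sigmoid(v^T tanh(W_prev h_j + W_curr h_N)).
import torch
import torch.nn as nn

class SkipAttention(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.w_prev = nn.Linear(hidden, hidden, bias=False)
        self.w_curr = nn.Linear(hidden, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)

    def sample_connections(self, prev_anchors, h_curr):
        # prev_anchors: (N-1, hidden) stacked anchor states; h_curr: (hidden,)
        logits = self.v(torch.tanh(self.w_prev(prev_anchors)
                                   + self.w_curr(h_curr))).squeeze(-1)
        dist = torch.distributions.Bernoulli(logits=logits)
        mask = dist.sample()                    # which previous layers to connect
        return mask, dist.log_prob(mask).sum()  # log-prob feeds into REINFORCE
```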
[Figure: the controller inserts an anchor point after each layer's hyperparameter predictions; at layer N, the anchor point attends over the previous anchor points to sample up to N-1 skip connections.]
Figure 4: The controller uses anchor points and set-selection attention to form skip connections.
In our framework, if one layer has many input layers then all input layers are concatenated in the depth dimension. Skip connections can cause "compilation failures" where one layer is not compatible with another layer, or one layer may not have any input or output. To circumvent these issues, we employ three simple techniques. First, if a layer is not connected to any input layer then the image is used as the input layer. Second, at the final layer we take all layer outputs that have not been connected and concatenate them before sending this final hidden state to the classifier. Lastly, if input layers to be concatenated have different sizes, we pad the small layers with zeros so that the concatenated layers have the same sizes.
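The third rule can be illustrated with a small helper that zero-pads spatial mismatches before depth-wise concatenation; this is a toy sketch, not the framework's actual compilation logic.

```python
# Zero-pad layer outputs of different spatial sizes, then concatenate them in
# the depth (channel) dimension, as described above.
import torch
import torch.nn.functional as F

def concat_depthwise(inputs):
    # inputs: list of (batch, channels, height, width) tensors
    h = max(t.shape[2] for t in inputs)
    w = max(t.shape[3] for t in inputs)
    padded = [F.pad(t, (0, w - t.shape[3], 0, h - t.shape[2])) for t in inputs]
    return torch.cat(padded, dim=1)
```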
Finally, in Section 3.1, we do not predict the learning rate and we also assume that the architectures consist of only convolutional layers, which is also quite restrictive. It is possible to add the learning rate as one of the predictions. Additionally, it is also possible to predict pooling, local contrast normalization (Jarrett et al., 2009; Krizhevsky et al., 2012), and batchnorm (Ioffe & Szegedy, 2015) in the architectures. To be able to add more types of layers, we need to add an additional step in the controller RNN to predict the layer type, then other hyperparameters associated with it.
3.4 GENERATE RECURRENT CELL ARCHITECTURES
In this section, we will modify the above method to generate recurrent cells. At every time step t, the controller needs to find a functional form for h_t that takes x_t and h_{t-1} as inputs. The simplest way is to have h_t = tanh(W_1 · x_t + W_2 · h_{t-1}), which is the formulation of a basic recurrent cell. A more complicated formulation is the widely-used LSTM recurrent cell (Hochreiter & Schmidhuber, 1997).
The computations for basic RNN and LSTM cells can be generalized as a tree of steps that take x_t and h_{t-1} as inputs and produce h_t as final output. The controller RNN needs to label each node in the tree with a combination method (addition, elementwise multiplication, etc.) and an activation function (tanh, sigmoid, etc.) to merge two inputs and produce one output. Two outputs are then fed as inputs to the next node in the tree. To allow the controller RNN to select these methods and functions, we index the nodes in the tree in an order so that the controller RNN can visit each node one by one and label the needed hyperparameters.
Inspired by the construction of the LSTM cell (Hochreiter & Schmidhuber, 1997), we also need cell variables c_{t-1} and c_t to represent the memory states. To incorporate these variables, we need the controller RNN to predict what nodes in the tree to connect these two variables to. These predictions can be done in the last two blocks of the controller RNN.
To make this process more clear, we show an example in Figure 5, for a tree structure that has two leaf nodes and one internal node. The leaf nodes are indexed by 0 and 1, and the internal node is indexed by 2. The controller RNN needs to first predict 3 blocks, each block specifying a combination method and an activation function for each tree index. After that it needs to predict the last 2 blocks that specify how to connect c_t and c_{t-1} to temporary variables inside the tree. Specifically,
Figure 5: An example of a recurrent cell constructed from a tree that has two leaf nodes (base 2) and one internal node. Left: the tree that defines the computation steps to be predicted by the controller. Center: an example set of predictions made by the controller for each computation step in the tree. Right: the computation graph of the recurrent cell constructed from the example predictions of the controller.
according to the predictions of the controller RNN in this example, the following computation steps will occur:
• The controller predicts Add and Tanh for tree index 0; this means we need to compute a_0 = tanh(W_1 · x_t + W_2 · h_{t-1}).

• The controller predicts ElemMult and ReLU for tree index 1; this means we need to compute a_1 = ReLU((W_3 · x_t) ⊙ (W_4 · h_{t-1})).

• The controller predicts 0 for the second element of the "Cell Index", and Add and ReLU for the elements in "Cell Inject", which means we need to compute a_0^{new} = ReLU(a_0 + c_{t-1}). Notice that we don't have any learnable parameters for the internal nodes of the tree.

• The controller predicts ElemMult and Sigmoid for tree index 2; this means we need to compute a_2 = sigmoid(a_0^{new} ⊙ a_1). Since the maximum index in the tree is 2, h_t is set to a_2.

• The controller RNN predicts 1 for the first element of the "Cell Index"; this means that we should set c_t to the output of the tree at index 1 before the activation, i.e., c_t = (W_3 · x_t) ⊙ (W_4 · h_{t-1}).
In the above example, the tree has two leaf nodes, thus it is called a "base 2" architecture. In our experiments, we use a base number of 8 to make sure that the cell is expressive.
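The five predictions above translate directly into the following computation; this is a plain NumPy transcription of the base-2 example, with the weight matrices W1-W4 supplied by the caller.

```python
# The base-2 example cell, computed exactly as described in the bullet points.
import numpy as np

def example_cell(x_t, h_prev, c_prev, W1, W2, W3, W4):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    relu = lambda z: np.maximum(z, 0.0)
    a0 = np.tanh(W1 @ x_t + W2 @ h_prev)    # tree index 0: Add, Tanh
    pre1 = (W3 @ x_t) * (W4 @ h_prev)       # tree index 1, before the activation
    a1 = relu(pre1)                         # tree index 1: ElemMult, ReLU
    a0_new = relu(a0 + c_prev)              # "Cell Inject": Add, ReLU at index 0
    h_t = sigmoid(a0_new * a1)              # tree index 2: ElemMult, Sigmoid
    c_t = pre1                              # "Cell Index" 1: output before activation
    return h_t, c_t
```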
# 4 EXPERIMENTS AND RESULTS
We apply our method to an image classification task with CIFAR-10 and a language modeling task with Penn Treebank, two of the most benchmarked datasets in deep learning. On CIFAR-10, our goal is to find a good convolutional architecture whereas on Penn Treebank our goal is to find a good recurrent cell. On each dataset, we have a separate held-out validation dataset to compute the reward signal. The reported performance on the test set is computed only once for the network that achieves the best result on the held-out validation dataset. More details about our experimental procedures and results are as follows.
4.1 LEARNING CONVOLUTIONAL ARCHITECTURES FOR CIFAR-10
Dataset: In these experiments we use the CIFAR-10 dataset with data preprocessing and augmentation procedures that are in line with other previous results. We first preprocess the data by whitening all the images. Additionally, we upsample each image then choose a random 32x32 crop of this upsampled image. Finally, we use random horizontal flips on this 32x32 cropped image.
Search space: Our search space consists of convolutional architectures, with rectified linear units as non-linearities (Nair & Hinton, 2010), batch normalization (Ioffe & Szegedy, 2015) and skip connections between layers (Section 3.3). For every convolutional layer, the controller RNN has to select a filter height in [1, 3, 5, 7], a filter width in [1, 3, 5, 7], and a number of filters in [24, 36, 48, 64]. For strides, we perform two sets of experiments, one where we fix the strides to be 1, and one where we allow the controller to predict the strides in [1, 2, 3].
Training details: The controller RNN is a two-layer LSTM with 35 hidden units on each layer. It is trained with the ADAM optimizer (Kingma & Ba, 2015) with a learning rate of 0.0006. The weights of the controller are initialized uniformly between -0.08 and 0.08. For the distributed training, we set the number of parameter server shards S to 20, the number of controller replicas K to 100 and the number of child replicas m to 8, which means there are 800 networks being trained on 800 GPUs concurrently at any time.
Once the controller RNN samples an architecture, a child model is constructed and trained for 50 epochs. The reward used for updating the controller is the maximum validation accuracy of the last 5 epochs, cubed. The validation set has 5,000 examples randomly sampled from the training set; the remaining 45,000 examples are used for training. The settings for training the CIFAR-10 child models are the same as those used in Huang et al. (2016a). We use the Momentum Optimizer with a learning rate of 0.1, weight decay of 1e-4, momentum of 0.9 and used Nesterov Momentum (Sutskever et al., 2013).
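In code, this reward shaping is a one-liner; the cubing sharpens the differences between good and very good architectures.

```python
# Reward for a CIFAR-10 child model, as described above: the maximum
# validation accuracy over the last 5 epochs, cubed.
def cifar10_reward(val_accuracies):
    return max(val_accuracies[-5:]) ** 3
```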
During the training of the controller, we use a schedule of increasing number of layers in the child networks as training progresses. On CIFAR-10, we ask the controller to increase the depth by 2 for the child models every 1,600 samples, starting at 6 layers.
Results: After the controller trains 12,800 architectures, we find the architecture that achieves the best validation accuracy. We then run a small grid search over learning rate, weight decay, batchnorm epsilon and what epoch to decay the learning rate. The best model from this grid search is then run until convergence, and we then compute the test accuracy of that model and summarize the results in Table 1. As can be seen from the table, Neural Architecture Search can design several promising architectures that perform as well as some of the best models on this dataset.
| Model | Depth | Parameters | Error rate (%) |
| --- | --- | --- | --- |
| Network in Network (Lin et al., 2013) | - | - | 8.81 |
| All-CNN (Springenberg et al., 2014) | - | - | 7.25 |
| Deeply Supervised Net (Lee et al., 2015) | - | - | 7.97 |
| Highway Network (Srivastava et al., 2015) | - | - | 7.72 |
| Scalable Bayesian Optimization (Snoek et al., 2015) | - | - | 6.37 |
| FractalNet (Larsson et al., 2016) | 21 | 38.6M | 5.22 |
| FractalNet with Dropout/Drop-path | 21 | 38.6M | 4.60 |
| ResNet (He et al., 2016a) | 110 | 1.7M | 6.61 |
| ResNet (reported by Huang et al. (2016c)) | 110 | 1.7M | 6.41 |
| ResNet with Stochastic Depth (Huang et al., 2016c) | 110 | 1.7M | 5.23 |
| ResNet with Stochastic Depth (Huang et al., 2016c) | 1202 | 10.2M | 4.91 |
| Wide ResNet (Zagoruyko & Komodakis, 2016) | 16 | 11.0M | 4.81 |
| Wide ResNet (Zagoruyko & Komodakis, 2016) | 28 | 36.5M | 4.17 |
| ResNet (pre-activation) (He et al., 2016b) | 164 | 1.7M | 5.46 |
| ResNet (pre-activation) (He et al., 2016b) | 1001 | 10.2M | 4.62 |
| DenseNet (L = 40, k = 12) Huang et al. (2016a) | 40 | 1.0M | 5.24 |
| DenseNet (L = 100, k = 12) Huang et al. (2016a) | 100 | 7.0M | 4.10 |
| DenseNet (L = 100, k = 24) Huang et al. (2016a) | 100 | 27.2M | 3.74 |
| DenseNet-BC (L = 100, k = 40) Huang et al. (2016b) | 190 | 25.6M | 3.46 |
| Neural Architecture Search v1 no stride or pooling | 15 | 4.2M | 5.50 |
| Neural Architecture Search v2 predicting strides | 20 | 2.5M | 6.01 |
| Neural Architecture Search v3 max pooling | 39 | 7.1M | 4.47 |
| Neural Architecture Search v3 max pooling + more filters | 39 | 37.4M | 3.65 |
Table 1: Performance of Neural Architecture Search and other state-of-the-art models on CIFAR-10.
First, if we ask the controller to not predict stride or pooling, it can design a 15-layer architecture that achieves a 5.50% error rate on the test set. This architecture has a good balance between accuracy and depth. In fact, it is the shallowest and perhaps the most inexpensive architecture among the top performing networks in this table. This architecture is shown in Appendix A, Figure 7. A notable feature of this architecture is that it has many rectangular filters and it prefers larger filters at the top layers. Like residual networks (He et al., 2016a), the architecture also has many one-step skip connections. This architecture is a local optimum in the sense that if we perturb it, its performance becomes worse. For example, if we densely connect all layers with skip connections, its performance becomes slightly worse: 5.56%. If we remove all skip connections, its performance drops to 7.97%.
In the second set of experiments, we ask the controller to predict strides in addition to other hyperparameters. As stated earlier, this is more challenging because the search space is larger. In this case, it finds a 20-layer architecture that achieves a 6.01% error rate on the test set, which is not much worse than the first set of experiments.
Finally, if we allow the controller to include 2 pooling layers at layer 13 and layer 24 of the architectures, the controller can design a 39-layer network that achieves 4.47%, which is very close to the best human-invented architecture that achieves 3.74%. To limit the search space complexity we have our model predict 13 layers where each layer prediction is a fully connected block of 3 layers. Additionally, we change the number of filters our model can predict from [24, 36, 48, 64] to [6, 12, 24, 36]. Our result can be improved to 3.65% by adding 40 more filters to each layer of our architecture. Additionally, this model with 40 filters added is 1.05x as fast as the DenseNet model that achieves 3.74%, while having better performance. The DenseNet model that achieves a 3.46% error rate (Huang et al., 2016b) uses 1x1 convolutions to reduce its total number of parameters, which we did not do, so it is not an exact comparison.
4.2 LEARNING RECURRENT CELLS FOR PENN TREEBANK
Dataset: We apply Neural Architecture Search to the Penn Treebank dataset, a well-known benchmark for language modeling. On this task, LSTM architectures tend to excel (Zaremba et al., 2014; Gal, 2015), and improving them is difficult (Jozefowicz et al., 2015). As PTB is a small dataset, regularization methods are needed to avoid overfitting. First, we make use of the embedding dropout and recurrent dropout techniques proposed in Zaremba et al. (2014) and Gal (2015). We also try to combine them with the method of sharing Input and Output embeddings, e.g., Bengio et al. (2003); Mnih & Hinton (2007), especially Inan et al. (2016) and Press & Wolf (2016). Results with this method are marked with "shared embeddings."
Search space: Following Section 3.4, our controller sequentially predicts a combination method then an activation function for each node in the tree. For each node in the tree, the controller RNN needs to select a combination method in [add, elem_mult] and an activation method in [identity, tanh, sigmoid, relu]. The number of input pairs to the RNN cell is called the "base number" and is set to 8 in our experiments. When the base number is 8, the search space has approximately 6 × 10^16 architectures, which is much larger than 15,000, the number of architectures that we allow our controller to evaluate.
Training details: The controller and its training are almost identical to the CIFAR-10 experiments except for a few modifications: 1) the learning rate for the controller RNN is 0.0005, slightly smaller than that of the controller RNN in CIFAR-10, 2) in the distributed training, we set S to 20, K to 400 and m to 1, which means there are 400 networks being trained on 400 CPUs concurrently at any time, 3) during asynchronous training we only do parameter updates to the parameter server once 10 gradients from replicas have been accumulated.
In our experiments, every child model is constructed and trained for 35 epochs. Every child model has two layers, with the number of hidden units adjusted so that the total number of learnable parameters approximately matches the "medium" baselines (Zaremba et al., 2014; Gal, 2015). In these experiments we only have the controller predict the RNN cell structure and fix all other hyperparameters. The reward function is c / (validation perplexity)^2, where c is a constant, usually set at 80.
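Assuming the c/(perplexity^2) form above, the reward computation is simply:

```python
# Reward for a PTB child model, assuming the c / (validation perplexity)^2
# form stated above, with c = 80.
def ptb_reward(val_perplexity, c=80.0):
    return c / (val_perplexity ** 2)
```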
After the controller RNN is done training, we take the best RNN cell according to the lowest validation perplexity and then run a grid search over learning rate, weight initialization, dropout rates and decay epoch. The best cell found was then run with three different configurations and sizes to increase its capacity.
Results: In Table 2, we provide a comprehensive list of architectures and their performance on the PTB dataset. As can be seen from the table, the models found by Neural Architecture Search outperform other state-of-the-art models on this dataset, and one of our best models achieves a gain of almost 3.6 perplexity. Not only is our cell better, the model that achieves 64 perplexity is also more than two times faster because the previous best network requires running a cell 10 times per time step (Zilly et al., 2016).
| Model | Parameters | Test Perplexity |
| --- | --- | --- |
| Mikolov & Zweig (2012) - KN-5 | 2M‡ | 141.2 |
| Mikolov & Zweig (2012) - KN5 + cache | 2M‡ | 125.7 |
| Mikolov & Zweig (2012) - RNN | 6M‡ | 124.7 |
| Mikolov & Zweig (2012) - RNN-LDA | 7M‡ | 113.7 |
| Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache | 9M‡ | 92.0 |
| Pascanu et al. (2013) - Deep RNN | 6M | 107.5 |
| Cheng et al. (2014) - Sum-Prod Net | 5M‡ | 100.0 |
| Zaremba et al. (2014) - LSTM (medium) | 20M | 82.7 |
| Zaremba et al. (2014) - LSTM (large) | 66M | 78.4 |
| Gal (2015) - Variational LSTM (medium, untied) | 20M | 79.7 |
| Gal (2015) - Variational LSTM (medium, untied, MC) | 20M | 78.6 |
| Gal (2015) - Variational LSTM (large, untied) | 66M | 75.2 |
| Gal (2015) - Variational LSTM (large, untied, MC) | 66M | 73.4 |
| Kim et al. (2015) - CharCNN | 19M | 78.9 |
| Press & Wolf (2016) - Variational LSTM, shared embeddings | 51M | 73.2 |
| Merity et al. (2016) - Zoneout + Variational LSTM (medium) | 20M | 80.6 |
| Merity et al. (2016) - Pointer Sentinel-LSTM (medium) | 21M | 70.9 |
| Inan et al. (2016) - VD-LSTM + REAL (large) | 51M | 68.5 |
| Zilly et al. (2016) - Variational RHN, shared embeddings | 24M | 66.0 |
| Neural Architecture Search with base 8 | 32M | 67.9 |
| Neural Architecture Search with base 8 and shared embeddings | 25M | 64.0 |
| Neural Architecture Search with base 8 and shared embeddings | 54M | 62.4 |
Table 2: Single model perplexity on the test set of the Penn Treebank language modeling task. Parameter numbers with ‡ are estimates with reference to Merity et al. (2016).
The newly discovered cell is visualized in Figure 8 in Appendix A. The visualization reveals that the new cell has many similarities to the LSTM cell in the first few steps; for example, it likes to compute W_1 · h_{t-1} + W_2 · x_t several times and send the results to different components in the cell.
Transfer Learning Results: To understand whether the cell can generalize to a different task, we apply it to the character language modeling task on the same dataset. We use an experimental setup that is similar to Ha et al. (2016), but use variational dropout by Gal (2015). We also train our own LSTM with our setup to get a fair LSTM baseline. Models are trained for 80K steps and the best test set perplexity is taken according to the step where validation set perplexity is the best. The results of our method and state-of-the-art methods on the test set are reported in Table 3. The results on small settings with 5-6M parameters confirm that the new cell does indeed generalize, and is better than the LSTM cell.
Additionally, we carry out a larger experiment where the model has 16.28M parameters. This model has a weight decay rate of 1e-4, was trained for 600K steps (longer than the above models), and the test perplexity is taken where the validation set perplexity is lowest. We use dropout rates of 0.2 and 0.5 as described in Gal (2015), but do not use embedding dropout. We use the ADAM optimizer with a learning rate of 0.001 and an input embedding size of 128. Our model had two layers with 800 hidden units. We used a minibatch size of 32 and a BPTT length of 100. With this setting, our model achieves 1.214 perplexity, which is the new state-of-the-art result on this task.
Finally, we also drop our cell into the GNMT framework (Wu et al., 2016), which was previously tuned for LSTM cells, and train a WMT14 English-to-German translation model.
| RNN Cell Type | Parameters | Test Perplexity |
| --- | --- | --- |
| Ha et al. (2016) - Layer Norm HyperLSTM | 4.92M | 1.250 |
| Ha et al. (2016) - Layer Norm HyperLSTM Large Embeddings | 5.06M | 1.233 |
| Ha et al. (2016) - 2-Layer Norm HyperLSTM | 14.41M | 1.219 |
| Two layer LSTM | 6.57M | 1.243 |
| Two Layer with New Cell | 6.57M | 1.228 |
| Two Layer with New Cell | 16.28M | 1.214 |
Table 3: Comparison between our cell and state-of-the-art methods on PTB character modeling. The new cell was found on the word-level language modeling task.
The GNMT network has 8 layers in the encoder and 8 layers in the decoder. The first layer of the encoder has bidirectional connections. The attention module is a neural network with 1 hidden layer. When an LSTM cell is used, the number of hidden units in each layer is 1024. The model is trained in a distributed setting with a parameter server and 12 workers. Additionally, each worker uses 8 GPUs and a minibatch of 128. We use Adam with a learning rate of 0.0002 in the first 60K training steps, and SGD with a learning rate of 0.5 until 400K steps. After that the learning rate is annealed by dividing by 2 after every 100K steps until it reaches 0.1. Training is stopped at 800K steps. More details can be found in Wu et al. (2016).
In our experiment with the new cell, we make no change to the above settings except for dropping in the new cell and adjusting the hyperparameters so that the new model has approximately the same computational complexity as the base model. The result shows that our cell, with the same computational complexity, achieves an improvement of 0.5 test set BLEU over the default LSTM cell. Though this improvement is not huge, the fact that the new cell can be used without any tuning on the existing GNMT framework is encouraging. We expect further tuning can help our cell perform better.
Control Experiment 1 - Adding more functions in the search space: To test the robustness of Neural Architecture Search, we add max to the list of combination functions and sin to the list of activation functions and rerun our experiments. The results show that even with a bigger search space, the model can achieve somewhat comparable performance. The best architecture with max and sin is shown in Figure 8 in Appendix A.
Control Experiment 2 - Comparison against Random Search: Instead of policy gradient, one can use random search to find the best network. Although this baseline seems simple, it is often very hard to surpass (Bergstra & Bengio, 2012). We report the perplexity improvements of policy gradient over random search as training progresses in Figure 6. The results show that not only is the best model found with policy gradient better than the best model found with random search, but the average of the top models is also much better.
[Figure: perplexity improvement (y-axis) over iterations 0 to 25,000 (x-axis), with curves for the top 1, top 5, and top 15 unique models.]
Figure 6: Improvement of Neural Architecture Search over random search over time. We plot the difference between the average of the top k models our controller finds vs. random search every 400 models run.
# 5 CONCLUSION
In this paper we introduce Neural Architecture Search, an idea of using a recurrent neural network to compose neural network architectures. By using a recurrent network as the controller, our method is flexible so that it can search a variable-length architecture space. Our method has strong empirical performance on very challenging benchmarks and presents a new research direction for automatically finding good neural network architectures. The code for running the models found by the controller on CIFAR-10 and PTB will be released at https://github.com/tensorflow/models. Additionally, we have added the RNN cell found using our method under the name NASCell into TensorFlow, so others can easily use it.
ACKNOWLEDGMENTS
We thank Greg Corrado, Jeff Dean, David Ha, Lukasz Kaiser and the Google Brain team for their help with the project.
# REFERENCES
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In NAACL, 2016.
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. JMLR, 2003.
James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. JMLR, 2012.
James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In NIPS, 2011.
James Bergstra, Daniel Yamins, and David D. Cox. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. ICML, 2013.
Alan W. Biermann. The inference of regular LISP programs from examples. IEEE transactions on Systems, Man, and Cybernetics, 1978.
Wei-Chen Cheng, Stanley Kok, Hoai Vu Pham, Hai Leong Chieu, and Kian Ming Adam Chai. Language modeling with sum-product networks. In INTERSPEECH, 2014.
Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, et al. Large scale distributed deep networks. In NIPS, 2012.
Dario Floreano, Peter Dürr, and Claudio Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 2008.
Yarin Gal. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015.
David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.
Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.
Sepp Hochreiter and Juergen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.
Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016b.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016c.
Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
Kevin Jarrett, Koray Kavukcuoglu, Yann Lecun, et al. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In ICML, 2015.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.
Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, 2015.
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In ICML, 2010.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2013.
David G. Lowe. Object recognition from local scale-invariant features. In CVPR, 1999.
Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Towards automatically-tuned neural networks. In Proceedings of the 2016 Workshop on Automatic Machine Learning, pp. 58-65, 2016.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, pp. 234-239, 2012.
Andriy Mnih and Geoffrey Hinton. Three new graphical models for statistical language modelling. In ICML, 2007.
Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In ICLR, 2015.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013.
Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Scott Reed and Nando de Freitas. Neural programmer-interpreters. In ICLR, 2015.
Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In NIPS, 2016.
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. In ACL, 2016.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012.
Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Mostofa Patwary, Mostofa Ali, Ryan P. Adams, et al. Scalable bayesian optimization using deep neural networks. In ICML, 2015.
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 2009.
Phillip D. Summers. A methodology for LISP program construction from examples. Journal of the ACM, 1977.
Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015.
Daan Wierstra, Faustino J. Gomez, and Jürgen Schmidhuber. Modeling systems with internal state using evolino. In GECCO, 2005.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning, 1992.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
# A APPENDIX
[Figure: the discovered 15-layer convolutional architecture, from the input image at the bottom to the softmax at the top. Layer specifications, top to bottom: FH:7 FW:5 N:48; FH:7 FW:5 N:48; FH:7 FW:5 N:48; FH:7 FW:7 N:48; FH:5 FW:7 N:36; FH:7 FW:7 N:36; FH:7 FW:1 N:36; FH:7 FW:3 N:36; FH:7 FW:7 N:48; FH:7 FW:7 N:48; FH:3 FW:7 N:48; FH:5 FW:5 N:36; FH:3 FW:3 N:36; FH:3 FW:3 N:48; FH:3 FW:3 N:36.]
Figure 7: Convolutional architecture discovered by our method, when the search space does not have strides or pooling layers. FH is filter height, FW is filter width and N is the number of filters. Note that the skip connections are not residual connections. If one layer has many input layers then all input layers are concatenated in the depth dimension.
[Figure: node-by-node operations of the discovered cells, combining elem_mult, add, and identity operations with tanh and sigmoid activations.]
Figure 8: A comparison of the original LSTM cell vs. two good cells our model found. Top left: LSTM cell. Top right: Cell found by our model when the search space does not include max and sin. Bottom: Cell found by our model when the search space includes max and sin (the controller did not choose to use the sin function).
1611.01603 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621

arXiv:1611.01603v6 [cs.CL] 21 Jun 2018
Published as a conference paper at ICLR 2017
# BI-DIRECTIONAL ATTENTION FLOW FOR MACHINE COMPREHENSION
Minjoon Seo1*, Aniruddha Kembhavi2, Ali Farhadi1,2, Hannaneh Hajishirzi1
University of Washington1, Allen Institute for Artificial Intelligence2
{minjoon,ali,hannaneh}@cs.washington.edu, {anik}@allenai.org
# ABSTRACT
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
# 1 INTRODUCTION
The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety of tasks in the text and image domains. One of the key factors to the advancement has been the use of neural attention mechanisms, which enable the system to focus on a targeted area within a context paragraph (for MC) or within an image (for Visual QA) that is most relevant to answer the question (Weston et al., 2015; Antol et al., 2015; Xiong et al., 2016a). Attention mechanisms in previous works typically have one or more of the following characteristics. First, the computed attention weights are often used to extract the most relevant information from the context for answering the question by summarizing the context into a fixed-size vector. Second, in the text domain, they are often temporally dynamic, whereby the attention weights at the current time step are a function of the attended vector at the previous time step. Third, they are usually uni-directional, wherein the query attends on the context paragraph or the image.
In this paper, we introduce the Bi-Directional Attention Flow (BIDAF) network, a hierarchical multi-stage architecture for modeling the representations of the context paragraph at different levels of granularity (Figure 1). BIDAF includes character-level, word-level, and contextual embeddings, and uses bi-directional attention flow to obtain a query-aware context representation. Our attention mechanism offers the following improvements to the previously popular attention paradigms. First, our attention layer is not used to summarize the context paragraph into a fixed-size vector. Instead, the attention is computed for every time step, and the attended vector at each time step, along with the representations from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. Second, we use a memory-less attention mechanism. That is, while we iteratively compute attention through time as in Bahdanau et al. (2015), the attention at each time step is a function of only the query and the context paragraph at the current time step and does not directly depend on the attention at the previous time step. We hypothesize that this simplification leads to a division of labor between the attention layer and the modeling layer. It forces the attention layer to focus on learning the attention between the query and the context, and enables the modeling layer to focus on learning the interaction within the
*The majority of the work was done while the author was interning at the Allen Institute for AI.
[Figure: the BiDAF architecture. From bottom to top: character and word embedding layers over the context x_1..x_T (via Char-CNN and GloVe) and the query q_1..q_J; the contextual embedding layer (h_1..h_T, u_1..u_J); the attention flow layer with Query2Context and Context2Query attention producing g_1..g_T; the modeling layer (m_1..m_T); and the output layer predicting the start and end indices.]
Figure 1: BiDirectional Attention Flow Model (best viewed in color)
query-aware context representation (the output of the attention layer). It also allows the attention at each time step to be unaffected by incorrect attendances at previous time steps. Our experiments show that memory-less attention gives a clear advantage over dynamic attention. Third, we use attention mechanisms in both directions, query-to-context and context-to-query, which provide complementary information to each other.
Our BIDAF model1 outperforms all previous approaches on the highly-competitive Stanford Question Answering Dataset (SQuAD) test set leaderboard at the time of submission. With a modification to only the output layer, BIDAF achieves the state-of-the-art results on the CNN/DailyMail cloze test. We also provide an in-depth ablation study of our model on the SQuAD development set, visualize the intermediate feature spaces in our model, and analyse its performance as compared to a more traditional language model for machine comprehension (Rajpurkar et al., 2016).
2 MODEL
Our machine comprehension model is a hierarchical multi-stage process and consists of six layers (Figure 1):
1. Character Embedding Layer maps each word to a vector space using character-level CNNs.
2. Word Embedding Layer maps each word to a vector space using a pre-trained word embedding model.
3. Contextual Embedding Layer utilizes contextual cues from surrounding words to refine the embedding of the words. These first three layers are applied to both the query and context.
4. Attention Flow Layer couples the query and context vectors and produces a set of query-aware feature vectors for each word in the context.
5. Modeling Layer employs a Recurrent Neural Network to scan the context.
6. Output Layer provides an answer to the query.
1Our code and interactive demo are available at: allenai.github.io/bi-att-flow/
1. Character Embedding Layer. Character embedding layer is responsible for mapping each word to a high-dimensional vector space. Let {x_1, . . . , x_T} and {q_1, . . . , q_J} represent the words in the input context paragraph and query, respectively. Following Kim (2014), we obtain the character-level embedding of each word using Convolutional Neural Networks (CNN). Characters are embedded into vectors, which can be considered as 1D inputs to the CNN, and whose size is the input channel size of the CNN. The outputs of the CNN are max-pooled over the entire width to obtain a fixed-size vector for each word.
2. Word Embedding Layer. Word embedding layer also maps each word to a high-dimensional vector space. We use pre-trained word vectors, GloVe (Pennington et al., 2014), to obtain the fixed word embedding of each word.
The concatenation of the character and word embedding vectors is passed to a two-layer Highway Network (Srivastava et al., 2015). The outputs of the Highway Network are two sequences of d-dimensional vectors, or more conveniently, two matrices: X ∈ R^{d×T} for the context and Q ∈ R^{d×J} for the query.
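A PyTorch sketch of such a two-layer Highway Network is shown below; this follows the standard formulation of Srivastava et al. (2015), with the ReLU nonlinearity as an illustrative choice.

```python
# Two-layer Highway Network: a learned gate mixes a transformed input with the
# input itself, y = g * f(Wx + b) + (1 - g) * x.
import torch
import torch.nn as nn

class Highway(nn.Module):
    def __init__(self, dim, num_layers=2):
        super().__init__()
        self.transforms = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])
        self.gates = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])

    def forward(self, x):
        for transform, gate in zip(self.transforms, self.gates):
            g = torch.sigmoid(gate(x))                       # transform gate
            x = g * torch.relu(transform(x)) + (1 - g) * x   # carry the rest
        return x
```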
3. Contextual Embedding Layer. We use a Long Short-Term Memory Network (LSTM) (Hochreiter & Schmidhuber, 1997) on top of the embeddings provided by the previous layers to model the temporal interactions between words. We place an LSTM in both directions, and concatenate the outputs of the two LSTMs. Hence we obtain H ∈ R^{2d×T} from the context word vectors X, and U ∈ R^{2d×J} from the query word vectors Q. Note that each column vector of H and U is 2d-dimensional because of the concatenation of the outputs of the forward and backward LSTMs, each with d-dimensional output.
It is worth noting that the first three layers of the model are computing features from the query and context at different levels of granularity, akin to the multi-stage feature computation of convolutional neural networks in the computer vision field.
4. Attention Flow Layer. Attention flow layer is responsible for linking and fusing information from the context and the query words. Unlike previously popular attention mechanisms (Weston et al., 2015; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2016), the attention flow layer is not used to summarize the query and context into single feature vectors. Instead, the attention vector at each time step, along with the embeddings from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization.
The inputs to the layer are contextual vector representations of the context H and the query U. The outputs of the layer are the query-aware vector representations of the context words, G, along with the contextual embeddings from the previous layer.
In this layer, we compute attentions in two directions: from context to query as well as from query to context. Both of these attentions, which will be discussed below, are derived from a shared similarity matrix, S ∈ R^{T×J}, between the contextual embeddings of the context (H) and the query (U), where S_{tj} indicates the similarity between the t-th context word and the j-th query word. The similarity matrix is computed by
S_{tj} = \alpha(H_{:t}, U_{:j}) \in \mathbb{R} \qquad (1)

where α is a trainable scalar function that encodes the similarity between its two input vectors, H_{:t} is the t-th column vector of H, and U_{:j} is the j-th column vector of U. We choose α(h, u) = w^{⊤}_{(S)}[h; u; h ∘ u], where w_{(S)} ∈ R^{6d} is a trainable weight vector, ∘ is elementwise multiplication, [;] is vector concatenation across row, and implicit multiplication is matrix multiplication. Now we use S to obtain the attentions and the attended vectors in both directions.
Context-to-query Attention. Context-to-query (C2Q) attention signifies which query words are most relevant to each context word. Let a_t ∈ R^J represent the attention weights on the query words by the t-th context word, Σ_j a_{tj} = 1 for all t. The attention weight is computed by a_t = softmax(S_{t:}) ∈ R^J, and subsequently each attended query vector is Ũ_{:t} = Σ_j a_{tj} U_{:j}. Hence Ũ is a 2d-by-T matrix containing the attended query vectors for the entire context.
Query-to-context Attention. Query-to-context (Q2C) attention signifies which context words have the closest similarity to one of the query words and are hence critical for answering the query.
We obtain the attention weights on the context words by b = softmax(max_col(S)) ∈ R^T, where the maximum function (max_col) is performed across the column. Then the attended context vector is h̃ = Σ_t b_t H_{:t} ∈ R^{2d}. This vector indicates the weighted sum of the most important words in the context with respect to the query. h̃ is tiled T times across the column, thus giving H̃ ∈ R^{2d×T}.
Finally, the contextual embeddings and the attention vectors are combined together to yield G, where each column vector can be considered as the query-aware representation of each context word. We define G by
G_{:t} = \beta(H_{:t}, \tilde{U}_{:t}, \tilde{H}_{:t}) \in \mathbb{R}^{d_G} \qquad (2)

where G_{:t} is the t-th column vector (corresponding to the t-th context word), β is a trainable vector function that fuses its (three) input vectors, and d_G is the output dimension of the β function. While the β function can be an arbitrary trainable neural network, such as a multi-layer perceptron, a simple concatenation as follows still shows good performance in our experiments: β(h, ũ, h̃) = [h; ũ; h ∘ ũ; h ∘ h̃] ∈ R^{8d×T} (i.e., d_G = 8d).
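Putting Equations 1 and 2 together, the attention flow layer can be sketched as follows for a single (unbatched) example; the tensor shapes follow the definitions above and the concatenation form of β.

```python
# Sketch of the attention flow layer: similarity S, context-to-query attention
# (U_tilde), query-to-context attention (H_tilde), and the fused output G.
import torch

def attention_flow(H, U, w_s):
    # H: (2d, T) context, U: (2d, J) query, w_s: (6d,) trainable weight vector
    d2, T = H.shape
    J = U.shape[1]
    Ht = H.t().unsqueeze(1).expand(T, J, d2)          # (T, J, 2d)
    Uj = U.t().unsqueeze(0).expand(T, J, d2)          # (T, J, 2d)
    S = torch.cat([Ht, Uj, Ht * Uj], dim=-1) @ w_s    # S[t, j] = w_s^T [h; u; h*u]
    a = torch.softmax(S, dim=1)                       # C2Q weights, rows sum to 1
    U_tilde = U @ a.t()                               # (2d, T) attended query vectors
    b = torch.softmax(S.max(dim=1).values, dim=0)     # Q2C weights over context
    H_tilde = (H @ b).unsqueeze(1).expand_as(H)       # tile h_tilde T times
    return torch.cat([H, U_tilde, H * U_tilde, H * H_tilde], dim=0)  # G: (8d, T)
```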
5. Modeling Layer. The input to the modeling layer is G, which encodes the query-aware representations of context words. The output of the modeling layer captures the interaction among the context words conditioned on the query. This is different from the contextual embedding layer, which captures the interaction among context words independent of the query. We use two layers of bi-directional LSTM, with an output size of d for each direction. Hence we obtain a matrix M ∈ R^{2d×T}, which is passed onto the output layer to predict the answer. Each column vector of M is expected to contain contextual information about the word with respect to the entire context paragraph and the query.
6. Output Layer. The output layer is application-specific. The modular nature of BIDAF allows us to easily swap out the output layer based on the task, with the rest of the architecture remaining exactly the same. Here, we describe the output layer for the QA task. In Section 5, we use a slight modification of this output layer for cloze-style comprehension.
The QA task requires the model to find a sub-phrase of the paragraph to answer the query. The phrase is derived by predicting the start and the end indices of the phrase in the paragraph. We obtain the probability distribution of the start index over the entire paragraph by
p^1 = softmax(w_{(p^1)}^T [G; M]),   (3)
where w_{(p^1)} ∈ R^{10d} is a trainable weight vector. For the end index of the answer phrase, we pass M to another bidirectional LSTM layer and obtain M^2 ∈ R^{2d×T}. Then we use M^2 to obtain the probability distribution of the end index in a similar manner:
p^2 = softmax(w_{(p^2)}^T [G; M^2]).   (4)
Training. We define the training loss (to be minimized) as the sum of the negative log probabilities of the true start and end indices by the predicted distributions, averaged over all examples:
L(θ) = −(1/N) Σ_{i=1}^{N} [ log(p^1_{y_i^1}) + log(p^2_{y_i^2}) ],   (5)
where θ is the set of all trainable weights in the model (the weights and biases of the CNN filters and LSTM cells, w_{(S)}, w_{(p^1)} and w_{(p^2)}), N is the number of examples in the dataset, y_i^1 and y_i^2 are the true start and end indices of the i-th example, respectively, and p_k indicates the k-th value of the vector p.

Test. The answer span (k, l) with k ≤ l and the maximum value of p^1_k p^2_l is chosen, which can be computed in linear time with dynamic programming.
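As a concrete illustration, here is a sketch of the test-time span selection: a single scan over the paragraph tracks the best start probability seen so far, which makes the search linear in the paragraph length. The distributions p1 and p2 below are illustrative stand-ins for Equations 3 and 4.

```python
# A sketch of answer extraction: choose the span (k, l), k <= l,
# maximizing p1[k] * p2[l] in linear time.
import numpy as np

def best_span(p1, p2):
    best, span = -1.0, (0, 0)
    max_p1, argmax_p1 = -1.0, 0
    for l in range(len(p2)):
        if p1[l] > max_p1:            # best start index seen so far
            max_p1, argmax_p1 = p1[l], l
        score = max_p1 * p2[l]
        if score > best:
            best, span = score, (argmax_p1, l)
    return span, best

p1 = np.array([0.1, 0.6, 0.2, 0.1])   # start-index distribution
p2 = np.array([0.1, 0.1, 0.7, 0.1])   # end-index distribution
print(best_span(p1, p2))              # ((1, 2), 0.42)
```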
3 RELATED WORK
Machine comprehension. A significant contributor to the advancement of MC models has been the availability of large datasets. Early datasets such as MCTest (Richardson et al., 2013) were too
small to train end-to-end neural models. Massive cloze test datasets (CNN/DailyMail by Hermann et al. (2015) and the Children's Book Test by Hill et al. (2016)) enabled the application of deep neural architectures to this task. More recently, Rajpurkar et al. (2016) released the Stanford Question Answering Dataset (SQuAD), with over 100,000 questions. We evaluate the performance of our comprehension system on both the SQuAD and CNN/DailyMail datasets.
Previous works in end-to-end machine comprehension use attention mechanisms in three distinct ways. The first group (largely inspired by Bahdanau et al. (2015)) uses a dynamic attention mechanism, in which the attention weights are updated dynamically given the query and the context as well as the previous attention. Hermann et al. (2015) argue that the dynamic attention model performs better than using a single fixed query vector to attend on context words on the CNN & DailyMail datasets. Chen et al. (2016) show that simply using a bilinear term for computing the attention weights in the same model drastically improves the accuracy. Wang & Jiang (2016) reverse the direction of the attention (attending on query words as the context RNN progresses) for SQuAD. In contrast to these models, BIDAF uses a memory-less attention mechanism.
The second group computes the attention weights once, which are then fed into an output layer for final prediction (e.g., Kadlec et al. (2016)). The attention-over-attention model (Cui et al., 2016) uses a 2D similarity matrix between the query and context words (similar to Equation 1) to compute the weighted average of query-to-context attention. In contrast to these models, BIDAF does not summarize the two modalities in the attention layer and instead lets the attention vectors flow into the modeling (RNN) layer.
The third group (considered as variants of Memory Networks (Weston et al., 2015)) repeats computing an attention vector between the query and the context through multiple layers, typically referred to as multi-hop (Sordoni et al., 2016; Dhingra et al., 2016). Shen et al. (2016) combine Memory Networks with Reinforcement Learning in order to dynamically control the number of hops. One can also extend our BIDAF model to incorporate multiple hops.
Visual question answering. The task of question answering has also gained a lot of interest in the computer vision community. Early works on visual question answering (VQA) involved encoding the question using an RNN, encoding the image using a CNN and combining them to answer the question (Antol et al., 2015; Malinowski et al., 2015). Attention mechanisms have also been successfully employed for the VQA task and can be broadly clustered based on the granularity of their attention and the approach to construct the attention matrix. At the coarse level of granularity, the question attends to different patches in the image (Zhu et al., 2016; Xiong et al., 2016a). At a finer level, each question word attends to each image patch and the highest attention value for each spatial location (Xu & Saenko, 2016) is adopted. A hybrid approach is to combine question representations at multiple levels of granularity (unigrams, bigrams, trigrams) (Yang et al., 2015). Several approaches to constructing the attention matrix have been used, including element-wise product, element-wise sum, concatenation and Multimodal Compact Bilinear Pooling (Fukui et al., 2016).
Lu et al. (2016) have recently shown that in addition to attending from the question to image patches, attending from the image back to the question words provides an improvement on the VQA task. This finding in the visual domain is consistent with our finding in the language domain, where our bi-directional attention between the query and context provides improved results. Their model, however, uses the attention weights directly in the output layer and does not take advantage of the attention flow to the modeling layer.
# 4 QUESTION ANSWERING EXPERIMENTS
In this section, we evaluate our model on the task of question answering using the recently released SQuAD (Rajpurkar et al., 2016), which has gained significant attention over the past few months. In the next section, we evaluate our model on the task of cloze-style reading comprehension.
Dataset. SQuAD is a machine comprehension dataset on a large set of Wikipedia articles, with more than 100,000 questions. The answer to each question is always a span in the context. The model is given credit if its answer matches one of the human-written answers. Two metrics are used to evaluate models: Exact Match (EM) and a softer metric, F1 score, which measures the weighted average of the precision and recall rate at character level. The dataset consists of 90k/10k
(a) Results on the SQuAD test set:

                                   Single Model        Ensemble
                                   EM      F1          EM      F1
Logistic Regression Baseline^a     40.4    51.0        -       -
Dynamic Chunk Reader^b             62.5    71.0        -       -
Fine-Grained Gating^c              62.5    73.3        -       -
Match-LSTM^d                       64.7    73.7        67.9    77.0
Multi-Perspective Matching^e       65.5    75.1        68.2    77.2
Dynamic Coattention Networks^f     66.2    75.9        71.6    80.4
R-Net^g                            68.4    77.5        72.1    79.7
BIDAF (Ours)                       68.0    77.3        73.3    81.1

(b) Ablations on the SQuAD dev set:

                       EM      F1
No char embedding      65.0    75.4
No word embedding      55.5    66.8
No C2Q attention       57.2    67.7
No Q2C attention       63.6    73.7
Dynamic attention      63.5    73.6
BIDAF (single)         67.7    77.3
BIDAF (ensemble)       72.6    80.7
Table 1: (1a) The performance of our model BIDAF and competing approaches by Rajpurkar et al. (2016)^a, Yu et al. (2016)^b, Yang et al. (2016)^c, Wang & Jiang (2016)^d, IBM Watson^e (unpublished), Xiong et al. (2016b)^f, and Microsoft Research Asia^g (unpublished) on the SQuAD test set. A concurrent work by Lee et al. (2016) does not report the test scores. All results shown here reflect the SQuAD leaderboard (stanford-qa.com) as of 6 Dec 2016, 12pm PST. (1b) The performance of our model and its ablations on the SQuAD dev set. Ablation results are presented only for single runs.
train/dev question-context tuples with a large hidden test set. It is one of the largest available MC datasets with human-written questions and serves as a great test bed for our model.
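For reference, a simplified sketch of the two metrics is shown below; it computes overlap between whitespace tokens and omits the answer normalization (lowercasing, stripping punctuation and articles) performed by the official evaluation script, so it is an approximation rather than the exact scorer.

```python
# Simplified sketches of the Exact Match and F1 metrics, assuming
# whitespace tokenization and no answer normalization.
from collections import Counter

def exact_match(prediction, answer):
    return float(prediction == answer)

def f1_score(prediction, answer):
    pred, gold = prediction.split(), answer.split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("1 to 7", "articles 1 to 7"))  # 0.0
print(f1_score("1 to 7", "articles 1 to 7"))     # 0.857...
```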
Model Details. The model architecture used for this task is depicted in Figure 1. Each paragraph and question are tokenized by a regular-expression-based word tokenizer (PTB Tokenizer) and fed into the model. We use 100 1D filters for CNN char embedding, each with a width of 5. The hidden state size (d) of the model is 100. The model has about 2.6 million parameters. We use the AdaDelta (Zeiler, 2012) optimizer, with a minibatch size of 60 and an initial learning rate of 0.5, for 12 epochs. A dropout (Srivastava et al., 2014) rate of 0.2 is used for the CNN, all LSTM layers, and the linear transformation before the softmax for the answers. During training, the moving averages of all weights of the model are maintained with the exponential decay rate of 0.999. At test time, the moving averages instead of the raw weights are used. The training process takes roughly 20 hours on a single Titan X GPU. We also train an ensemble model consisting of 12 training runs with the identical architecture and hyper-parameters. At test time, we choose the answer with the highest sum of confidence scores amongst the 12 runs for each question.
Results. The results of our model and competing approaches on the hidden test set are summarized in Table 1a. BIDAF (ensemble) achieves an EM score of 73.3 and an F1 score of 81.1, outperforming all previous approaches.
Ablations. Table 1b shows the performance of our model and its ablations on the SQuAD dev set. Both char-level and word-level embeddings contribute towards the model's performance. We conjecture that word-level embedding is better at representing the semantics of each word as a whole, while char-level embedding can better handle out-of-vocab (OOV) or rare words. To evaluate bi-directional attention, we remove C2Q and Q2C attentions. For ablating C2Q attention, we replace the attended question vector Ũ with the average of the output vectors of the question's contextual embedding layer (LSTM). C2Q attention proves to be critical with a drop of more than 10 points on both metrics. For ablating Q2C attention, the output of the attention layer, G, does not include terms that have the attended Q2C vectors, H̃. To evaluate the attention flow, we study a dynamic attention model, where the attention is dynamically computed within the modeling layer's LSTM, following previous work (Bahdanau et al., 2015; Wang & Jiang, 2016). This is in contrast with our approach, where the attention is pre-computed before flowing to the modeling layer. Despite being a simpler attention mechanism, our proposed static attention outperforms the dynamically computed attention by more than 3 points. We conjecture that separating out the attention layer results in a richer set of features computed in the first 4 layers, which are then incorporated by the modeling layer. We also show the performance of BIDAF with several different definitions of α and β functions (Equations 1 and 2) in Appendix B.
Query      Layer        Closest words in the Context using cosine similarity
When       Word         when, When, After, after, He, he, But, but, before, Before
When       Contextual   When, when, 1945, 1991, 1971, 1967, 1990, 1972, 1965, 1953
Where      Word         Where, where, It, IT, it, they, They, that, That, city
Where      Contextual   where, Where, Rotterdam, area, Nearby, location, outside, Area, across, locations
Who        Word         Who, who, He, he, had, have, she, She, They, they
Who        Contextual   who, whose, whom, Guiscard, person, John, Thomas, families, Elway, Louis
city       Word         City, city, town, Town, Capital, capital, district, cities, province, Downtown
city       Contextual   city, City, Angeles, Paris, Prague, Chicago, Port, Pittsburgh, London, Manhattan
January    Word         July, December, June, October, January, September, February, April, November, March
January    Contextual   January, March, December, August, December, July, July, July, March, December
Seahawks   Word         Seahawks, Broncos, 49ers, Ravens, Chargers, Steelers, quarterback, Vikings, Colts, NFL
Seahawks   Contextual   Seahawks, Broncos, Panthers, Vikings, Packers, Ravens, Patriots, Falcons, Steelers, Chargers
date       Word         date, dates, until, Until, June, July, Year, year, December, deadline
date       Contextual   date, dates, December, July, January, October, June, November, March, February
Table 2: Closest context words to a given query word, using a cosine similarity metric computed in the Word Embedding feature space and the Phrase Embedding feature space.
Figure 2: (a) t-SNE visualizations of the month names embedded in the two feature spaces. The contextual embedding layer is able to distinguish the two usages of the word May using context from the surrounding text. (b) Venn diagram of the questions answered correctly by our model and the more traditional baseline (Rajpurkar et al., 2016). (c) Correctly answered questions broken down by the 10 most frequent first words in the question.
Visualizations. We now provide a qualitative analysis of our model on the SQuAD dev set. First, we visualize the feature spaces after the word and contextual embedding layers. These two layers are responsible for aligning the embeddings between the query and context words which are the inputs to the subsequent attention layer. To visualize the embeddings, we choose a few frequent query words in the dev data and look at the context words that have the highest cosine similarity to the query words (Table 2). At the word embedding layer, query words such as When, Where and Who are not well aligned to possible answers in the context, but this dramatically changes in the contextual embedding layer which has access to context from surrounding words and is just 1 layer below the attention layer. When begins to match years, Where matches locations, and Who matches names.
We also visualize these two feature spaces using t-SNE in Figure 2. t-SNE is performed on a large fraction of dev data but we only plot data points corresponding to the months of the year. An interesting pattern emerges in the Word space, where May is separated from the rest of the months because May has multiple meanings in the English language. The contextual embedding layer uses contextual cues from surrounding words and is able to separate the usages of the word May. Finally we visualize the attention matrices for some question-context tuples in the dev data in Figure 3. In the first example, Where matches locations and in the second example, many matches quantities and numerical symbols. Also, entities in the question typically attend to the same entities in the context, thus providing a feature for the model to localize possible answers.
Discussions. We analyze the performance of our model against a traditional language-feature-based baseline (Rajpurkar et al., 2016). Figure 2b shows a Venn diagram of the dev set questions correctly answered by the models. Our model is able to answer more than 86% of the questions
Figure 3: Attention matrices for question-context tuples. The left palette shows the context paragraph (correct answer in red and underlined), the middle palette shows the attention matrix (each row is a question word, each column is a context word), and the right palette shows the top attention points for each question word, above a threshold.
correctly answered by the baseline. The 14% answered incorrectly show no clear pattern. This suggests that neural architectures are able to exploit much of the information captured by the language features. We also break this comparison down by the first words in the questions (Figure 2c). Our model outperforms the traditional baseline comfortably in every category.
Error Analysis. We randomly select 50 incorrect questions (based on EM) and categorize them into 6 classes. 50% of errors are due to imprecise boundaries of the answers, 28% involve syntactic complications and ambiguities, 14% are paraphrase problems, 4% require external knowledge, 2% need multiple sentences to answer, and 2% are due to mistakes during tokenization. See Appendix A for examples of the error modes.
# 5 CLOZE TEST EXPERIMENTS
We also evaluate our model on the task of cloze-style reading comprehension using the CNN and Daily Mail datasets (Hermann et al., 2015).
Dataset. In a cloze test, the reader is asked to fill in words that have been removed from a passage, for measuring one's ability to comprehend text. Hermann et al. (2015) have recently compiled a massive cloze-style comprehension dataset, consisting of 300k/4k/3k and 879k/65k/53k (train/dev/test) examples from CNN and DailyMail news articles, respectively. Each example has a news article and an incomplete sentence extracted from the human-written summary of the article. To distinguish this task from language modeling and force one to refer to the article to predict the correct missing word, the missing word is always a named entity, anonymized with a random ID. Also, the IDs must be shuffled constantly during test, which is also critical for full anonymization.
Model Details. The model architecture used for this task is very similar to that for SQuAD (Section 4) with only a few small changes to adapt it to the cloze test. Since each answer in the CNN/DailyMail datasets is always a single word (entity), we only need to predict the start index (p^1); the prediction for the end index (p^2) is omitted from the loss function. Also, we mask out all non-entity words in the final classification layer so that they are forced to be excluded from possible answers. Another important difference from SQuAD is that the answer entity might appear more than once in the context paragraph. To address this, we follow a similar strategy from Kadlec et al. (2016). During training, after we obtain p^1, we sum all probability values of the entity instances
in the context that correspond to the correct answer. Then the loss function is computed from the summed probability. We use a minibatch size of 48 and train for 8 epochs, with early stopping when the accuracy on the validation data starts to drop. Inspired by the window-based method (Hill et al., 2016), we split each article into short sentences, where each sentence is a 19-word window around each entity (hence the same word might appear in multiple sentences). The RNNs in BIDAF are not feed-forwarded or back-propagated across sentences, which speeds up the training process through parallelization. The entire training process takes roughly 60 hours on eight Titan X GPUs. The other hyper-parameters are identical to the model described in Section 4.
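A small sketch of this summed-probability loss is shown below; the distribution and the entity positions are illustrative placeholders.

```python
# A sketch of the cloze-style loss: probability mass is summed over all
# positions where the answer entity occurs before taking the log.
import numpy as np

def cloze_loss(p1, answer_positions):
    # p1: start-index distribution over the context (Equation 3);
    # answer_positions: indices where the correct entity appears.
    return -np.log(p1[answer_positions].sum())

p1 = np.array([0.05, 0.40, 0.05, 0.30, 0.20])
print(cloze_loss(p1, [1, 3]))   # entity at positions 1 and 3 -> -log(0.7)
```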
Results. The results of our single-run models and competing approaches on the CNN/DailyMail datasets are summarized in Table 3. ∗ indicates ensemble methods. BIDAF outperforms previous single-run models on both datasets for both val and test data. On the DailyMail test, our single-run model even outperforms the best ensemble method.
                                              CNN                DailyMail
                                              val     test       val     test
Attentive Reader (Hermann et al., 2015)       61.6    63.0       70.5    69.0
MemNN (Hill et al., 2016)                     63.4    66.8       -       -
AS Reader (Kadlec et al., 2016)               68.6    69.5       75.0    73.9
DER Network (Kobayashi et al., 2016)          71.3    72.9       -       -
Iterative Attention (Sordoni et al., 2016)    72.6    73.3       -       -
EpiReader (Trischler et al., 2016)            73.4    74.0       -       -
Stanford AR (Chen et al., 2016)               73.8    73.6       77.6    76.6
GA Reader (Dhingra et al., 2016)              73.0    73.8       76.7    75.7
AoA Reader (Cui et al., 2016)                 73.1    74.4       -       -
ReasoNet (Shen et al., 2016)                  72.9    74.7       77.6    76.6
BIDAF (Ours)                                  76.3    76.9       80.3    79.6
MemNN* (Hill et al., 2016)                    66.2    69.4       -       -
AS Reader* (Kadlec et al., 2016)              73.9    75.4       78.7    77.7
Iterative Attention* (Sordoni et al., 2016)   74.5    75.7       -       -
GA Reader* (Dhingra et al., 2016)             76.4    77.4       79.1    78.1
Stanford AR* (Chen et al., 2016)              77.2    77.6       80.2    79.2
Table 3: Results on CNN/DailyMail datasets. We also include the results of previous ensemble methods (marked with ∗) for completeness.
# 6 CONCLUSION
In this paper, we introduce BIDAF, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization. The experimental evaluations show that our model achieves state-of-the-art results on the Stanford Question Answering Dataset (SQuAD) and the CNN/DailyMail cloze test. The ablation analyses demonstrate the importance of each component in our model. The visualizations and discussions show that our model is learning a suitable representation for MC and is capable of answering complex questions by attending to correct locations in the given paragraph. Future work involves extending our approach to incorporate multiple hops of the attention layer.
ACKNOWLEDGMENTS
This research was supported by the NSF (IIS 1616112), NSF (III 1703166), Allen Institute for AI (66-9175), Allen Distinguished Investigator Award, Google Research Faculty Award, and Samsung GRO Award. We thank the anonymous reviewers for their helpful comments.
# REFERENCES
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In ICCV, 2015.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn/daily mail reading comprehension task. In ACL, 2016.
Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over- attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.
Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016.
Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. In EMNLP, 2016.
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading childrenâs books with explicit memory representations. In ICLR, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In ACL, 2016.
Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representation with max-pooling improves machine reading. In NAACL-HLT, 2016.
Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436, 2016.
Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In NIPS, 2016.
Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, 2015.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In EMNLP, 2016.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, 2013.
Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284, 2016.
Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. In EMNLP, 2016.
Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR, 2015.
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016a.
Caiming Xiong, Victor Zhong, and Richard Socher. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604, 2016b.
Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, 2016.
Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W Cohen, and Ruslan Salakhutdinov. Words or characters? fine-grained gating for reading comprehension. arXiv preprint arXiv:1611.01724, 2016.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015.
Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end reading comprehension with dynamic answer chunk ranking. arXiv preprint arXiv:1610.09996, 2016.
Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
Yuke Zhu, Oliver Groth, Michael S. Bernstein, and Li Fei-Fei. Visual7w: Grounded question an- swering in images. In CVPR, 2016.
# A ERROR ANALYSIS
Table 4 summarizes the modes of errors by BIDAF and shows examples for each category of error in SQuAD.
Imprecise answer boundaries (50%)
  Context: "The Free Movement of Workers Regulation articles 1 to 7 set out the main provisions on equal treatment of workers."
  Question: "Which articles of the Free Movement of Workers Regulation set out the primary provisions on equal treatment of workers?"
  Prediction: "1 to 7"   Answer: "articles 1 to 7"

Syntactic complications and ambiguities (28%)
  Context: "A piece of paper was later found on which Luther had written his last statement."
  Question: "What was later discovered written by Luther?"
  Prediction: "A piece of paper"   Answer: "his last statement"

Paraphrase problems (14%)
  Context: "Generally, education in Australia follows the three-tier model which includes primary education (primary schools), followed by secondary education (secondary schools/high schools) and tertiary education (universities and/or TAFE colleges)."
  Question: "What is the first model of education, in the Australian system?"
  Prediction: "three-tier"   Answer: "primary education"

External knowledge (4%)
  Context: "On June 4, 2014, the NFL announced that the practice of branding Super Bowl games with Roman numerals, a practice established at Super Bowl V, would be temporarily suspended, and that the game would be named using Arabic numerals as Super Bowl 50 as opposed to Super Bowl L."
  Question: "If Roman numerals were used in the naming of the 50th Super Bowl, which one would have been used?"
  Prediction: "Super Bowl 50"   Answer: "L"

Multi-sentence (2%)
  Context: "Over the next several years in addition to host to host interactive connections the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s."
  Question: "What set the stage for Merit's role in NSFNET?"
  Prediction: "All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s"   Answer: "Ethernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the network"

Incorrect preprocessing (2%)
  Context: "English chemist John Mayow (1641-1679) refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus or just nitroaereus."
  Question: "John Mayow died in what year?"
  Prediction: "1641-1679"   Answer: "1679"
Table 4: Error analysis on SQuAD. We randomly selected EM-incorrect answers and classified them into 6 different categories. Only relevant sentence(s) from the context are shown for brevity.
# B VARIATIONS OF SIMILARITY AND FUSION FUNCTIONS
                           EM      F1
Eqn. 1: dot product        65.5    75.5
Eqn. 1: linear             59.5    69.7
Eqn. 1: bilinear           61.6    71.8
Eqn. 1: linear after MLP   66.2    76.4
Eqn. 2: MLP after concat   67.1    77.0
BIDAF (single)             68.0    77.3
Table 5: Variations of similarity function α (Equation 1) and fusion function β (Equation 2) and their performance on the dev data of SQuAD. See Appendix B for the details of each variation.
In this appendix section, we experimentally demonstrate how different choices of the similarity function α (Equation 1) and the fusion function β (Equation 2) impact the performance of our model. Each variation is defined as follows:
Eqn. 1: dot product. Dot product α is defined as

α(h, u) = h^T u,   (6)

where ^T indicates matrix transpose. The dot product has been used for the measurement of similarity between two vectors by Hill et al. (2016).
Eqn. 1: linear. Linear α is defined as

α(h, u) = w_{lin}^T [h; u],   (7)

where w_{lin} ∈ R^{4d} is a trainable weight vector. This can be considered a simplification of Equation 1 obtained by dropping the term h ∘ u in the concatenation.
Eqn. 1: bilinear. Bilinear α is defined as

α(h, u) = h^T W_{bi} u,   (8)

where W_{bi} ∈ R^{2d×2d} is a trainable weight matrix. A bilinear term has been used by Chen et al. (2016).
Eqn. 1: linear after MLP. We can also perform a linear mapping after a single layer of perceptron:
α(h, u) = w_{mlp}^T tanh(W_{mlp} [h; u] + b_{mlp}),   (9)

where W_{mlp} and b_{mlp} are a trainable weight matrix and bias, respectively. A linear mapping after a perceptron layer has been used by Hermann et al. (2015).
Eqn. 2: MLP after concatenation. We can define β as
β(h, ũ, h̃) = max(0, W_{mlp} [h; ũ; h ∘ ũ; h ∘ h̃] + b_{mlp}),

where W_{mlp} ∈ R^{2d×8d} and b_{mlp} ∈ R^{2d} are a trainable weight matrix and bias. This is equivalent to adding a ReLU after linearly transforming the original definition of β. Since the output dimension of β changes, the input dimension of the first LSTM of the modeling layer will change as well.
The results of these variations on the dev data of SQuAD are shown in Table 5. It is important to note that there are non-trivial gaps between our definition of α and the other definitions employed by previous work. Adding an MLP in β does not seem to help, yielding a slightly worse result than β without the MLP.
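For concreteness, the variants above can be written compactly as follows for single input vectors h and u of length 2d; all weights are randomly initialized placeholders standing in for trained parameters.

```python
# A sketch of the similarity functions compared in Table 5.
import numpy as np

d = 4
rng = np.random.default_rng(0)
h, u = rng.standard_normal(2 * d), rng.standard_normal(2 * d)
w_S = rng.standard_normal(6 * d)
w_lin = rng.standard_normal(4 * d)
W_bi = rng.standard_normal((2 * d, 2 * d))
W_mlp, b_mlp = rng.standard_normal((4 * d, 4 * d)), rng.standard_normal(4 * d)
w_mlp = rng.standard_normal(4 * d)

alpha = {
    "dot product":      h @ u,                                  # Eqn. 6
    "linear":           w_lin @ np.concatenate([h, u]),         # Eqn. 7
    "bilinear":         h @ W_bi @ u,                           # Eqn. 8
    "linear after MLP": w_mlp @ np.tanh(W_mlp @ np.concatenate([h, u]) + b_mlp),  # Eqn. 9
    "BIDAF (Eqn. 1)":   w_S @ np.concatenate([h, u, h * u]),
}
print(alpha)
```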
1611.01626 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407
Published as a conference paper at ICLR 2017
# COMBINING POLICY GRADIENT AND Q-LEARNING
# Brendan OâDonoghue, R´emi Munos, Koray Kavukcuoglu & Volodymyr Mnih Deepmind {bodonoghue,munos,korayk,vmnih}@google.com
# ABSTRACT
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
# 1 INTRODUCTION
In reinforcement learning an agent explores an environment and through the use of a reward signal learns to optimize its behavior to maximize the expected long-term return. Reinforcement learning has seen success in several areas including robotics (Lin, 1993; Levine et al., 2015), computer games (Mnih et al., 2013; 2015), online advertising (Pednault et al., 2002), board games (Tesauro, 1995; Silver et al., 2016), and many others. For an introduction to reinforcement learning we refer to the classic text by Sutton & Barto (1998). In this paper we consider model-free reinforcement learning, where the state-transition function is not known or learned. There are many different algorithms for model-free reinforcement learning, but most fall into one of two families: action-value fitting and policy gradient techniques.
Action-value techniques involve fitting a function, called the Q-values, that captures the expected return for taking a particular action at a particular state, and then following a particular policy thereafter. Two alternatives we discuss in this paper are SARSA (Rummery & Niranjan, 1994) and Q-learning (Watkins, 1989), although there are many others. SARSA is an on-policy algorithm whereby the action-value function is fit to the current policy, which is then refined by being mostly greedy with respect to those action-values. On the other hand, Q-learning attempts to find the Q-values associated with the optimal policy directly and does not fit to the policy that was used to generate the data. Q-learning is an off-policy algorithm that can use data generated by another agent or from a replay buffer of old experience. Under certain conditions both SARSA and Q-learning can be shown to converge to the optimal Q-values, from which we can derive the optimal policy (Sutton, 1988; Bertsekas & Tsitsiklis, 1996).
In policy gradient techniques the policy is represented explicitly and we improve the policy by updating the parameters in the direction of the gradient of the performance (Sutton et al., 1999; Silver et al., 2014; Kakade, 2001). Online policy gradient typically requires an estimate of the action-value function of the current policy. For this reason they are often referred to as actor-critic methods, where the actor refers to the policy and the critic to the estimate of the action-value function (Konda & Tsitsiklis, 2003). Vanilla actor-critic methods are on-policy only, although some attempts have been made to extend them to off-policy data (Degris et al., 2012; Levine & Koltun, 2013).
In this paper we derive a link between the Q-values induced by a policy and the policy itself when the policy is the fixed point of a regularized policy gradient algorithm (where the gradient vanishes). This connection allows us to derive an estimate of the Q-values from the current policy, which we can refine using off-policy data and Q-learning. We show in the tabular setting that when the regularization penalty is small (the usual case) the resulting policy is close to the policy that would be found without the addition of the Q-learning update. Separately, we show that regularized actor-critic methods can be interpreted as action-value fitting methods, where the Q-values have been parameterized in a particular way. We conclude with some numerical examples that provide empirical evidence of improved data efficiency and stability of PGQL.
1.1 PRIOR WORK
Here we highlight various axes along which our work can be compared to others. In this paper we use entropy regularization to ensure exploration in the policy, which is a common practice in policy gradient (Williams & Peng, 1991; Mnih et al., 2016). An alternative is to use KL-divergence instead of entropy as a regularizer, or as a constraint on how much deviation is permitted from a prior policy (Bagnell & Schneider, 2003; Peters et al., 2010; Schulman et al., 2015; Fox et al., 2015). Natural policy gradient can also be interpreted as putting a constraint on the KL-divergence at each step of the policy improvement (Amari, 1998; Kakade, 2001; Pascanu & Bengio, 2013). In Sallans & Hinton (2004) the authors use a Boltzmann exploration policy over estimated Q-values which they update using TD-learning. In Heess et al. (2012) this was extended to use an actor-critic algorithm instead of TD-learning; however, the two updates were not combined as we have done in this paper. In Azar et al. (2012) the authors develop an algorithm called dynamic policy programming, whereby they apply a Bellman-like update to the action-preferences of a policy, which is similar in spirit to the update we describe here. In Norouzi et al. (2016) the authors augment a maximum likelihood objective with a reward in a supervised learning setting, and develop a connection that resembles the one we develop here between the policy and the Q-values. Other works have attempted to combine on and off-policy learning, primarily using action-value fitting methods (Wang et al., 2013; Hausknecht & Stone, 2016; Lehnert & Precup, 2015), with varying degrees of success. In this paper we establish a connection between actor-critic algorithms and action-value learning algorithms. In particular we show that TD-actor-critic (Konda & Tsitsiklis, 2003) is equivalent to expected-SARSA (Sutton & Barto, 1998, Exercise 6.10) with Boltzmann exploration where the Q-values are decomposed into advantage function and value function. The algorithm we develop extends actor-critic with a Q-learning style update that, due to the decomposition of the Q-values, resembles the update of the dueling architecture (Wang et al., 2016). Recently, the field of deep reinforcement learning, i.e., the use of deep neural networks to represent action-values or a policy, has seen a lot of success (Mnih et al., 2015; 2016; Silver et al., 2016; Riedmiller, 2005; Lillicrap et al., 2015; Van Hasselt et al., 2016). In the examples section we use a neural network with PGQL to play the Atari games suite.
# 2 REINFORCEMENT LEARNING
We consider the infinite horizon, discounted, finite state and action space Markov decision process, with state space S, action space A and rewards at each time period denoted by r_t ∈ R. A policy π : S × A → R_+ is a mapping from state-action pair to the probability of taking that action at that state, so it must satisfy Σ_{a∈A} π(s, a) = 1 for all states s ∈ S. Any policy π induces a probability distribution over visited states, d^π : S → R_+ (which may depend on the initial state), so the probability of seeing state-action pair (s, a) ∈ S × A is d^π(s) π(s, a).
In reinforcement learning an "agent" interacts with an environment over a number of time steps. At each time step t the agent receives a state s_t and a reward r_t, and selects an action a_t from the policy π(s_t, ·), at which point the agent moves to the next state s_{t+1} ∼ P(·, s_t, a_t), where P(s', s, a) is the probability of transitioning from state s to state s' after taking action a. This continues until the agent encounters a terminal state (after which the process is typically restarted). The goal of the agent is to find a policy π that maximizes the expected total discounted return J(π) = E(Σ_{t=0}^∞ γ^t r_t | π), where the expectation is with respect to the initial state distribution, the state-transition probabilities, and the policy, and where γ ∈ (0, 1) is the discount factor that, loosely speaking, controls how much the agent prioritizes long-term versus short-term rewards. Since the agent starts with no knowledge
of the environment it must continually explore the state space and so will typically use a stochastic policy.
Action-values. The action-value, or Q-value, of a particular state under policy π is the expected total discounted return from taking that action at that state and following π thereafter, i.e., Q^π(s, a) = E(Σ_{t=0}^∞ γ^t r_t | s_0 = s, a_0 = a, π). The value of state s under policy π is denoted by V^π(s) = E(Σ_{t=0}^∞ γ^t r_t | s_0 = s, π), which is the expected total discounted return of policy π from state s. The optimal action-value function is denoted Q* and satisfies Q*(s, a) = max_π Q^π(s, a) for each (s, a). The policy that achieves the maximum is the optimal policy π*, with value function V*. The advantage function is the difference between the action-value and the value function, i.e., A^π(s, a) = Q^π(s, a) − V^π(s), and represents the additional expected reward of taking action a over the average performance of the policy from state s. Since V^π(s) = Σ_a π(s, a) Q^π(s, a), we have the identity Σ_a π(s, a) A^π(s, a) = 0, which simply states that the policy π has no advantage over itself.
Bellman equation. The Bellman operator T^π (Bellman, 1957) for policy π is defined as
T^π Q(s, a) = E( r(s, a) + γ Q(s', b) ),

where the expectation is over the next state s' ∼ P(·, s, a), the reward r(s, a), and the action b from policy π. The Q-value function for policy π is the fixed point of the Bellman operator for π, i.e., T^π Q^π = Q^π. The optimal Bellman operator T* is defined as

T* Q(s, a) = E( r(s, a) + γ max_b Q(s', b) ),

where the expectation is over the next state s' ∼ P(·, s, a) and the reward r(s, a). The optimal Q-value function is the fixed point of the optimal Bellman equation, i.e., T* Q* = Q*. Both the π-Bellman operator and the optimal Bellman operator are γ-contraction mappings in the sup-norm, i.e., ||T Q_1 − T Q_2||_∞ ≤ γ ||Q_1 − Q_2||_∞, for any Q_1, Q_2 ∈ R^{S×A}. From this fact one can show that the fixed point of each operator is unique, and that value iteration converges, i.e., (T^π)^k Q → Q^π and (T*)^k Q → Q* from any initial Q (Bertsekas, 2005).
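As a concrete illustration, here is a small tabular sketch of the two operators and of value iteration; the random MDP, the uniform policy, and the discount factor are illustrative placeholders.

```python
# A tabular sketch of the two Bellman operators and value iteration.
import numpy as np

S, A, gamma = 5, 3, 0.9
rng = np.random.default_rng(0)
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)  # transitions
r = rng.random((S, A))                                        # rewards
pi = np.full((S, A), 1.0 / A)                                 # a fixed policy

def T_pi(Q):   # policy Bellman operator: E[r + gamma * Q(s', b)], b ~ pi
    return r + gamma * P @ (pi * Q).sum(axis=1)

def T_star(Q): # optimal Bellman operator: E[r + gamma * max_b Q(s', b)]
    return r + gamma * P @ Q.max(axis=1)

Q_pi, Q_star = np.zeros((S, A)), np.zeros((S, A))
for _ in range(500):            # both iterations are gamma-contractions
    Q_pi, Q_star = T_pi(Q_pi), T_star(Q_star)
# residuals are ~0: the iterates have reached the unique fixed points
print(np.abs(T_pi(Q_pi) - Q_pi).max(), np.abs(T_star(Q_star) - Q_star).max())
```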
2.1 ACTION-VALUE LEARNING
In value based reinforcement learning we approximate the Q-values using a function approximator. We then update the parameters so that the Q-values are as close to the fixed point of a Bellman equation as possible. If we denote by Q(s, a; θ) the approximate Q-values parameterized by θ, then Q-learning updates the Q-values along direction E_{s,a} (T* Q(s, a; θ) − Q(s, a; θ)) ∇_θ Q(s, a; θ) and SARSA updates the Q-values along direction E_{s,a} (T^π Q(s, a; θ) − Q(s, a; θ)) ∇_θ Q(s, a; θ). In the online setting the Bellman operator is approximated by sampling and bootstrapping, whereby the Q-values at any state are updated using the Q-values from the next visited state. Exploration is achieved by not always taking the action with the highest Q-value at each time step. One common technique called 'epsilon greedy' is to sample a random action with probability ε > 0, where ε starts high and decreases over time. Another popular technique is 'Boltzmann exploration', where the policy is given by the softmax over the Q-values with a temperature T, i.e., π(s, a) = exp(Q(s, a)/T) / Σ_b exp(Q(s, b)/T), where it is common to decrease the temperature over time.
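The following sketch illustrates online tabular Q-learning with Boltzmann exploration; the env object (whose reset() returns a state index and whose step(a) returns a (next_state, reward, done) tuple) and the hyper-parameters are assumptions for illustration only.

```python
# A sketch of online tabular Q-learning with Boltzmann exploration.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def q_learning(env, n_states, n_actions, episodes=500,
               lr=0.1, gamma=0.99, temperature=1.0):
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = env.reset(), False   # hypothetical env interface
        while not done:
            pi = softmax(Q[s] / temperature)       # Boltzmann exploration
            a = rng.choice(n_actions, p=pi)
            s_next, reward, done = env.step(a)
            target = reward + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += lr * (target - Q[s, a])     # move toward T*Q(s, a)
            s = s_next
    return Q
```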
2.2 POLICY GRADIENT
Alternatively, we can parameterize the policy directly and attempt to improve it via gradient ascent on the performance J. The policy gradient theorem (Sutton et al., 1999) states that the gradient of J with respect to the parameters of the policy is given by
∇_θ J(π) = E_{s,a} Q^π(s, a) ∇_θ log π(s, a),   (1)
where the expectation is over (s, a) with probability d^π(s) π(s, a). In the original derivation of the policy gradient theorem the expectation is over the discounted distribution of states, i.e., over d^{π,s_0}(s) = Σ_{t=0}^∞ γ^t Pr{s_t = s | s_0, π}. However, the gradient update in that case will assign a low
weight to states that take a long time to reach and can therefore have poor empirical performance. In practice the non-discounted distribution of states is frequently used instead. In certain cases this is equivalent to maximizing the average (i.e., non-discounted) policy performance, even when QÏ uses a discount factor (Thomas, 2014). Throughout this paper we will use the non-discounted distribution of states.
In the online case it is common to add an entropy regularizer to the gradient in order to prevent the policy becoming deterministic. This ensures that the agent will explore continually. In that case the (batch) update becomes
Δθ ∝ E_{s,a} Q^π(s, a) ∇_θ log π(s, a) + α E_s ∇_θ H^π(s),   (2)
where H^π(s) = −Σ_a π(s, a) log π(s, a) denotes the entropy of policy π, and α > 0 is the regularization penalty parameter. Throughout this paper we will make use of entropy regularization; however, many of the results hold for other choices of regularizers with only minor modification, e.g., KL-divergence. Note that equation (2) requires exact knowledge of the Q-values. In practice they can be estimated, e.g., by the sum of discounted rewards along an observed trajectory (Williams, 1992), and the policy gradient will still perform well (Konda & Tsitsiklis, 2003).
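As an illustration, here is a sketch of update (2) for a tabular softmax policy; the critic estimate Q_hat and the sampled batch are placeholders, and the entropy term is applied per sampled pair rather than per state, which weights states by their visitation frequency.

```python
# A sketch of the entropy-regularized policy gradient update (Equation 2)
# for a tabular softmax policy over action preferences theta.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pg_update(theta, batch, Q_hat, lr=0.1, alpha=0.01):
    # theta: (S, A) action preferences; batch: sampled (s, a) pairs.
    grad = np.zeros_like(theta)
    for s, a in batch:
        pi = softmax(theta[s])
        glog = -pi.copy(); glog[a] += 1.0   # grad of log pi(s,a) wrt theta[s]
        grad[s] += Q_hat[s, a] * glog
        H = -(pi * np.log(pi)).sum()
        grad[s] += alpha * (-pi * (np.log(pi) + H))  # grad of entropy H(s)
    return theta + lr * grad / len(batch)
```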
# 3 REGULARIZED POLICY GRADIENT ALGORITHM
In this section we derive a relationship between the policy and the Q-values when using a regularized policy gradient algorithm. This allows us to transform a policy into an estimate of the Q-values. We then show that for small regularization the Q-values induced by the policy at the fixed point of the algorithm have a small Bellman error in the tabular case.
3.1 TABULAR CASE
Consider the fixed points of the entropy regularized policy gradient update (2). Let us define f(θ) = E_{s,a} Q^π(s, a) ∇_θ log π(s, a) + α E_s ∇_θ H^π(s), and g_s(π) = Σ_a π(s, a) for each s. A fixed point is one where we can no longer update θ in the direction of f(θ) without violating one of the constraints g_s(π) = 1, i.e., where f(θ) is in the span of the vectors {∇_θ g_s(π)}. In other words, any fixed point must satisfy f(θ) = Σ_s λ_s ∇_θ g_s(π), where for each s the Lagrange multiplier λ_s ∈ R ensures that g_s(π) = 1. Substituting in terms to this equation we obtain

E_{s,a} (Q^π(s, a) − α log π(s, a) − c_s) ∇_θ log π(s, a) = 0,   (3)
where we have absorbed all constants into c ∈ R^{|S|}. Any solution π to this equation is strictly positive element-wise since it must lie in the domain of the entropy function. In the tabular case π is represented by a single number for each state and action pair, and the gradient of the policy with respect to the parameters is the indicator function, i.e., ∇_{θ(t,b)} π(s, a) = 1_{(t,b)=(s,a)}. From this we obtain Q^π(s, a) − α log π(s, a) − c_s = 0 for each s (assuming that the measure d^π(s) > 0). Multiplying by π(s, a) and summing over a ∈ A we get c_s = α H^π(s) + V^π(s). Substituting c into equation (3) we have the following formulation for the policy:
π(s, a) = exp(A^π(s, a)/α − H^π(s)),   (4)

for all s ∈ S and a ∈ A. In other words, the policy at the fixed point is a softmax over the advantage function induced by that policy, where the regularization parameter α can be interpreted as the temperature. Therefore, we can use the policy to derive an estimate of the Q-values,

Q̃^π(s, a) = Ã^π(s, a) + V^π(s) = α(log π(s, a) + H^π(s)) + V^π(s).   (5)
With this we can rewrite the gradient update (2) as
Δθ ∝ E_{s,a} (Q^π(s, a) − Q̃^π(s, a)) ∇_θ log π(s, a),   (6)
since the update is unchanged by per-state constant offsets. When the policy is parameterized as a softmax, i.e., π(s, a) = exp(W(s, a)) / Σ_b exp(W(s, b)), the quantity W is sometimes referred to as the action preferences of the policy (Sutton & Barto, 1998, Chapter 6.6). Equation (4) states that the action preferences are equal to the Q-values scaled by 1/α, up to an additive per-state constant.
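A small sketch of this estimate (Equation 5) for a tabular softmax policy follows; the preferences, the value estimate, and the temperature are illustrative inputs.

```python
# A sketch of Equation 5: recovering Q-value estimates from a
# tabular softmax policy and a value estimate V.
import numpy as np

def q_from_policy(theta_s, V_s, alpha):
    # theta_s: action preferences at a state; V_s: value estimate there.
    pi = np.exp(theta_s - theta_s.max())
    pi /= pi.sum()
    H = -(pi * np.log(pi)).sum()
    return alpha * (np.log(pi) + H) + V_s   # Qtilde^pi(s, .)

theta_s = np.array([1.0, 2.0, 0.5])
print(q_from_policy(theta_s, V_s=3.0, alpha=0.1))
```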
3.2 GENERAL CASE
Consider the following optimization problem:
minimize    E_{s,a} (q(s, a) − α log π(s, a))^2
subject to  Σ_a π(s, a) = 1,   s ∈ S,                 (7)

over the variable θ, which parameterizes π, where we consider both the measure in the expectation and the values q(s, a) to be independent of θ. The optimality condition for this problem is

E_{s,a} (q(s, a) − α log π(s, a) + c_s) ∇_θ log π(s, a) = 0,
where c ∈ R^{|S|} is the Lagrange multiplier associated with the constraint that the policy sums to one at each state. Comparing this to equation (3), we see that if q = Q^π and the measure in the expectation is the same, then they describe the same set of fixed points. This suggests an interpretation of the fixed points of the regularized policy gradient as a regression of the log-policy onto the Q-values. In the general case of using an approximation architecture, we can interpret equation (3) as indicating that the error between Q^π and Q̃^π is orthogonal to ∇_{θ_i} log π for each i, and so cannot be reduced further by changing the parameters, at least locally. In this case equation (4) is unlikely to hold at a solution to (3); however, with a good approximation architecture it may hold approximately, so that we can derive an estimate of the Q-values from the policy using equation (5). We will use this estimate of the Q-values in the next section.
3.3 CONNECTION TO ACTION-VALUE METHODS
The previous section made a connection between regularized policy gradient and a regression onto the Q-values at the fixed point. In this section we go one step further, showing that actor-critic methods can be interpreted as action-value fitting methods, where the exact method depends on the choice of critic.
Actor-critic methods. Consider an agent using an actor-critic method to learn both a policy π and a value function V. At any iteration k, the value function V^k has parameters w^k, and the policy is of the form
π^k(s, a) = exp(W^k(s, a)/α) / Σ_b exp(W^k(s, b)/α),   (8)
where W^k is parameterized by θ^k and α > 0 is the entropy regularization penalty. In this case ∇_θ log π^k(s, a) = (1/α)(∇_θ W^k(s, a) − Σ_b π^k(s, b) ∇_θ W^k(s, b)). Using equation (2) the parameters are updated as
Δθ ∝ E_{s,a} δ_ac (∇_θ W^k(s, a) − Σ_b π^k(s, b) ∇_θ W^k(s, b)),   Δw ∝ E_{s,a} δ_ac ∇_w V^k(s),   (9)
where δ_ac is the critic minus baseline term, which depends on the variant of actor-critic being used (see the remark below).
Action-value methods. Compare this to the case where an agent is learning Q-values with a dueling architecture (Wang et al., 2016), which at iteration k is given by

Q^k(s, a) = Y^k(s, a) − Σ_b µ(s, b) Y^k(s, b) + V^k(s),

where µ is a probability distribution, Y^k is parameterized by θ^k, V^k is parameterized by w^k, and the exploration policy is Boltzmann with temperature α, i.e.,

π^k(s, a) = exp(Y^k(s, a)/α) / Σ_b exp(Y^k(s, b)/α).   (10)
In action-value fitting methods, at each iteration the parameters are updated to reduce some error, where the update is given by
Δθ ∝ E_{s,a} δ_av (∇_θ Y^k(s, a) − Σ_b µ(s, b) ∇_θ Y^k(s, b)),   Δw ∝ E_{s,a} δ_av ∇_w V^k(s),   (11)
where δ_av is the action-value error term and depends on which algorithm is being used (see the remark below).
Equivalence. The two policies (8) and (10) are identical if W^k = Y^k for all k. Since W^0 and Y^0 can be initialized and parameterized in the same way, and assuming the two value function estimates are initialized and parameterized in the same way, all that remains is to show that the updates in equations (9) and (11) are identical. Comparing the two, and assuming that δ_ac = δ_av (see the remark below), we see that the only difference is that the measure is not fixed in (9), but is equal to the current policy and therefore changes after each update. Replacing µ in (11) with π^k makes the updates identical, in which case W^k = Y^k at all iterations and the two policies (8) and (10) are always the same. In other words, the slightly modified action-value method is equivalent to an actor-critic policy gradient method, and vice-versa (modulo using the non-discounted distribution of states, as discussed in Section 2.2). In particular, regularized policy gradient methods can be interpreted as advantage function learning techniques, since at the optimum the quantity W(s, a) − Σ_b π(s, b) W(s, b) = α(log π(s, a) + H^π(s)) will be equal to the advantage function values in the tabular case.
Remark. In SARSA (Rummery & Niranjan, 1994) we set δ_av = r(s, a) + γ Q(s', b) − Q(s, a), where b is the action selected at state s', which would be equivalent to using a bootstrap critic in equation (6) where Q̂^π(s, a) = r(s, a) + γ Q(s', b). In expected-SARSA (Sutton & Barto, 1998, Exercise 6.10; Van Seijen et al., 2009) we take the expectation over the Q-values at the next state, so δ_av = r(s, a) + γ V(s') − Q(s, a). This is equivalent to TD-actor-critic (Konda & Tsitsiklis, 2003), where we use the value function to provide the critic, which is given by Q̂^π(s, a) = r(s, a) + γ V(s'). In Q-learning δ_av = r(s, a) + γ max_b Q(s', b) − Q(s, a), which would be equivalent to using an optimizing critic that bootstraps using the max Q-value at the next state, i.e., Q̂^π(s, a) = r(s, a) + γ max_b Q(s', b). In REINFORCE the critic is the Monte Carlo return from that state on, i.e., Q̂^π(s, a) = E(Σ_{t=0}^∞ γ^t r_t | s_0 = s, a_0 = a). If the return trace is truncated and a bootstrap is performed after n steps, this is equivalent to n-step SARSA or n-step Q-learning, depending on the form of the bootstrap (Peng & Williams, 1996).
3.4 BELLMAN RESIDUAL
In this section we show that ||T* Q^{π_α} − Q^{π_α}||_∞ → 0 with decreasing regularization penalty α, where π_α is the policy defined by (4) and Q^{π_α} is the corresponding Q-value function, both of which are functions of α. We shall show that it converges to zero by bounding the sequence below by zero and above with a sequence that converges to zero. First, we have that T* Q^{π_α} ≥ T^{π_α} Q^{π_α} = Q^{π_α}, since T* is greedy with respect to the Q-values, so T* Q^{π_α} − Q^{π_α} ≥ 0. Now, to bound from above we need the fact that π_α(s, a) = exp(Q^{π_α}(s, a)/α) / Σ_b exp(Q^{π_α}(s, b)/α) ≤ exp((Q^{π_α}(s, a) − max_c Q^{π_α}(s, c))/α). Using this we have
0 ≤ T*Q^{π_α}(s, a) - Q^{π_α}(s, a) = T*Q^{π_α}(s, a) - T^{π_α}Q^{π_α}(s, a)
= γ E_{s'}( max_c Q^{π_α}(s', c) - Σ_b π_α(s', b) Q^{π_α}(s', b) )
= γ E_{s'} Σ_b π_α(s', b)( max_c Q^{π_α}(s', c) - Q^{π_α}(s', b) )
≤ E_{s'} Σ_b exp((Q^{π_α}(s', b) - max_c Q^{π_α}(s', c))/α)( max_c Q^{π_α}(s', c) - Q^{π_α}(s', b) )
= E_{s'} Σ_b f_α( max_c Q^{π_α}(s', c) - Q^{π_α}(s', b) ),
where we define f_α(x) = x exp(-x/α). To conclude our proof we use the fact that f_α(x) ≤ sup_x f_α(x) = f_α(α) = αe^{-1}, which yields
0 ≤ T*Q^{π_α}(s, a) - Q^{π_α}(s, a) ≤ |A| α e^{-1}
for all (s, a), and so the Bellman residual converges to zero with decreasing α. In other words, for small enough α (which is the regime we are interested in) the Q-values induced by the policy will have a small Bellman residual. Moreover, this implies that lim_{α→0} Q^{π_α} = Q*, as one might expect.
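The bound can be checked numerically. The sketch below, on a small random MDP of our own construction, approximates the fixed point π_α by alternating exact policy evaluation with a Boltzmann update and then compares the Bellman residual against |A|αe^{-1}; the iteration scheme here is an assumption made for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel P[s, a, s']
R = rng.random((S, A))                       # reward table

def policy_eval(pi):
    # exact Q^pi via the linear system V = r_pi + gamma * P_pi V
    P_pi = np.einsum('sab,sa->sb', P, pi)
    r_pi = (pi * R).sum(axis=1)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return R + gamma * P @ V

for alpha in [1.0, 0.1, 0.01]:
    pi = np.full((S, A), 1.0 / A)
    for _ in range(500):                     # find pi proportional to exp(Q^pi/alpha)
        Q = policy_eval(pi)
        z = np.exp((Q - Q.max(axis=1, keepdims=True)) / alpha)
        pi = z / z.sum(axis=1, keepdims=True)
    Q = policy_eval(pi)
    TQ = R + gamma * P @ Q.max(axis=1)       # T* Q
    print(alpha, (TQ - Q).max(), A * alpha * np.exp(-1))
```

Under these assumptions the printed residual should fall under the printed bound for each α, shrinking as α decreases.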
# 4 PGQL
In this section we introduce the main contribution of the paper, which is a technique to combine policy gradient with Q-learning. We call our technique "PGQL", for policy gradient and Q-learning. In the previous section we showed that the Bellman residual is small at the fixed point of a regularized
policy gradient algorithm when the regularization penalty is sufficiently small. This suggests adding an auxiliary update where we explicitly attempt to reduce the Bellman residual as estimated from the policy, i.e., a hybrid between policy gradient and Q-learning. We first present the technique in a batch update setting, with perfect knowledge of Q^π (i.e., a perfect critic). Later we discuss the practical implementation of the technique in a reinforcement learning setting with function approximation, where the agent generates experience from interacting with the environment and needs to estimate a critic simultaneously with the policy.
4.1 PGQL UPDATE
Define the estimate of Q using the policy as
Q̃^π(s, a) = α(log π(s, a) + H^π(s)) + V(s), (12)

where V has parameters w and is not necessarily V^π as it was in equation (5). In (2) it was unnecessary to estimate the constant since the update was invariant to constant offsets, although in practice it is often estimated for use in a variance reduction technique (Williams, 1992; Sutton et al., 1999).
Since we know that at the fixed point the Bellman residual will be small for small α, we can consider updating the parameters to reduce the Bellman residual in a fashion similar to Q-learning, i.e.,
Δθ ∝ E_{s,a}(T*Q̃^π(s, a) - Q̃^π(s, a)) ∇_θ log π(s, a),   Δw ∝ E_{s,a}(T*Q̃^π(s, a) - Q̃^π(s, a)) ∇_w V(s). (13)

This is Q-learning applied to a particular form of the Q-values, and can also be interpreted as an actor-critic algorithm with an optimizing (and therefore biased) critic.
The full scheme simply combines two updates to the policy, the regularized policy gradient update (2) and the Q-learning update (13). Assuming we have an architecture that provides a policy π, a value function estimate V, and an action-value critic Q^π, then the parameter updates can be written as (suppressing the (s, a) notation)
Δθ ∝ (1 - η) E_{s,a}(Q^π - Q̃^π) ∇_θ log π + η E_{s,a}(T*Q̃^π - Q̃^π) ∇_θ log π,
Δw ∝ (1 - η) E_{s,a}(Q^π - Q̃^π) ∇_w V + η E_{s,a}(T*Q̃^π - Q̃^π) ∇_w V, (14)
where η ∈ [0, 1] is a weighting parameter that controls how much of each update we apply. In the case where η = 0 the above scheme reduces to entropy regularized policy gradient. If η = 1 then it becomes a variant of (batch) Q-learning with an architecture similar to the dueling architecture (Wang et al., 2016). Intermediate values of η produce a hybrid between the two. Examining the update we see that two error terms are trading off. The first term encourages consistency with the critic, and the second term encourages optimality over time. However, since we know that under standard policy gradient the Bellman residual will be small, it follows that adding a term that reduces that error should not make much difference at the fixed point. That is, the updates should be complementary, pointing in the same general direction, at least far away from a fixed point. This update can also be interpreted as an actor-critic update where the critic is given by a weighted combination of a standard critic and an optimizing critic. Yet another interpretation of the update is a combination of expected-SARSA and Q-learning, where the Q-values are parameterized as the sum of an advantage function and a value function.
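A tabular sketch of the combined update (14) may help fix ideas. Here W holds policy logits (π = softmax(W/α)), Q_pi is assumed to be supplied by an exact critic, and R, P are a known reward table and transition kernel; all names are illustrative, not the authors' implementation.

```python
import numpy as np

def pgql_update(W, V, Q_pi, R, P, alpha=0.1, gamma=0.99, eta=0.5, lr=0.1):
    z = np.exp((W - W.max(axis=1, keepdims=True)) / alpha)
    pi = z / z.sum(axis=1, keepdims=True)
    H = -(pi * np.log(pi)).sum(axis=1, keepdims=True)       # entropy H^pi(s)
    Q_tilde = alpha * (np.log(pi) + H) + V[:, None]         # equation (12)
    TQ_tilde = R + gamma * P @ Q_tilde.max(axis=1)          # T* Q_tilde
    delta = (1 - eta) * (Q_pi - Q_tilde) + eta * (TQ_tilde - Q_tilde)
    # E_a[ delta * grad_W log pi ] for tabular softmax logits
    dW = pi * (delta - (pi * delta).sum(axis=1, keepdims=True)) / alpha
    dV = (pi * delta).sum(axis=1)                           # grad_V Q_tilde = 1
    return W + lr * dW, V + lr * dV
```

Setting eta to 0 or 1 recovers the two special cases discussed in the text.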
# 4.2 PRACTICAL IMPLEMENTATION
The updates presented in (14) are batch updates, with an exact critic Q^π. In practice we want to run this scheme online, with an estimate of the critic, where we don't necessarily apply the policy gradient update at the same time or from the same data source as the Q-learning update.
Our proposed scheme is as follows. One or more agents interact with an environment, encountering states and rewards and performing on-policy updates of (shared) parameters using an actor-critic algorithm where both the policy and the critic are being updated online. Each time an agent receives new data from the environment it writes it to a shared replay memory buffer. Periodically a separate learner process samples from the replay buffer and performs a step of Q-learning on the parameters of the policy using (13). This scheme has several advantages.
Figure 1: Grid world experiment. (a) Grid world. (b) Performance versus agent steps in grid world.
The critic can accumulate the Monte Carlo return over many time periods, allowing us to spread the influence of a reward received in the future backwards in time. Furthermore, the replay buffer can be used to store and replay "important" past experiences by prioritizing those samples (Schaul et al., 2015). The use of the replay buffer can help to reduce problems associated with correlated training data, as generated by an agent exploring an environment where the states are likely to be similar from one time step to the next. Also the use of replay can act as a kind of regularizer, preventing the policy from moving too far from satisfying the Bellman equation, thereby improving stability, in a similar sense to that of a policy "trust-region" (Schulman et al., 2015). Moreover, by batching up replay samples to update the network we can leverage GPUs to perform the updates quickly; this is in comparison to pure policy gradient techniques, which are generally implemented on CPU (Mnih et al., 2016).
Since we perform Q-learning using samples from a replay buffer that were generated by an old policy we are performing (slightly) off-policy learning. However, Q-learning is known to converge to the optimal Q-values in the off-policy tabular case (under certain conditions) (Sutton & Barto, 1998), and has shown good performance off-policy in the function approximation case (Mnih et al., 2013).
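A simplified, single-process sketch of the scheme described above follows; env, policy, actor_critic_step, and q_learning_step are placeholders for the environment and the two updates, and the single replay deque stands in for the shared buffer written to by many actor-learners.

```python
import random
from collections import deque

def train(env, policy, actor_critic_step, q_learning_step,
          steps=50_000, q_every=4, batch_size=32, buffer_size=100_000):
    replay = deque(maxlen=buffer_size)       # keeps the most recent transitions
    s = env.reset()
    for t in range(steps):
        a = policy.sample(s)
        s_next, r, done = env.step(a)
        actor_critic_step((s, a, r, s_next, done))   # on-policy update
        replay.append((s, a, r, s_next, done))       # write to replay memory
        if t % q_every == 0 and len(replay) >= batch_size:
            batch = random.sample(replay, batch_size)
            q_learning_step(batch)                   # off-policy update (13)
        s = env.reset() if done else s_next
```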
4.3 MODIFIED FIXED POINT
The PGQL updates in equation (14) have modified the fixed point of the algorithm, so the analysis of §3 is no longer valid. Considering the tabular case once again, it is still the case that the policy π ∝ exp(Q̃^π/α) as before, where Q̃^π is defined by (12); however, where previously the fixed point satisfied Q̃^π = Q^π, with Q^π corresponding to the Q-values induced by π, now we have
Q̃^π = (1 - η) Q^π + η T*Q̃^π, (15)

or equivalently, if η < 1, we have Q̃^π = (1 - η) Σ_{k=0}^∞ η^k (T*)^k Q^π. In the appendix we show that ‖Q̃^π - Q^π‖ → 0 and that ‖T*Q^π - Q^π‖ → 0 with decreasing α in the tabular case. That is, for small α the induced Q-values and the Q-values estimated from the policy are close, and we still have the guarantee that in the limit the Q-values are optimal. In other words, we have not perturbed the policy very much by the addition of the auxiliary update.
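Since T* is a γ-contraction, the map Q ↦ (1 - η)Q^π + ηT*Q is itself a contraction for η ≤ 1, so the modified fixed point (15) can be computed in the tabular case by simple iteration; the following sketch (with illustrative R, P tables of shapes (S, A) and (S, A, S)) does exactly that.

```python
import numpy as np

def solve_q_tilde(Q_pi, R, P, gamma=0.9, eta=0.5, iters=200):
    # fixed-point iteration for Q_tilde = (1 - eta) Q_pi + eta T* Q_tilde
    Q_tilde = Q_pi.copy()
    for _ in range(iters):
        TQ = R + gamma * P @ Q_tilde.max(axis=1)   # T* Q_tilde
        Q_tilde = (1 - eta) * Q_pi + eta * TQ
    return Q_tilde
```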
# 5 NUMERICAL EXPERIMENTS
5.1 GRID WORLD
In this section we discuss the results of running PGQL on a toy 4 by 6 grid world, as shown in Figure 1a. The agent always begins in the square marked "S" and the episode continues until it reaches the square marked "T", upon which it receives a reward of 1. At all other times it receives no reward. For this experiment we chose regularization parameter α = 0.001 and discount factor γ = 0.95.
Figure 1b shows the performance traces of three different agents learning in the grid world, running from the same initial random seed. The lines show the true expected performance of the policy
Figure 2: PGQL network augmentation.
from the start state, as calculated by value iteration after each update. The blue line is standard TD-actor-critic (Konda & Tsitsiklis, 2003), where we maintain an estimate of the value function and use that to generate an estimate of the Q-values for use as the critic. The green line is Q-learning, where at each step an update is performed using data drawn from a replay buffer of prior experience and where the Q-values are parameterized as in equation (12). The policy is a softmax over the Q-value estimates with temperature α. The red line is PGQL, which at each step first performs the TD-actor-critic update, then performs the Q-learning update as in (14).
The grid world was totally deterministic, so the step size could be large and was chosen to be 1. A step size any larger than this made the pure actor-critic agent fail to learn, but both PGQL and Q-learning could handle some increase in the step size, possibly due to the stabilizing effect of using replay.
It is clear that PGQL outperforms the other two. At any point along the x-axis the agents have seen the same amount of data, which would indicate that PGQL is more data efficient than either of the vanilla methods since it has the highest performance at practically every point.
# 5.2 ATARI
We tested our algorithm on the full suite of Atari benchmarks (Bellemare et al., 2012), using a neural network to parameterize the policy. In Figure 2 we show how a policy network can be augmented with a parameterless additional layer which outputs the Q-value estimate. With the exception of the extra layer, the architecture and parameters were chosen to exactly match the asynchronous advantage actor-critic (A3C) algorithm presented in Mnih et al. (2016), which in turn reused many of the settings from Mnih et al. (2015). Specifically we used the exact same learning rate, number of workers, entropy penalty, bootstrap horizon, and network architecture. This allows a fair comparison between A3C and PGQL, since the only difference is the addition of the Q-learning step. Our technique augmented A3C with the following change: after each actor-learner has accumulated the gradient for the policy update, it performs a single step of Q-learning from replay data as described in equation (13), where the minibatch size was 32 and the Q-learning learning rate was chosen to be 0.5 times the actor-critic learning rate (we mention learning rate ratios rather than a choice of η in (14) because the updates happen at different frequencies and from different data sources). Each actor-learner thread maintained a replay buffer of the last 100k transitions seen by that thread. We ran the learning for 50 million agent steps (200 million Atari frames), as in (Mnih et al., 2016).
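A minimal sketch of that parameterless layer: the Q-value estimate is recovered from the policy logits and the value head through equation (12), so no additional weights are introduced. The function below is an illustrative numpy version, not the authors' network code.

```python
import numpy as np

def q_from_policy(logits, V, alpha=0.1):
    # pi = softmax(logits); Q = alpha * (log pi + H^pi) + V, per equation (12)
    shifted = logits - logits.max(axis=-1, keepdims=True)
    pi = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    log_pi = np.log(pi)
    H = -(pi * log_pi).sum(axis=-1, keepdims=True)
    return alpha * (log_pi + H) + V   # V broadcasts over the action axis
```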
In the results we compare against both A3C and a variant of asynchronous deep Q-learning. The changes we made to Q-learning are to make it similar to our method, with some tuning of the hyperparameters for performance. We use the exact same network, the exploration policy is a softmax over the Q-values with a temperature of 0.1, and the Q-values are parameterized as in equation (12) (i.e., similar to the dueling architecture (Wang et al., 2016)), where α = 0.1. The Q-value updates are performed every 4 steps with a minibatch of 32 (roughly 5 times more frequently than PGQL). For each method, all games used identical hyperparameters.
The results across all games are given in Table 3 in the appendix. All scores have been normalized by subtracting the average score achieved by an agent that takes actions uniformly at random.
Each game was tested 5 times per method with the same hyperparameters but with different random seeds. The scores presented correspond to the best score obtained by any run from a random start evaluation condition (Mnih et al., 2016). Overall, PGQL performed best in 34 games, A3C performed best in 7 games, and Q-learning was best in 10 games. In 6 games two or more methods tied. In Tables 1 and 2 we give the mean and median normalized scores as a percentage of an expert human normalized score across all games for each tested algorithm from random and human-start conditions respectively. In a human-start condition the agent takes over control of the game from randomly selected human-play starting points, which generally leads to lower performance since the agent may not have found itself in that state during training. In both cases, PGQL has both the highest mean and median, and the median score exceeds 100%, the human performance threshold.
It is worth noting that PGQL was the worst performer in only one game; in cases where it was not the outright winner it was generally somewhere in between the performance of the other two algorithms. Figure 3 shows some sample traces of games where PGQL was the best performer. In these cases PGQL has far better data efficiency than the other methods. In Figure 4 we show some of the games where PGQL under-performed. In practically every case where PGQL did not perform well it had better data efficiency early on in the learning, but performance saturated or collapsed. We hypothesize that in these cases the policy has reached a local optimum, or over-fit to the early data, and might perform better were the hyperparameters to be tuned.
| | A3C | Q-learning | PGQL |
|---|---|---|---|
| Mean | 636.8 | 756.3 | 877.2 |
| Median | 107.3 | 58.9 | 145.6 |
Table 1: Mean and median normalized scores for the Atari suite from random starts, as a percentage of human normalized score.
| | A3C | Q-learning | PGQL |
|---|---|---|---|
| Mean | 266.6 | 246.6 | 416.7 |
| Median | 58.3 | 30.5 | 103.3 |
Table 2: Mean and median normalized scores for the Atari suite from human starts, as a percentage of human normalized score.
Figure 3: Some Atari runs where PGQL performed well (score versus agent steps on assault, battle zone, chopper command, and yars revenge for A3C, Q-learning, and PGQL).
Figure 4: Some Atari runs where PGQL performed poorly (score versus agent steps on breakout, hero, qbert, and up n down for A3C, Q-learning, and PGQL).
# 6 CONCLUSIONS
We have made a connection between the fixed point of regularized policy gradient techniques and the Q-values of the resulting policy. For small regularization (the usual case) we have shown that the Bellman residual of the induced Q-values must be small. This leads us to consider adding an auxiliary update to the policy gradient which is related to the Bellman residual evaluated on a transformation of the policy. This update can be performed off-policy, using stored experience. We call the resulting method "PGQL", for policy gradient and Q-learning. Empirically, we observe better data efficiency and stability of PGQL when compared to actor-critic or Q-learning alone. We verified the performance of PGQL on a suite of Atari games, where we parameterize the policy using a neural network, and achieved performance exceeding that of both A3C and Q-learning.
# 7 ACKNOWLEDGMENTS
We thank Joseph Modayil for many comments and suggestions on the paper, and Hubert Soyer for help with performance evaluation. We would also like to thank the anonymous reviewers for their constructive feedback.
# REFERENCES
Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.

Mohammad Gheshlaghi Azar, Vicenç Gómez, and Hilbert J Kappen. Dynamic policy programming. Journal of Machine Learning Research, 13(Nov):3207-3245, 2012.
J Andrew Bagnell and Jeff Schneider. Covariant policy search. In IJCAI, 2003.
Leemon C Baird III. Advantage updating. Technical Report WL-TR-93-1146, Wright-Patterson Air Force Base Ohio: Wright Laboratory, 1993.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Richard Bellman. Dynamic programming. Princeton University Press, 1957.

Dimitri P Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific, 2005.

Dimitri P Bertsekas and John N Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
Thomas Degris, Martha White, and Richard S Sutton. Off-policy actor-critic. 2012.
Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1512.08562, 2015.
Matthew Hausknecht and Peter Stone. On-policy vs. off-policy updates for deep reinforcement learning. Deep Reinforcement Learning: Frontiers and Challenges, IJCAI 2016 Workshop, 2016.
Nicolas Heess, David Silver, and Yee Whye Teh. Actor-critic reinforcement learning with energy-based policies. In JMLR: Workshop and Conference Proceedings 24, pp. 43-57, 2012.

Sham Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, volume 14, pp. 1531-1538, 2001.

Vijay R Konda and John N Tsitsiklis. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143-1166, 2003.
Lucas Lehnert and Doina Precup. Policy gradient methods for off-policy control. arXiv preprint arXiv:1512.04105, 2015.
Sergey Levine and Vladlen Koltun. Guided policy search. In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 1-9, 2013.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 02 2015. URL http://dx.doi.org/10.1038/nature14236.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. Reward augmented maximum likelihood for neural structured prediction. arXiv preprint arXiv:1609.00150, 2016.
Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
Edwin Pednault, Naoki Abe, and Bianca Zadrozny. Sequential cost-sensitive decision making with reinforcement learning. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 259-268. ACM, 2002.

Jing Peng and Ronald J Williams. Incremental multi-step Q-learning. Machine Learning, 22(1-3):283-290, 1996.

Jan Peters, Katharina Mülling, and Yasemin Altun. Relative entropy policy search. In AAAI. Atlanta, 2010.

Martin Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pp. 317-328. Springer Berlin Heidelberg, 2005.
Gavin A Rummery and Mahesan Niranjan. On-line Q-learning using connectionist systems. 1994.
Brian Sallans and Geoffrey E Hinton. Reinforcement learning with factored states and actions. Journal of Machine Learning Research, 5(Aug):1063-1088, 2004.
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1889-1897, 2015.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML), pp. 387-395, 2014.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
R. Sutton and A. Barto. Reinforcement Learning: an Introduction. MIT Press, 1998.
Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 99, pp. 1057-1063, 1999.

Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58-68, 1995.

Philip Thomas. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pp. 441-448, 2014.

Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), pp. 2094-2100, 2016.
Harm Van Seijen, Hado Van Hasselt, Shimon Whiteson, and Marco Wiering. A theoretical and empirical analysis of expected SARSA. In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp. 177-184. IEEE, 2009.

Yin-Hao Wang, Tzuu-Hseng S Li, and Chih-Jui Lin. Backward Q-learning: The combination of SARSA algorithm and Q-learning. Engineering Applications of Artificial Intelligence, 26(9):2184-2193, 2013.

Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1995-2003, 2016.
Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241-268, 1991.
# A PGQL BELLMAN RESIDUAL
Here we demonstrate that in the tabular case the Bellman residual of the induced Q-values for the PGQL updates of (14) converges to zero as the temperature α decreases, which is the same guarantee as vanilla regularized policy gradient (2). We will use the notation that π_α is the policy at the fixed point of the PGQL updates (14) for some α, i.e., π_α ∝ exp(Q̃^{π_α}/α), with induced Q-value function Q^{π_α}. First, note that we can apply the same argument as in §3.4 to show that lim_{α→0} ‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α}‖ = 0 (the only difference is that we lack the property that Q̃^{π_α} is the fixed point of T^{π_α}). Secondly, from equation (15) we can write Q̃^{π_α} - Q^{π_α} = η(T*Q̃^{π_α} - Q^{π_α}). Combining these two facts we have
‖Q̃^{π_α} - Q^{π_α}‖ = η‖T*Q̃^{π_α} - Q^{π_α}‖
= η‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α} + T^{π_α}Q̃^{π_α} - Q^{π_α}‖
≤ η(‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α}‖ + ‖T^{π_α}Q̃^{π_α} - T^{π_α}Q^{π_α}‖)
≤ η(‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α}‖ + γ‖Q̃^{π_α} - Q^{π_α}‖)
≤ η/(1 - γ) ‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α}‖,
and so ‖Q̃^{π_α} - Q^{π_α}‖ → 0 as α → 0. Using this fact we have
‖T*Q̃^{π_α} - Q̃^{π_α}‖ = ‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α} + T^{π_α}Q̃^{π_α} - Q^{π_α} + Q^{π_α} - Q̃^{π_α}‖
≤ ‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α}‖ + ‖T^{π_α}Q̃^{π_α} - T^{π_α}Q^{π_α}‖ + ‖Q^{π_α} - Q̃^{π_α}‖
≤ ‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α}‖ + (1 + γ)‖Q̃^{π_α} - Q^{π_α}‖
≤ 3/(1 - γ) ‖T*Q̃^{π_α} - T^{π_α}Q̃^{π_α}‖,
which therefore also converges to zero in the limit. Finally we obtain
TQ" â Qe) = |T*Qt âT2Q% + T*Qe â Ge + Qe = Qâ¢| Il
= |T*Qt âT2Q% + T*Qe â Ge + Qe = Qâ¢| T*Qt â T*Q*|| + ||T*Q2 â Qe || + ]Q7* â Q7|| (L+ V[lQ7 â Q* || + |T*Q7> â Q⢠|], IAIA Il
which combined with the two previous results implies that lim_{α→0} ‖T*Q^{π_α} - Q^{π_α}‖ = 0, as before.
# B ATARI SCORES
| Game | A3C | Q-learning | PGQL |
|---|---|---|---|
| alien | 38.43 | 25.53 | 46.70 |
| amidar | 68.69 | 12.29 | 71.00 |
| assault | 854.64 | 1695.21 | 2802.87 |
| asterix | 191.69 | 98.53 | 3790.08 |
| asteroids | 24.37 | 5.32 | 50.23 |
| atlantis | 15496.01 | 13635.88 | 16217.49 |
| bank heist | 210.28 | 91.80 | 212.15 |
| battle zone | 21.63 | 2.89 | 52.00 |
| beam rider | 59.55 | 79.94 | 155.71 |
| berzerk | 79.38 | 55.55 | 92.85 |
| bowling | 2.70 | -7.09 | 3.85 |
| boxing | 510.30 | 299.49 | 902.77 |
| breakout | 2341.13 | 3291.22 | 2959.16 |
| centipede | 50.22 | 105.98 | 73.88 |
| chopper command | 61.13 | 19.18 | 162.93 |
| crazy climber | 510.25 | 189.01 | 476.11 |
| defender | 475.93 | 58.94 | 911.13 |
| demon attack | 4027.57 | 3449.27 | 3994.49 |
| double dunk | 1250.00 | 91.35 | 1375.00 |
| enduro | 9.94 | 9.94 | 9.94 |
| fishing derby | 140.84 | -14.48 | 145.57 |
| freeway | -0.26 | -0.13 | -0.13 |
| frostbite | 5.85 | 10.71 | 5.71 |
| gopher | 429.76 | 9131.97 | 2060.41 |
| gravitar | 0.71 | 1.35 | 1.74 |
| hero | 145.71 | 15.47 | 92.88 |
| ice hockey | 62.25 | 21.57 | 76.96 |
| jamesbond | 133.90 | 110.97 | 142.08 |
| kangaroo | -0.94 | -0.94 | -0.75 |
| krull | 736.30 | 3586.30 | 557.44 |
| kung fu master | 182.34 | 260.14 | 254.42 |
| montezuma revenge | -0.49 | 1.80 | -0.48 |
| ms pacman | 17.91 | 10.71 | 25.76 |
| name this game | 102.01 | 113.89 | 188.90 |
| phoenix | 447.05 | 812.99 | 1507.07 |
| pitfall | 5.48 | 5.49 | 5.49 |
| pong | 116.37 | 24.96 | 116.37 |
| private eye | -0.88 | 0.03 | -0.04 |
| qbert | 186.91 | 159.71 | 136.17 |
| riverraid | 107.25 | 65.01 | 128.63 |
| road runner | 603.11 | 179.69 | 519.51 |
| robotank | 15.71 | 134.87 | 71.50 |
| seaquest | 3.81 | 3.71 | 5.88 |
| skiing | 54.27 | 54.10 | 54.16 |
| solaris | 27.05 | 34.61 | 28.66 |
| space invaders | 188.65 | 146.39 | 608.44 |
| star gunner | 756.60 | 205.70 | 977.99 |
| surround | 28.29 | -1.51 | 78.15 |
| tennis | 145.58 | -15.35 | 145.58 |
| time pilot | 270.74 | 91.59 | 438.50 |
| tutankham | 224.76 | 110.11 | 239.58 |
| up n down | 1637.01 | 148.10 | 1484.43 |
| venture | -1.76 | -1.76 | -1.76 |
| video pinball | 3007.37 | 4325.02 | 4743.68 |
| wizard of wor | 150.52 | 88.07 | 325.39 |
| yars revenge | 81.54 | 23.39 | 252.83 |
| zaxxon | 4.01 | 44.11 | 224.89 |
Table 3: Normalized scores for the Atari suite from random starts, as a percentage of human normalized score.
| { "id": "1602.01783" } |
1611.01673 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 |

arXiv:1611.01673v3 [cs.LG] 2 Mar 2017
# GENERATIVE MULTI-ADVERSARIAL NETWORKS
# Ishan Durugkar*, Ian Gemp*, Sridhar Mahadevan

College of Information and Computer Sciences
University of Massachusetts, Amherst
Amherst, MA 01060, USA
{idurugkar, imgemp, mahadeva}@cs.umass.edu
# ABSTRACT
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
# 1 INTRODUCTION
Generative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producing generative model by way of a two-player minimax game. One player, the generator, attempts to generate realistic data samples by transforming noisy samples, z, drawn from a simple distribution (e.g., z ~ N(0, 1)) using a transformation function Gg(z) with learned weights, 9. The generator receives feedback as to how realistic its synthetic sample is from another player, the discriminator, which attempts to discern between synthetic data samples produced by the generator and samples drawn from an actual dataset using a function D,,(«) with learned weights, w.
The GAN framework is one of the more recent successes in a line of research on adversarial training in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games between learners are carefully crafted so that Nash equilibria coincide with some set of desired optimality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun et al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of application domains including learning censored representations (Edwards & Storkey (2015)), imitating expert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending GANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014); Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning (Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015); Radford et al. (2015)) have shown promise as well.
Despite these successes, GANs are reputably difficult to train. While research is still underway to improve training techniques and heuristics (Salimans et al. (2016)), most approaches have focused on understanding and generalizing GANs theoretically with the aim of exploring more tractable formulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)).
In this paper, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4, we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants which range the role of the discriminator from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable. In Section 5, we define an intuitive metric (GMAM) to quantify GMAN
*Equal contribution
performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.
Contributions: To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model.
2 GENERATIVE ADVERSARIAL NETWORKS TO GMAN
The original formulation of a GAN is a minimax game between a generator, G_θ(z): z → x, and a discriminator, D_ω(x): x → [0, 1],
min_G max_{D∈D} V(D, G) = E_{x∼p_data(x)}[log(D(x))] + E_{z∼p_z(z)}[log(1 - D(G(z)))], (1)
where p_data(x) is the true data distribution and p_z(z) is a simple (usually fixed) distribution that is easy to draw samples from (e.g., N(0, 1)). We differentiate between the function space of discriminators, D, and elements of this space, D. Let p_G(x) be the distribution induced by the generator, G_θ(z). We assume D, G to be deep neural networks as is typically the case.
In their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, D* = arg max_D V(D, G), gradient descent on p_G(x) will recover the desired globally optimal solution, p_G(x) = p_data(x), so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, log(1 - D(G(z))), with -log(D(G(z))) to enhance gradient signals at the start of the game; note this is no longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle, D*, to reduce the minimax game to a minimization over G only:
min_G V(D*, G) = min_G { C(G) = -log(4) + 2·JSD(p_data ‖ p_G) }, (2)
where JSD denotes Jensen-Shannon divergence. Minimizing C(G) necessarily minimizes JSD, however, we rarely know D* and so we instead minimize V(D, G), which is only a lower bound.
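For concreteness, a minimal numpy sketch of the two sides of the minimax objective (1) on a minibatch follows; d_real and d_fake are stand-in arrays for D(x) on data and D(G(z)) on generated samples, and the modified flag corresponds to the -log(D(G(z))) variant mentioned above.

```python
import numpy as np

def discriminator_value(d_real, d_fake):
    # D ascends on E[log D(x)] + E[log(1 - D(G(z)))]
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def generator_value(d_fake, modified=False):
    # G descends on E[log(1 - D(G(z)))], or on -E[log D(G(z))]
    # in the commonly used modified (non-zero-sum) objective
    if modified:
        return -np.mean(np.log(d_fake))
    return np.mean(np.log(1.0 - d_fake))
```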
This perspective of minimizing the distance between the distributions, p_data and p_G, motivated Li et al. (2015) to develop a generative model that matches all moments of p_G(x) with p_data(x) (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN (Zhao et al. (2016)), explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued "energies" as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more general divergences, specifically f-divergences and then Bregman-divergences respectively.
In general, these approaches focus on exploring fundamental reformulations of V (D, G). Similarly, our work focuses on a fundamental reformulation, however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of V.
2.1 GMAN: A MULTI-ADVERSARIAL EXTENSION
We propose introducing multiple discriminators, which brings with it a number of design possibilities. We explore approaches ranging between two extremes: 1) a more discriminating D (better approximating max_D V(D, G)) and 2) a D better matched to the generator's capabilities. Mathematically, we reformulate G's objective as min_G max F(V(D_1, G), . . . , V(D_N, G)) for different choices of F (see Figure 1). Each D_i is still expected to independently maximize its own V(D_i, G) (i.e. no cooperation). We sometimes abbreviate V(D_i, G) with V_i and F(V_1, . . . , V_N) with F_G(V_i).
# 3 A FORMIDABLE ADVERSARY
Here, we consider multi-discriminator variants that attempt to better approximate max_D V(D, G), providing a harsher critic to the generator.
Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. If F := max, G trains against the best discriminator. If F := mean, G trains against an ensemble. We explore other alternatives to F in Sections 4.1 & 4.4 that improve on both these options.
3.1 MAXIMIZING V(D,G)
For a fixed G, maximizing F_G(V_i) with F := max and N randomly instantiated copies of our discriminator is functionally equivalent to optimizing V (e.g., stochastic gradient ascent) with random restarts in parallel and then presenting max_{j∈{1,...,N}} V(D_j, G) as the loss to the generator, a very pragmatic approach to the difficulties presented by the non-convexity of V caused by the deep net. Requiring the generator to minimize the max forces G to generate high fidelity samples that must hold up under the scrutiny of all N discriminators, each potentially representing a distinct max.
In practice, max_{D_i∈D} V(D_i, G) is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing N discriminators affects the dynamics of the game, which affects the trajectories of the discriminators. This prevents us from claiming max{V_1(t), . . . , V_N(t)} > max{V_1'(t)} ∀t even if we initialize D_1(0) = D_1'(0), as it is unlikely that D_1(t) = D_1'(t) at some time t after the start of the game.
3.2 BOOSTING
We can also consider taking the max over N discriminators as a form of boosting for the discriminator's online classification problem (online because G can produce an infinite data stream). The boosted discriminator is given a sample x_t and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the N weaker D_i.
There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e. a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.
It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting max{V_i}. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning, which motivates the next section. Boosting results appear in Appendix A.7.
# 4 A FORGIVING TEACHER
The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of max_D V(D, G) to the generator. Our next perspective asks the question, "Is max_D V(D, G) too harsh a critic?"
4.1 SOFT-DISCRIMINATOR
In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered "realistic" by the discriminator's standards, and so the generator will receive uniformly negative feedback. This is problematic
because the information contained in the gradient derived from negative feedback only dictates where to drive down p_G(x), not specifically where to increase p_G(x). Moreover, driving down p_G(x) necessarily increases p_G(x) in other regions of X (to maintain ∫_X p_G(x) = 1), which may or may not contain samples from the true dataset (whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide a generator towards amassing p_G(x) in approximately correct regions of X.
For this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means parameterized by λ, where λ = 0 corresponds to the mean and the max is recovered as λ → ∞:
AM_soft(V, λ) = Σ_i w_i V_i, (3)

GM_soft(V, λ) = -exp( Σ_i w_i log(-V_i) ), (4)

HM_soft(V, λ) = -( Σ_i w_i (-V_i)^{-1} )^{-1}, (5)

where w_i = e^{λV_i} / Σ_j e^{λV_j} with λ ≥ 0, V_i < 0. Using a softmax also has the well known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing V(D̄, G) where D̄ is some convex combination of D_i (see Appendix A.5).
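A short numpy sketch of the softened means (3)-(5) follows; it assumes V collects the (negative) values V_i, and illustrates that λ = 0 recovers the plain mean while large λ approaches the max.

```python
import numpy as np

def soft_weights(V, lam):
    # w_i = exp(lam * V_i) / sum_j exp(lam * V_j), shifted for stability
    z = np.exp(lam * (V - V.max()))
    return z / z.sum()

def am_soft(V, lam):
    return np.dot(soft_weights(V, lam), V)                     # equation (3)

def gm_soft(V, lam):
    return -np.exp(np.dot(soft_weights(V, lam), np.log(-V)))   # equation (4)

def hm_soft(V, lam):
    return -1.0 / np.dot(soft_weights(V, lam), 1.0 / (-V))     # equation (5)

V = np.array([-0.2, -1.5, -3.0])   # example discriminator values (all < 0)
print(am_soft(V, 0.0), am_soft(V, 10.0), max(V))
```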
4.2 USING THE ORIGINAL MINIMAX OBJECTIVE
To illustrate the effect the softmax has on training, observe that the component of AM_soft(V, λ) relevant to generator training can be rewritten as

Σ_i w_i E_{x∼p_G(x)}[log(1 - D_i(x))] = E_{x∼p_G(x)}[log(z)], (6)

where z = Π_i (1 - D_i(x))^{w_i}. Note that the generator gradient, |∂log(z)/∂z|, is minimized at z = 1 over z ∈ (0, 1]. From this form, it is clear that z = 1 if and only if D_i = 0 ∀i, so G only receives a vanishing gradient if all D_i agree that the sample is fake; this is especially unlikely for large N. In other words, G only needs to fool a single D_i to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, log(1 - D). This is in contrast to the more popular -log(D) introduced to artificially enhance gradients at the start of training.
At the beginning of training, when max_{D_i} V(D_i, G) is likely too harsh a critic for the generator, we can set λ closer to zero to use the mean, increasing the odds of providing constructive feedback to the generator. In addition, the discriminators have the added benefit of functioning as an ensemble, reducing the variance of the feedback presented to the generator, which is especially important when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase λ to become more critical of the generator for more refined training.
4.3 MAINTAINING MULTIPLE HYPOTHESES
We argue for this ensemble approach on a more fundamental level as well. Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to p_data(x), if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from p_data(x); therefore, when computing expectations of V(D, G), we only draw samples from our finite dataset. This is equivalent to training a GAN with p_data(x) = p̂_data(x), which is a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let's assume we are training a discriminator and generator, each
'VeV= -y; Be oP: Thi âD;)= -t OP r for D, = 1, Dzx = 0. Our argument ignores OP e .
with infinite capacity. In this case, the global optimum (p_G(x) = p̂_data(x)) fails to capture any of the interesting structure from p_data(x), the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum.
Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corresponding probability mass function is given in light gray. After training GMAN, three discriminators converge to distinct local optima which implicitly define distributions over the data (red, blue, yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribution in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.
In practice, this degenerate result is avoided by employing learners with limited capacity and corrupting data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true p_data(x). Averaging over these multiple locally optimal discriminators increases the entropy of p̂_data(x) by diffusing the probability mass over the data space (see Figure 2 for an example).
4.4 AUTOMATING REGULATION
The problem of keeping the discriminator and generator in balance has been widely recognized in previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator collapse are not uncommon. In addition, the discriminator is often times able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high dimensional sample). Salimans et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively superior discriminator. Here, we explore an approach that enables the generator to automatically temper the performance of the discriminator when necessary, but still encourages the generator to challenge itself against more accurate adversaries. Specifically, we augment the generator objective:
min_{G, λ>0} F_G(V_i) - f(λ), (7)
where f(λ) is monotonically increasing in λ, which appears in the softmax equations, (3)-(5). In experiments, we simply set f(λ) = cλ with c a constant (e.g., 0.001). The generator is incentivized to increase λ to reduce its objective at the expense of competing against the best available adversary D* (see Appendix A.6).
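A sketch of this augmented objective, with the arithmetic softmax as F and f(λ) = cλ, is given below; only the scalar objective is shown, and the generator would descend on it in both G and λ.

```python
import numpy as np

def gman_star_objective(V, lam, c=0.001):
    # arithmetic softmax over the discriminator values V_i (all < 0)
    w = np.exp(lam * (V - V.max()))
    w = w / w.sum()
    F = np.dot(w, V)
    # paying f(lam) = c * lam rewards raising lam (a harsher, max-like critic)
    return F - c * lam
```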
# 5 EVALUATION
Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log likelihood estimates from Gaussian Parzen windows, which they admit, has high variance and is known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score, however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is given two generator, discriminator pairs (G_1, D_1) and (G_2, D_2), we should be able to learn their relative performance by judging each generator under the opponent's discriminator.
5.1 METRIC
In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators,
FE, V") FeV) FEA)! FB, (V2) GMAM = log ( (8)
where a and b refer to the two GMAN variants (see Section 3 for notation F_G(V_i)). The idea here is similar. If G_2 performs better than G_1 with respect to both D_1 and D_2, then GMAM > 0 (remember V < 0 always). If G_1 performs better in both cases, GMAM < 0; otherwise, the result is indeterminate.
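Read as code, the metric takes four scalars. In the sketch below, F_a_Ga denotes variant a's aggregate F^a evaluated with its own generator and F_a_Gb the same aggregate after swapping in variant b's generator (and symmetrically for b); the naming is ours, mirroring the reconstruction of (8) above.

```python
import numpy as np

def gmam(F_a_Gb, F_a_Ga, F_b_Ga, F_b_Gb):
    # all four aggregates are negative, so both ratios are positive
    return np.log((F_a_Gb / F_a_Ga) / (F_b_Ga / F_b_Gb))
```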
5.2 EXPERIMENTS
We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with quality of the steady state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare
• F-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7).

• P-boost: D_i is trained according to AdaBoost.OL. A max over the weak learner losses is presented to the generator instead of the boosted prediction (see Appendix A.7).

• GMAN-max: max{V_i} is presented to the generator.

• GAN: Standard GAN with a single discriminator (see Appendix A.2).

• mod-GAN: GAN with modified objective (generator minimizes -log(D(G(z)))).

• GMAN-λ: GMAN with F := arithmetic softmax with parameter λ.

• GMAN*: The arithmetic softmax is controlled by the generator through λ.
All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)), and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their networks to probabilities with squashed sigmoids that map into [ε, 1 - ε] to prevent saturating logarithms in the minimax objective. See Appendix A.8 for further details. We test GMAN systems with N = {2, 5} discriminators. We maintain discriminator diversity by varying dropout and network depth.
# 5.2.1 MNIST
Figure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady-state by 2x on MNIST; increasing N (the size of the discriminator ensemble) also has the added benefit of reducing the variance of the minimax objective over runs. Figure 4 displays the variance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady-state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately an epoch before the single discriminator run; digits at steady-state appear slightly sharper as well.
Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5, with GMAN* achieving the best overall performance. Figure 6 reveals GMAN*'s attempt to regulate the difficulty of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed λ's to the variable λ controlled by GMAN*.
| Score | Variant | GMAN* | GMAN-0 | GMAN-max | mod-GAN |
|---|---|---|---|---|---|
| 0.127 | GMAN* | - | -0.020 ± 0.009 | -0.028 ± 0.019 | -0.089 ± 0.036 |
| 0.007 | GMAN-0 | 0.020 ± 0.009 | - | -0.013 ± 0.015 | -0.018 ± 0.027 |
| -0.034 | GMAN-max | 0.028 ± 0.019 | 0.013 ± 0.015 | - | -0.011 ± 0.024 |
| -0.122 | mod-GAN | 0.089 ± 0.036 | 0.018 ± 0.027 | 0.011 ± 0.024 | - |
Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each variant's column.
Figure 3: Generator objective, F, averaged over 5 training runs on MNIST. Increasing the number of discriminators accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence.

Figure 4: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN* with N = 5 achieves steady-state at 2x the speed of GAN (N = 1). Note Figure 3's filled shadows reveal stdev of F over runs, while this plot shows stdev over time.

Figure 5: Comparison of image quality across epochs for N = {1, 2, 5} using GMAN-0 on MNIST.

Figure 6: GMAN* regulates difficulty of the game by adjusting λ. Initially, G reduces λ to ease learning and then gradually increases λ for a more challenging learning environment.

| Score | Variant | λ* | λ = 1 | λ = 0 |
|---|---|---|---|---|
| 0.028 | λ* | - | -0.008 ± 0.009 | -0.019 ± 0.010 |
| 0.001 | λ = 1 | 0.008 ± 0.009 | - | -0.008 ± 0.010 |
| -0.025 | λ = 0 | 0.019 ± 0.010 | 0.008 ± 0.010 | - |

Figure 7: Pairwise GMAM (± stdev) for GMAN-λ and GMAN* (λ*) over 5 runs on MNIST.
5.2.2 CELEBA & CIFAR-10

We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.

Figure 8: Image quality improvement across number of discriminators at the same number of iterations for GMAN-0 on CelebA (panels show 1, 2, and 3 discriminators).

Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.

Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset (generated images alongside real images).

We also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size.

6 CONCLUSION

We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback.

In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step, however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important.

ACKNOWLEDGMENTS

We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant Nos. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
# BIBLIOGRAPHY
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014.
J Andrew Bagnell. Robust supervised learning. In Proceedings Of The National Conference On Artificial Intelligence, volume 20, pp. 714. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999, 2005.
Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms for online boosting. arXiv preprint arXiv:1502.02651, 2015.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in neural information processing systems, pp. 1486-1494, 2015.
Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014, 2014.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.

Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Masterâs Thesis, 2009.
Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits, 1998.
Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In Jnterna- tional Conference on Machine Learning, pp. 1718-1727, 2015.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Siamak Ravanbakhsh, François Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabás Póczos. Enabling dark energy science with deep generative models of galaxy images. arXiv preprint arXiv:1609.05796, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.

Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3, 2016.

Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.

Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S Paek, and In So Kweon. Pixel-level domain transfer. arXiv preprint arXiv:1603.07442, 2016.

Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528-2535. IEEE, 2010.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
A APPENDIX

A.1 ACCELERATED CONVERGENCE & REDUCED VARIANCE

See Figures 10, 11, 12, and 13.

[Plots omitted; each shows the generator objective, or its standard deviation, against training iteration, with legends comparing N=1 (original and modified objectives) and N=2 with λ=0 and λ=1.]

Figure 10: Generator objective, F, averaged over 5 training runs on CelebA. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 11 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 11: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with N = 5 achieves steady-state at 2x speed of GAN (N = 1). Note Figure 10's filled shadows reveal stdev of F over runs, while this plot shows stdev over time.

Figure 12: Generator objective, F, averaged over 5 training runs on CIFAR-10. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 13: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with N = 5 achieves steady-state at 2x speed of GAN (N = 1). Note Figure 12's filled shadows reveal stdev of F over runs, while this plot shows stdev over time.

A.2 ADDITIONAL GMAM TABLES

See Tables 2, 3, 4, 5, 6. Increasing the number of discriminators from 2 to 5 on CIFAR-10 significantly improves scores over the standard GAN both in terms of the GMAM metric and Inception scores.

A.3 GENERATED IMAGES

See Figures 14 and 15.
Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column.

| Score  | Variant | GMAN-0 | GMAN-1 | GMAN*  | mod-GAN |
| 0.172  | GMAN-0  | --     | -0.022 | -0.062 | -0.088  |
| 0.050  | GMAN-1  | 0.022  | --     | 0.006  | -0.078  |
| -0.055 | GMAN*   | 0.062  | -0.006 | --     | -0.001  |
| -0.167 | mod-GAN | 0.088  | 0.078  | 0.001  | --      |

Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with two discriminators.

| Score  | Variant | GMAN-0 | GMAN*  | GMAN-1 | mod-GAN |
| 0.180  | GMAN-0  | --     | -0.008 | -0.041 | -0.132  |
| 0.122  | GMAN*   | 0.008  | --     | -0.038 | -0.092  |
| 0.010  | GMAN-1  | 0.041  | 0.038  | --     | -0.089  |
| -0.313 | mod-GAN | 0.132  | 0.092  | 0.089  | --      |

Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with two discriminators.

|       | GMAN-0        | GMAN-1        | mod-GAN       | GMAN*         |
| Score | 5.878 ± 0.193 | 5.765 ± 0.108 | 5.738 ± 0.176 | 5.539 ± 0.099 |

Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.

| Score  | Variant  | GMAN*  | GMAN-1 | GAN    | GMAN-0 | GMAN-max | mod-GAN |
| 0.184  | GMAN*    | --     | -0.007 | -0.040 | -0.020 | -0.028   | -0.089  |
| 0.067  | GMAN-1   | 0.007  | --     | -0.008 | -0.008 | -0.021   | -0.037  |
| 0.030  | GAN      | 0.040  | 0.008  | --     | 0.002  | -0.018   | -0.058  |
| 0.005  | GMAN-0   | 0.020  | 0.008  | 0.002  | --     | -0.013   | -0.018  |
| -0.091 | GMAN-max | 0.028  | 0.021  | 0.018  | 0.013  | --       | -0.011  |
| -0.213 | mod-GAN  | 0.089  | 0.037  | 0.058  | 0.018  | 0.011    | --      |

Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with five discriminators.

|       | GMAN-1        | GMAN-0        | GMAN*         | mod-GAN       |
| Score | 6.001 ± 0.194 | 5.957 ± 0.135 | 5.955 ± 0.153 | 5.738 ± 0.176 |

[Figure 14 image grid omitted; panels: 1 Discriminator, 5 discriminator GMAN*, 5 discriminator GMAN-0.]

Figure 14: Sample of pictures generated on CelebA cropped dataset.
[Figure 15 image grid omitted; panels: Real Images, Generated Images.]

Figure 15: Sample of pictures generated by GMAN-0 on CIFAR dataset.

A.4 SOMEWHAT RELATED WORK

A GAN framework with two discriminators appeared in Yoo et al. (2016), however, it is applicable only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g., X = {X_1 = Domain 1, X_2 = Domain 2, ...}). In contrast, our framework applies to an unsupervised scenario where an obvious partition of the dataset is unknown. Furthermore, extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to that of their multi-domain discriminator approach. Also, note that assigning a discriminator to each domain is akin to prescribing a new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero (2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and not a discriminator for each of the possibly exponentially many conditional labels.

In Section 4.4, we describe an approach to customize adversarial training to better suit the development of the generator. An approach with similar conceptual underpinnings was described in Ravanbakhsh et al. (2016); however, similar to the above, it is only admissible in a semi-supervised scenario whereas ours applies to the unsupervised case.

A.5 SOFTMAX REPRESENTABILITY

Let softmax(V_i) = V̂ ∈ [min_i V_i, max_i V_i]. Also let a = argmin_i V_i, b = argmax_i V_i, and V(t) = V((1 - t)D_a + tD_b) so that V(0) = V_a and V(1) = V_b. The softmax and the minimax objective V(D_i, G) are both continuous in their inputs, so by the intermediate value theorem, we have that there exists t̂ ∈ [0, 1] s.t. V(t̂) = V̂, which implies there exists D̂ ∈ D s.t. V(D̂, G) = V̂. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning V(D̂, G) for some D̂ selected by computing another, unknown function over the space of the discriminators. This result holds even if D̂ is not representable by the architecture chosen for D's neural network.
A.6 UNCONSTRAINED OPTIMIZATION

To convert the GMAN* minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable Λ, define λ(Λ) = log(1 + e^Λ), and let the generator minimize over Λ ∈ R.

A.7 BOOSTING WITH AdaBoost.OL

AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing (P(correct label) = 0.5 + γ ∈ (0, 0.5]), and in fact, allows γ < 0. This is crucial because our weak learners are deep nets with unknown, possibly negative, γ's.

[Figure 16 image grid omitted.]

Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost).

A.8 EXPERIMENTAL SETUP

All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for G and strided convolutions for D, except for the input of G and the last layer of D. We use the single-step gradient method as in Nowozin et al. (2016), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from (0.3, 0.7]. Variations in the discriminators were effected in two ways. We varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4, and so on), as well as varying dropout rates. Secondly, we also decorrelated the samples that the discriminators were training on by splitting the minibatch across the discriminators. The code was written in TensorFlow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:

• Generator latent variables z ∼ U(−1, 1)^100
• Generator convolution transpose layers: (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1)
• Base discriminator architecture: (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128).
• Variants have either convolution 3 (4, 4, 128) removed or all the filter sizes divided by 2 or 4. That is, (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32).
• ReLU activations for all the hidden units. Tanh activation at the output units of the generator. Sigmoid at the output of the discriminator.
• Training was performed with Adam (Kingma & Ba (2014)) (lr = 2 × 10^-4, β1 = 0.5).
• MNIST was trained for 20 epochs with a minibatch of size 100.
• CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100.
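For concreteness, the bullets above translate into roughly the following tf.keras sketch. This is our illustration, not the authors' released code (which is at the GitHub link above); kernel sizes (5×5) and the strides are assumptions in the usual DCGAN style.

```python
import tensorflow as tf

def make_generator(z_dim=100):
    # z ~ U(-1, 1)^100 -> (4,4,128) -> (8,8,64) -> (16,16,32) -> (32,32,1)
    return tf.keras.Sequential([
        tf.keras.layers.Dense(4 * 4 * 128, input_shape=(z_dim,)),
        tf.keras.layers.Reshape((4, 4, 128)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Conv2DTranspose(64, 5, strides=2, padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Conv2DTranspose(32, 5, strides=2, padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        # Tanh output units, as stated in the bullets above.
        tf.keras.layers.Conv2DTranspose(1, 5, strides=2, padding="same",
                                        activation="tanh"),
    ])

def make_discriminator(filter_div=1, dropout=0.5):
    # Base: (32,32,1) -> (16,16,32) -> (8,8,64) -> (4,4,128); variants divide
    # the filter counts by 2 or 4 and vary the dropout rate per discriminator.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32 // filter_div, 5, strides=2, padding="same",
                               input_shape=(32, 32, 1)),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Conv2D(64 // filter_div, 5, strides=2, padding="same"),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Conv2D(128 // filter_div, 5, strides=2, padding="same"),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Flatten(),
        # Sigmoid output of the discriminator.
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
```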
"id": "1511.06390"
} |
1611.01436 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 |
arXiv:1611.01436v2 [cs.CL] 17 Mar 2017
# LEARNING RECURRENT SPAN REPRESENTATIONS FOR EXTRACTIVE QUESTION ANSWERING
Kenton Lee†, Shimi Salant∗, Tom Kwiatkowski‡, Ankur Parikh‡, Dipanjan Das‡, and Jonathan Berant∗
kentonl@cs.washington.edu, shimonsalant@mail.tau.ac.il {tomkwiat, aparikh, dipanjand}@google.com, joberant@cs.tau.ac.il
†University of Washington, Seattle, USA; ∗Tel-Aviv University, Tel-Aviv, Israel; ‡Google Research, New York, USA
# ABSTRACT
The reading comprehension task, that asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQUAD dataset in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over other approaches that factor the prediction into separate predictions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s baseline by > 50%.
# 1 INTRODUCTION
A primary goal of natural language processing is to develop systems that can answer questions about the contents of documents. The reading comprehension task is of practical interest (we want computers to be able to read the world's text and then answer our questions) and, since we believe it requires deep language understanding, it has also become a flagship task in NLP research.
A number of reading comprehension datasets have been developed that focus on answer selection from a small set of alternatives defined by annotators (Richardson et al., 2013) or existing NLP pipelines that cannot be trained end-to-end (Hill et al., 2016; Hermann et al., 2015). Subsequently, the models proposed for this task have tended to make use of the limited set of candidates, basing their predictions on mention-level attention weights (Hermann et al., 2015), or centering classifiers (Chen et al., 2016), or network memories (Hill et al., 2016) on candidate locations.
Recently, Rajpurkar et al. (2016) released the less restricted SQUAD dataset¹ that does not place any constraints on the set of allowed answers, other than that they should be drawn from the evidence document. Rajpurkar et al. proposed a baseline system that chooses answers from the constituents identified by an existing syntactic parser. This allows them to prune the O(N²) answer candidates in each document of length N, but it also effectively renders 20.7% of all questions unanswerable.
Subsequent work by Wang & Jiang (2016) significantly improves upon this baseline by using an end-to-end neural network architecture to identify answer spans by labeling either individual words, or the start and end of the answer span. Both of these methods do not make independence assumptions about substructures, but they are susceptible to search errors due to greedy training and decoding.
¹http://stanford-qa.com
In contrast, here we argue that it is beneficial to simplify the decoding procedure by enumerating all possible answer spans. By explicitly representing each answer span, our model can be globally normalized during training and decoded exactly during evaluation. A naive approach to building the O(N²) spans of up to length N would require a network that is cubic in size with respect to the passage length, and such a network would be untrainable. To overcome this, we present a novel neural architecture called RASOR that builds fixed-length span representations, reusing recurrent computations for shared substructures. We demonstrate that directly classifying each of the competing spans, and training with global normalization over all possible spans, leads to a significant increase in performance. In our experiments, we show an increase in performance over Wang & Jiang (2016) of 5% in terms of exact match to a reference answer, and 3.6% in terms of predicted answer F1 with respect to the reference. On both of these metrics, we close the gap between Rajpurkar et al.'s baseline and the human-performance upper-bound by > 50%.
2 EXTRACTIVE QUESTION ANSWERING
2.1 TASK DEFINITION
Extractive question answering systems take as input a question q = {q_0, ..., q_n} and a passage of text p = {p_0, ..., p_m} from which they predict a single answer span a = (a_start, a_end), represented as a pair of indices into p. Machine learned extractive question answering systems, such as the one presented here, learn a predictor function f(q, p) → a from a training dataset of (q, p, a) triples.
2.2 RELATED WORK
For the SQUAD dataset, the original paper from Rajpurkar et al. (2016) implemented a linear model with sparse features based on n-grams and part-of-speech tags present in the question and the candidate answer. Other than lexical features, they also used syntactic information in the form of dependency paths to extract more general features. They set a strong baseline for following work and also presented an in depth analysis, showing that lexical and syntactic features contribute most strongly to their model's performance. Subsequent work by Wang & Jiang (2016) uses an end-to-end neural network method that uses a Match-LSTM to model the question and the passage, and uses pointer networks (Vinyals et al., 2015) to extract the answer span from the passage. This model resorts to greedy decoding and falls short in terms of performance compared to our model (see Section 5 for more detail). While we only compare to published baselines, there are other unpublished competitive systems on the SQUAD leaderboard, as listed in footnote 4.
A task that is closely related to extractive question answering is the Cloze task (Taylor, 1953), in which the goal is to predict a concealed span from a declarative sentence given a passage of supporting text. Recently, Hermann et al. (2015) presented a Cloze dataset in which the task is to predict the correct entity in an incomplete sentence given an abstractive summary of a news article. Hermann et al. also present various neural architectures to solve the problem. Although this dataset is large and varied in domain, recent analysis by Chen et al. (2016) shows that simple models can achieve close to the human upper bound. As noted by the authors of the SQUAD paper, the annotated answers in the SQUAD dataset are often spans that include non-entities and can be longer phrases, unlike the Cloze datasets, thus making the task more challenging.
Another, more traditional line of work has focused on extractive question answering on sentences, where the task is to extract a sentence from a document, given a question. Relevant datasets include datasets from the annual TREC evaluations (Voorhees & Tice, 2000) and WikiQA (Yang et al., 2015), where the latter dataset specifically focused on Wikipedia passages. There has been a line of interesting recent publications using neural architectures, focused on this variety of extractive question answering (Tymoshenko et al., 2016; Wang et al., 2016, inter alia). These methods model the question and a candidate answer sentence, but do not focus on possible candidate answer spans that may contain the answer to the given question. In this work, we focus on the more challenging problem of extracting the precise answer span.
# 3 MODEL
We propose a model architecture called RASOR², illustrated in Figure 1, that explicitly computes embedding representations for candidate answer spans. In most structured prediction problems (e.g. sequence labeling or parsing), the number of possible output structures is exponential in the input length, and computing representations for every candidate is prohibitively expensive. However, we exploit the simplicity of our task, where we can trivially and tractably enumerate all candidates. This facilitates an expressive model that computes joint representations of every answer span, that can be globally normalized during learning.
In order to compute these span representations, we must aggregate information from the passage and the question for every answer candidate. For the example in Figure 1, RASOR computes an embedding for the candidate answer spans: fixed to, fixed to the, to the, etc. A naive approach for these aggregations would require a network that is cubic in size with respect to the passage length. Instead, our model reduces this to a quadratic size by reusing recurrent computations for shared substructures (i.e. common passage words) from different spans.
Since the choice of answer span depends on the original question, we must incorporate this information into the computation of the span representation. We model this by augmenting the passage word embeddings with additional embedding representations of the question.
In this section, we motivate and describe the architecture for RASOR in a top-down manner.
3.1 SCORING ANSWER SPANS
The goal of our extractive question answering system is to predict the single best answer span among all candidates from the passage p, denoted as A(p). Therefore, we define a probability distribution over all possible answer spans given the question q and passage p, and the predictor function finds the answer span with the maximum likelihood:
f(q, p) := argmax_{a ∈ A(p)} P(a | q, p)    (1)

One might be tempted to introduce independence assumptions that would enable cheaper decoding. For example, this distribution can be modeled as (1) a product of conditionally independent distributions (binary) for every word or (2) a product of conditionally independent distributions (over words) for the start and end indices of the answer span. However, we show in Section 5.2 that such independence assumptions hurt the accuracy of the model, and instead we only assume a fixed-length representation h_a of each candidate span that is scored and normalized with a softmax layer (Span score and Softmax in Figure 1):

s_a = w_a · FFNN(h_a)    (2)

P(a | q, p) = exp(s_a) / Σ_{a' ∈ A(p)} exp(s_{a'})    (3)
where FFNN(·) denotes a fully connected feed-forward neural network that provides a non-linear mapping of its input embedding.
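As a concrete illustration of equations (2)-(3), the following minimal NumPy sketch (our own, with a one-hidden-layer ReLU network standing in for FFNN(·), which is one plausible instantiation) computes the span distribution:

```python
import numpy as np

def ffnn(H, W1, b1):
    # One-hidden-layer feed-forward network with ReLU nonlinearity.
    return np.maximum(0.0, H @ W1 + b1)

def span_distribution(H, W1, b1, w_a):
    # H: (num_spans, dim) matrix whose rows are the span embeddings h_a.
    s = ffnn(H, W1, b1) @ w_a        # eq. (2): s_a = w_a . FFNN(h_a)
    s = s - s.max()                  # shift for numerical stability
    e = np.exp(s)
    return e / e.sum()               # eq. (3): softmax over all spans in A(p)

# Decoding per eq. (1): a_hat = np.argmax(span_distribution(H, W1, b1, w_a))
```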
3.2 RASOR: RECURRENT SPAN REPRESENTATION
The previously defined probability distribution depends on the answer span representations, h_a. When computing h_a, we assume access to representations of individual passage words that have been augmented with a representation of the question. We denote these question-focused passage word embeddings as {p'_1, ..., p'_m} and describe their creation in Section 3.3. In order to reuse computation for shared substructures, we use a bidirectional LSTM (Hochreiter & Schmidhuber, 1997). This allows us to simply concatenate the bidirectional LSTM (BiLSTM) outputs at the endpoints of a span to jointly encode its inside and outside information (Span embedding in Figure 1):

{p''_1, ..., p''_m} = BILSTM({p'_1, ..., p'_m})    (4)

h_a = [p''_{a_start}, p''_{a_end}],    a = (a_start, a_end) ∈ A(p)    (5)
²An abbreviation for Recurrent Span Representations, pronounced as razor.
where BILSTM(·) denotes a BiLSTM over its input embedding sequence and p''_i is the concatenation of forward and backward outputs at time-step i. While the visualization in Figure 1 shows a single-layer BiLSTM for simplicity, we use a multi-layer BiLSTM in our experiments. The concatenated output of each layer is used as input for the subsequent layer, allowing the upper layers to depend on the entire passage.
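To make the reuse of recurrent computation concrete, here is a minimal sketch (our illustration, not the authors' code) that enumerates every candidate span and builds h_a from the precomputed BiLSTM outputs; each of the quadratically many spans costs only two vector lookups and a concatenation:

```python
import numpy as np

def span_embeddings(P2, max_len=30):
    # P2: (m, d) array of BiLSTM outputs p''_i over the passage (eq. 4).
    # Returns the candidate spans A(p) and their embeddings h_a (eq. 5);
    # max_len=30 matches the span-length limit stated in Section 4.
    m = P2.shape[0]
    spans, embs = [], []
    for start in range(m):
        for end in range(start, min(start + max_len, m)):
            spans.append((start, end))
            embs.append(np.concatenate([P2[start], P2[end]]))
    return spans, np.stack(embs)
```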
3.3 QUESTION-FOCUSED PASSAGE WORD EMBEDDING
Computing the question-focused passage word embeddings {p'_1, ..., p'_m} requires integrating question information into the passage. The architecture for this integration is flexible and likely depends on the nature of the dataset. For the SQUAD dataset, we find that both passage-aligned and passage-independent question representations are effective at incorporating this contextual information, and experiments will show that their benefits are complementary. To incorporate these question representations, we simply concatenate them with the passage word embeddings (Question-focused passage word embedding in Figure 1).
We use fixed pretrained embeddings to represent question and passage words. Therefore, in the following discussion, notation for the words is interchangeable with their embedding representations.
Question-independent passage word embedding: The first component simply looks up the pretrained word embedding for the passage word, p_i.
Passage-aligned question representation: In this dataset, the question-passage pairs often contain large lexical overlap or similarity near the correct answer span. To encourage the model to exploit these similarities, we include a fixed-length representation of the question based on soft-alignments with the passage word. The alignments are computed via neural attention (Bahdanau et al., 2014), and we use the variant proposed by Parikh et al. (2016), where attention scores are dot products between non-linear mappings of word embeddings.
1 ⤠j ⤠n (6)
# sij = FFNN(pi) · FFNN(qj) exp(sij) k=1 exp(sik)
exp(sij) . ay = Sa l<j<n (7) 0 ST exp(san)
n ails _ aij (8) j=l
Passage-independent question representation: We also include a representation of the question that does not depend on the passage and is shared for all passage words.
Similar to the previous question representation, an attention score is computed via a dot-product, except the question word is compared to a universal learned embedding rather than any particular passage word. Additionally, we incorporate contextual information with a BiLSTM before aggregating the outputs using this attention mechanism.
The goal is to generate a coarse-grained summary of the question that depends on word order. Formally, the passage-independent question representation q^indep is computed as follows:
{q̄_1, ..., q̄_n} = BILSTM(q)    (9)

s_j = w_q · FFNN(q̄_j),    1 ≤ j ≤ n    (10)

a_j = exp(s_j) / Σ_{k=1}^{n} exp(s_k),    1 ≤ j ≤ n    (11)

q^indep = Σ_{j=1}^{n} a_j q̄_j    (12)
This representation is a bidirectional generalization of the question representation recently proposed by Li et al. (2016) for a different question-answering task.
Given the above three components, the complete question-focused passage word embedding for p_i is their concatenation: p'_i = [p_i, q_i^align, q^indep].
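The two attention mechanisms and the final concatenation amount to a few matrix operations. A schematic NumPy sketch of our own follows, where ffnn_p, ffnn_q, and ffnn_i stand in for the feed-forward mappings above:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def question_focused_embeddings(P, Q, Qbar, ffnn_p, ffnn_q, ffnn_i, w_q):
    # P: (m, d) passage embeddings p_i; Q: (n, d) question embeddings q_j;
    # Qbar: (n, d2) question BiLSTM outputs from eq. (9).
    S = ffnn_p(P) @ ffnn_q(Q).T        # eq. (6): scores s_ij
    A = softmax(S, axis=1)             # eq. (7): alignments a_ij
    Q_align = A @ Q                    # eq. (8): one q_i^align per passage word
    a = softmax(ffnn_i(Qbar) @ w_q)    # eqs. (10)-(11): weights via learned w_q
    q_indep = a @ Qbar                 # eq. (12): one shared summary vector
    m = P.shape[0]
    return np.concatenate([P, Q_align, np.tile(q_indep, (m, 1))], axis=1)
```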
[Figure 1 diagram omitted; labeled components: Softmax, Span score, Hidden layer, Span embedding, Passage-level BiLSTM, Question-focused passage word embedding, Question-level BiLSTM, and the passage-aligned and passage-independent question representations.]

Figure 1: A visualization of RASOR, where the question is "What are the stators attached to?" and the passage is ". . . fixed to the turbine . . . ". The model constructs question-focused passage word embeddings by concatenating (1) the original passage word embedding, (2) a passage-aligned representation of the question, and (3) a passage-independent representation of the question shared across all passage words. We use a BiLSTM over these concatenated embeddings to efficiently recover embedding representations of all possible spans, which are then scored by the final layer of the model.
3.4 LEARNING
Given the above model specification, learning is straightforward. We simply maximize the log-likelihood of the correct answer candidates and backpropagate the errors end-to-end.
# 4 EXPERIMENTAL SETUP
We represent each of the words in the question and document using 300 dimensional GloVe embeddings trained on a corpus of 840bn words (Pennington et al., 2014). These embeddings cover 200k words and all out of vocabulary (OOV) words are projected onto one of 1m randomly initialized 300d embeddings. We couple the input and forget gates in our LSTMs, as described in Greff et al. (2016), and we use a single dropout mask to apply dropout across all LSTM time-steps as proposed by Gal & Ghahramani (2016). Hidden layers in the feed forward neural networks use rectified linear units (Nair & Hinton, 2010). Answer candidates are limited to spans with at most 30 words.
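For readers unfamiliar with these two tricks, a schematic (non-optimized) sketch of one time-step with coupled gates and a shared dropout mask follows; the parameter packing here is our own assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coupled_lstm_step(x, h, c, W, U, b, dropout_mask):
    # Coupled input/forget gates (Greff et al., 2016): the input gate is
    # tied to 1 - f, so only forget, output, and candidate gates are computed.
    # The same dropout_mask is reused at every time-step (Gal & Ghahramani,
    # 2016) rather than being resampled.
    z = W @ (x * dropout_mask) + U @ h + b   # W: (3H, D), U: (3H, H), b: (3H,)
    f, o, g = np.split(z, 3)
    f, o = sigmoid(f), sigmoid(o)
    c = f * c + (1.0 - f) * np.tanh(g)       # input gate = 1 - forget gate
    h = o * np.tanh(c)
    return h, c
```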
To choose the final model configuration, we ran grid searches over: the dimensionality of the LSTM hidden states; the width and depth of the feed forward neural networks; dropout for the LSTMs; the number of stacked LSTM layers (1, 2, 3); and the decay multiplier [0.9, 0.95, 1.0] with which we multiply the learning rate every 10k steps. The best model uses 50d LSTM states; two-layer BiLSTMs for the span encoder and the passage-independent question representation; dropout of 0.1 throughout; and a learning rate decay of 5% every 10k steps.
All models are implemented using TensorFlow³ and trained on the SQUAD training set using the ADAM (Kingma & Ba, 2015) optimizer with a mini-batch size of 4 and trained using 10 asynchronous training threads on a single machine.
# 5 RESULTS
We train on the 80k (question, passage, answer span) triples in the SQUAD training set and report results on the 10k examples in the SQUAD development and test sets.
All results are calculated using the official SQUAD evaluation script, which reports exact answer match and F1 overlap of the unigrams between the predicted answer and the closest labeled answer from the 3 reference answers given in the SQUAD development set.
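For reference, the two metrics reduce to roughly the following (our paraphrase; the official script also lowercases and strips punctuation and articles before comparing, and takes the maximum over the 3 references):

```python
from collections import Counter

def exact_match(prediction, reference):
    # 1.0 iff the (normalized) answer strings are identical.
    return float(prediction == reference)

def f1(prediction, reference):
    # Unigram-overlap F1 between the (normalized) answer strings.
    pred, ref = prediction.split(), reference.split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```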
# 5.1 COMPARISONS TO OTHER WORK
Our model with recurrent span representations (RASOR) is compared to all previously published systems⁴. Rajpurkar et al. (2016) published a logistic regression baseline as well as human performance on the SQUAD task. The logistic regression baseline uses the output of an existing syntactic parser both as a constraint on the set of allowed answer spans, and as a method of creating sparse features for an answer-centric scoring model. Despite not having access to any external representation of linguistic structure, RASOR achieves an error reduction of more than 50% over this baseline, both in terms of exact match and F1, relative to the human performance upper bound.
| System                       | Dev EM | Dev F1 | Test EM | Test F1 |
| Logistic regression baseline | 39.8   | 51.0   | 40.4    | 51.0    |
| Match-LSTM (Sequence)        | 54.5   | 67.7   | 54.8    | 68.0    |
| Match-LSTM (Boundary)        | 60.5   | 70.7   | 59.4    | 70.0    |
| RASOR                        | 66.4   | 74.9   | 67.4    | 75.5    |
| Human                        | 81.4   | 91.0   | 82.3    | 91.2    |
Table 1: Exact match (EM) and span F1 on SQUAD.
More closely related to RASOR is the boundary model with Match-LSTMs and Pointer Networks by Wang & Jiang (2016). Their model similarly uses recurrent networks to learn embeddings of each passage word in the context of the question, and it can also capture interactions between endpoints, since the end index probability distribution is conditioned on the start index. However, both training and evaluation are greedy, making their system susceptible to search errors when decoding. In contrast, RASOR can efficiently and explicitly model the quadratic number of possible answers, which leads to a 14% error reduction over the best performing Match-LSTM model.
5.2 MODEL VARIATIONS
We investigate two main questions in the following ablations and comparisons. (1) How important are the two methods of representing the question described in Section 3.3? (2) What is the impact of learning a loss function that accurately reflects the span prediction task?
Question representations: Table 2a shows the performance of RASOR when either of the two question representations described in Section 3.3 is removed. The passage-aligned question representation is crucial, since lexically similar regions of the passage provide strong signal for relevant answer spans. If the question is only integrated through the inclusion of a passage-independent representation, performance drops drastically. The passage-independent question representation over
³www.tensorflow.org
⁴As of submission, other unpublished systems are shown on the SQUAD leaderboard, including Match-LSTM with Ans-Ptr (Boundary+Ensemble), Co-attention, r-net, Match-LSTM with Bi-Ans-Ptr (Boundary), Co-attention old, Dynamic Chunk Reader, Dynamic Chunk Ranker with Convolution layer, Attentive Chunker.
the BiLSTM is less important, but it still accounts for over 3% exact match and F1. The input of both of these components is analyzed qualitatively in Section 6.
| Question representation  | EM   | F1   |
| Only passage-independent | 48.7 | 56.6 |
| Only passage-aligned     | 63.1 | 71.3 |
| RASOR                    | 66.4 | 74.9 |

(a) Ablation of question representations.

| Learning objective          | EM   | F1   |
| Membership prediction       | 57.9 | 69.7 |
| BIO sequence prediction     | 63.9 | 73.0 |
| Endpoints prediction        | 65.3 | 75.1 |
| Span prediction w/ log loss | 65.2 | 73.6 |

(b) Comparisons for different learning objectives given the same passage-level BiLSTM.
Table 2: Results for variations of the model architecture presented in Section 3.
Learning objectives: Given a fixed architecture that is capable of encoding the input question-passage pairs, there are many ways of setting up a learning objective to encourage the model to predict the correct span. In Table 2b, we provide comparisons of some alternatives (learned end-to-end) given only the passage-level BiLSTM from RASOR. In order to provide clean comparisons, we restrict the alternatives to objectives that are trained and evaluated with exact decoding.
The simplest alternative is to consider this task as binary classification for every word (Membership prediction in Table 2b). In this baseline, we optimize the logistic loss for binary labels indicating whether passage words belong to the correct answer span. At prediction time, a valid span can be recovered in linear time by finding the maximum contiguous sum of scores.
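Recovering that span is the classic maximum-subarray problem, solvable in one pass with Kadane's algorithm; a minimal sketch (our illustration):

```python
def best_contiguous_span(scores):
    # scores[i]: per-word score, e.g. the log-odds that word i is in the answer.
    # Returns (start, end) maximizing the contiguous sum, in O(m) time.
    best, best_span = float("-inf"), (0, 0)
    run, run_start = 0.0, 0
    for i, s in enumerate(scores):
        if run <= 0.0:
            run, run_start = s, i      # restart the running sum at word i
        else:
            run += s                   # extend the current candidate span
        if run > best:
            best, best_span = run, (run_start, i)
    return best_span
```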
Li et al. (2016) proposed a sequence-labeling scheme that is similar to the above baseline (BIO sequence prediction in Table 2b). We follow their proposed model and learn a conditional random field (CRF) layer after the passage-level BiLSTM to model transitions between the different labels. At prediction time, a valid span can be recovered in linear time using Viterbi decoding, with hard transition constraints to enforce a single contiguous output.
We also consider a model that independently predicts the two endpoints of the answer span (Endpoints prediction in Table 2b). This model uses the softmax loss over passage words during learning. When decoding, we only need to enforce the constraint that the start index is no greater than the end index. Without the interactions between the endpoints, this can be computed in linear time. Note that this model has the same expressivity as RASOR if the span-level FFNN were removed.
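That linear-time decode only needs a running maximum over start scores; a minimal sketch (ours):

```python
def decode_endpoints(start_scores, end_scores):
    # Maximizes start_scores[s] + end_scores[e] subject to s <= e, in O(m).
    best, best_pair = float("-inf"), (0, 0)
    best_start, best_start_idx = float("-inf"), 0
    for e in range(len(end_scores)):
        if start_scores[e] > best_start:        # best start seen so far (s <= e)
            best_start, best_start_idx = start_scores[e], e
        total = best_start + end_scores[e]
        if total > best:
            best, best_pair = total, (best_start_idx, e)
    return best_pair
```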
Lastly, we compare with a model that uses the same architecture as RASOR but is trained with a binary logistic loss rather than a softmax loss over spans (Span prediction w/ logistic loss in Table 2b).
The trend in Table 2b shows that the model is better at leveraging the supervision as the learning objective more accurately reflects the fundamental task at hand: determining the best answer span.
First, we observe general improvements when using labels that closely align with the task. For example, the labels for membership prediction simply happen to provide single contiguous spans in the supervision. The model must consider far more possible answers than it needs to (the power set of all words). The same problem holds for BIO sequence prediction: the model must do additional work to learn the semantics of the BIO tags. On the other hand, in RASOR, the semantics of an answer span is naturally encoded by the set of labels.
Second, we observe the importance of allowing interactions between the endpoints using the span-level FFNN. RASOR outperforms the endpoint prediction model by 1.1 in exact match. The interaction between endpoints enables RASOR to enforce consistency across its two substructures. While this does not provide improvements for predicting the correct region of the answer (captured by the F1 metric, which drops by 0.2), it is more likely to predict a clean answer span that matches human judgment exactly (captured by the exact-match metric).
# 6 ANALYSIS
Figure 2 shows how the performances of RASOR and the endpoint predictor introduced in Section 5.2 degrade as the lengths of their predictions increase. It is clear that explicitly modeling interactions between end markers is increasingly important as the span grows in length.
Figure 2: F1 and Exact Match (EM) accuracy of RASOR and the endpoint predictor baseline over different prediction lengths.
Figure 3: Attention masks from RASOR. Top predictions for the first example are "Egyptians", "Egyptians against the British", "British". Top predictions for the second are "unjust laws", "what they deem to be unjust laws", "laws".
Figure 3 shows attention masks for both of RASOR's question representations. The passage-independent question representation pays most attention to the words that could attach to the answer in the passage ("brought", "against") or describe the answer category ("people"). Meanwhile, the passage-aligned question representation pays attention to similar words. The top predictions for both examples are all valid syntactic constituents, and they all have the correct semantic category. However, RASOR assigns almost as much probability mass to its incorrect third prediction "British" as it does to the top scoring correct prediction "Egyptian". This showcases a common failure case for RASOR, where it can find an answer of the correct type close to a phrase that overlaps with the question, but it cannot accurately represent the semantic dependency on that phrase.
# 7 CONCLUSION
We have shown a novel approach for performing extractive question answering on the SQUAD dataset by explicitly representing and scoring answer span candidates. The core of our model relies on a recurrent network that enables shared computation for the shared substructure across span candidates. We explore different methods of encoding the passage and question, showing the benefits of including both passage-independent and passage-aligned question representations. While we show that this encoding method is beneficial for the task, this is orthogonal to the core contribution of efficiently computing span representation. In future work, we plan to explore alternate architectures that provide input to the recurrent span representations.
# REFERENCES
Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of ACL, 2016.
Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. Proceedings of NIPS, 2016.
Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, PP:1-11, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of NIPS, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading childrenâs books with explicit memory representations. In Proceedings of ICLR, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long Short-term Memory. Neural Computation, 9(8):1735-1780, 1997.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Proceedings of ICLR, 2015.
Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. CoRR, abs/1607.06275, 2016.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of ICML, 2010.
Ankur P Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of EMNLP, 2016.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of EMNLP, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, 2016.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, 2013.
Wilson Taylor. Cloze procedure: A new tool for measuring readability. Journalism Quarterly, 30:415-433, 1953.
Kateryna Tymoshenko, Daniele Bonadiman, and Alessandro Moschitti. Convolutional neural networks vs. convolution kernels: Feature engineering for answer sentence reranking. In Proceedings of NAACL, 2016.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proceedings of NIPS, 2015.
Ellen M. Voorhees and Dawn M. Tice. Building a question answering test collection. In Proceedings of SIGIR, 2000.
Bingning Wang, Kang Liu, and Jun Zhao. Inner attention based recurrent neural networks for answer selection. In Proceedings of ACL, 2016.
Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016.
Yi Yang, Wen-tau Yih, and Christopher Meek. Wikiqa: A challenge dataset for open-domain ques- tion answering. In Proceedings of EMNLP, 2015.
| {
"id": "1608.07905"
} |
1611.01368 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 |
arXiv:1611.01368v1 [cs.CL] 4 Nov 2016
# Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Tal Linzen and Emmanuel Dupoux, LSCP¹ & IJN², CNRS, EHESS and ENS, PSL Research University, {tal.linzen, emmanuel.dupoux}@ens.fr
Yoav Goldberg, Computer Science Department, Bar Ilan University, yoav.goldberg@gmail.com
# Abstract
The success of long short-term memory (LSTM) neural networks in language processing is typically attributed to their ability to capture long-distance statistical regularities. Linguistic regularities are often sensitive to syntactic structure; can such dependencies be captured by LSTMs, which do not have explicit structural representations? We begin addressing this question using number agreement in English subject-verb dependencies. We probe the architecture's grammatical competence both using training objectives with an explicit grammatical target (number prediction, grammaticality judgments) and using language models. In the strongly supervised settings, the LSTM achieved very high overall accuracy (less than 1% errors), but errors increased when sequential and structural information conflicted. The frequency of such errors rose sharply in the language-modeling setting. We conclude that LSTMs can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured.
# 1 Introduction

Recurrent neural networks (RNNs) are highly effective models of sequential data (Elman, 1990). The rapid adoption of RNNs in NLP systems in recent years, in particular of RNNs with gating mechanisms such as long short-term memory (LSTM) units (Hochreiter and Schmidhuber, 1997) or gated recurrent units (GRU) (Cho et al., 2014), has led to significant gains in language modeling (Mikolov et al., 2010; Sundermeyer et al., 2012), parsing (Vinyals et al., 2015; Kiperwasser and Goldberg, 2016; Dyer et al., 2016), machine translation (Bahdanau et al., 2015) and other tasks.

The effectiveness of RNNs¹ is attributed to their ability to capture statistical contingencies that may span an arbitrary number of words. The word France, for example, is more likely to occur somewhere in a sentence that begins with Paris than in a sentence that begins with Penguins. The fact that an arbitrary number of words can intervene between the mutually predictive words implies that they cannot be captured by models with a fixed window such as n-gram models, but can in principle be captured by RNNs, which do not have an architecturally fixed limit on dependency length.

RNNs are sequence models: they do not explicitly incorporate syntactic structure. Indeed, many word co-occurrence statistics can be captured by treating the sentence as an unstructured list of words (Paris-France); it is therefore unsurprising that RNNs can learn them well. Other dependencies, however, are sensitive to the syntactic structure of the sentence (Chomsky, 1965; Everaert et al., 2015). To what extent can RNNs learn to model such phenomena based only on sequential cues?
Previous research has shown that RNNs (in particular LSTMs) can learn artificial context-free languages (Gers and Schmidhuber, 2001) as well as nesting and indentation in a programming language (Karpathy et al., 2016). The goal of the present work is to probe their ability to learn natural language hierarchical (syntactic) structures from a corpus without syntactic annotations. As a first step, we focus on a particular dependency that is commonly regarded as evidence for hierarchical structure in human language: English subject-verb agreement, the phenomenon in which the form of a verb depends on whether the subject is singular or plural (the kids play but the kid plays; see additional details in Section 2). If an RNN-based model succeeded in learning this dependency, that would indicate that it can learn to approximate or even faithfully implement syntactic structure.

¹In this work we use the term RNN to refer to the entire class of sequential recurrent neural networks. Instances of the class include long short-term memory networks (LSTM) and the Simple Recurrent Network (SRN) due to Elman (1990).
Our main interest is in whether LSTMs have the capacity to learn structural dependencies from a natural corpus. We therefore begin by addressing this question under the most favorable conditions: training with explicit supervision. In the setting with the strongest supervision, which we refer to as the number prediction task, we train it directly on the task of guessing the number of a verb based on the words that preceded it (Sections 3 and 4). We further experiment with a grammaticality judgment training objective, in which we provide the model with full sentences annotated as to whether or not they violate subject-verb number agreement, without an indication of the locus of the violation (Section 5). Finally, we trained the model without any grammatical supervision, using a language modeling objective (predicting the next word).
Our quantitative results (Section 4) and qualitative analysis (Section 7) indicate that most naturally occurring agreement cases in the Wikipedia corpus are easy: they can be resolved without syntactic information, based only on the sequence of nouns preceding the verb. This leads to high overall accuracy in all models. Most of our experiments focus on the supervised number prediction model. The accuracy of this model was lower on harder cases, which require the model to encode or approximate structural information; nevertheless, it succeeded in recovering the majority of agreement cases even when four nouns of the opposite number intervened between the subject and the verb (17% errors). Baseline models failed spectacularly on these hard cases, performing far below chance levels. Fine-grained analysis revealed that mistakes are much more common when no overt cues to syntactic structure (in particular function words) are available, as is the case in noun-noun compounds and reduced relative clauses. This indicates that the number prediction model indeed managed to capture a decent amount of syntactic knowledge, but was overly reliant on function words.
Error rates increased only mildly when we switched to more indirect supervision consisting only of sentence-level grammaticality annotations without an indication of the crucial verb. By contrast, the language model trained without explicit grammatical supervision performed worse than chance on the harder agreement prediction cases. Even a state-of-the-art large-scale language model (Jozefowicz et al., 2016) was highly sensitive to recent but structurally irrelevant nouns, making more than five times as many mistakes as the number prediction model on these harder cases. These results suggest that explicit supervision is necessary for learning the agreement dependency using this architecture, limiting its plausibility as a model of child language acquisition (Elman, 1990). From a more applied perspective, this result suggests that for tasks in which it is desirable to capture syntactic dependencies (e.g., machine translation or language generation), language modeling objectives should be supplemented by supervision signals that directly capture the desired behavior.
# 2 Background: Subject-Verb Agreement as Evidence for Syntactic Structure
The form of an English third-person present tense verb depends on whether the head of the syntactic subject is plural or singular:²
(1) a. The key is on the table.
    b. *The key are on the table.
    c. *The keys is on the table.
    d. The keys are on the table.
While in these examples the subject's head is adjacent to the verb, in general the two can be separated by some sentential material:³
²Identifying the head of the subject is typically straightforward. In what follows we will use the shorthand "the subject" to refer to the head of the subject.
³In the examples, the subject and the corresponding verb are marked in boldface, agreement attractors are underlined and intervening nouns of the same number as the subject are marked in italics. Asterisks mark unacceptable sentences.
(2) The keys to the cabinet are on the table.
Given a syntactic parse of the sentence and a verb, it is straightforward to identify the head of the subject that corresponds to that verb, and use that information to determine the number of the verb (Figure 1).
[Figure 1 diagram omitted: dependency parse of "The keys to the cabinet are on the table", with arcs labeled root, nsubj, det, prep, and pobj.]
Figure 1: The form of the verb is determined by the head of the subject, which is directly connected to it via an nsubj edge. Other nouns that intervene between the head of the subject and the verb (here cabinet is such a noun) are irrelevant for determining the form of the verb and need to be ignored.
By contrast, models that are insensitive to structure may run into substantial difficulties capturing this dependency. One potential issue is that there is no limit to the complexity of the subject NP, and any number of sentence-level modifiers and parentheticals (and therefore an arbitrary number of words) can appear between the subject and the verb:

(3) The building on the far right that's quite old and run down is the Kilgore Bank Building.
This property of the dependency entails that it cannot be captured by an n-gram model with a fixed n. RNNs are in principle able to capture dependencies of an unbounded length; however, it is an empirical question whether or not they will learn to do so in practice when trained on a natural corpus.
A more fundamental challenge that the dependency poses for structure-insensitive models is the possibility of agreement attraction errors (Bock and Miller, 1991). The correct form in (3) could be selected using simple heuristics such as "agree with the most recent noun", which are readily available to sequence models. In general, however, such heuristics are unreliable, since other nouns can intervene between the subject and the verb in the linear sequence of the sentence. Those intervening nouns can have the same number as the subject, as in (4), or the opposite number as in (5)-(7):
(4) Alluvial soils carried in the floodwaters add nutrients to the floodplains.

(5) The only championship banners that are currently displayed within the building are for national or NCAA Championships.

(6) The length of the forewings is 12-13.

(7) Yet the ratio of men who survive to the women and children who survive is not clear in this story.
Intervening nouns with the opposite number from the subject are called agreement attractors. The potential presence of agreement attractors entails that the model must identify the head of the syntactic subject that corresponds to a given verb in order to choose the correct inflected form of that verb.
Given the difï¬culty in identifying the subject from the linear sequence of the sentence, dependencies such as subject-verb agreement serve as an argument for structured syntactic representations in humans (Everaert et al., 2015); they may challenge models such as RNNs that do not have pre-wired syntac- tic representations. We note that subject-verb num- ber agreement is only one of a number of structure- sensitive dependencies; other examples include nega- tive polarity items (e.g., any) and reï¬exive pronouns (herself ). Nonetheless, a modelâs success in learning subject-verb agreement would be highly suggestive of its ability to master hierarchical structure.
# 3 The Number Prediction Task
To what extent can a sequence model learn to be sensitive to the hierarchical structure of natural language? To study this question, we propose the number prediction task. In this task, the model sees the sentence up to but not including a present-tense verb, e.g.:
(8) The keys to the cabinet
It then needs to guess the number of the following verb (a binary choice, either PLURAL or SINGULAR). We examine variations on this task in Section 5.
In order to perform well on this task, the model needs to encode the concepts of syntactic number and syntactic subjecthood: it needs to learn that some words are singular and others are plural, and to be able to identify the correct subject. As we have illustrated in Section 2, correctly identifying the subject that corresponds to a particular verb often requires sensitivity to hierarchical syntax.
Data: An appealing property of the number prediction task is that we can generate practically unlimited training and testing examples for this task by querying a corpus for sentences with present-tense verbs, and noting the number of the verb. Importantly, we do not need to correctly identify the subject in order to create a training or test example. We generated a corpus of ~1.35 million number prediction problems based on Wikipedia, of which ~121,500 (9%) were used for training, ~13,500 (1%) for validation, and the remaining ~1.21 million (90%) were reserved for testing.4 The large number of test sentences was necessary to ensure that we had a good variety of test sentences representing less common constructions (see Section 4).5
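Below is a minimal sketch of this example-generation step. It assumes sentences arrive pre-tokenized with Penn Treebank POS tags, in which case the verb's own tag (VBZ vs. VBP) supplies the label and no subject identification is needed; treating every VBZ/VBP token as a candidate is a simplifying assumption:

```python
import random

def make_example(tagged_sentence, max_len=50):
    """Return (prefix_words, label) from one tagged sentence, or None."""
    words, tags = zip(*tagged_sentence)
    if len(words) >= max_len:
        return None
    verbs = [i for i, t in enumerate(tags) if t in ("VBZ", "VBP")]
    if not verbs:
        return None
    i = random.choice(verbs)  # one dependency per sentence, as in footnote 4
    label = "SINGULAR" if tags[i] == "VBZ" else "PLURAL"
    return list(words[:i]), label

sent = [("The", "DT"), ("keys", "NNS"), ("to", "IN"), ("the", "DT"),
        ("cabinet", "NN"), ("are", "VBP"), ("on", "IN"), ("the", "DT"),
        ("table", "NN")]
print(make_example(sent))  # (['The', 'keys', 'to', 'the', 'cabinet'], 'PLURAL')
```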
Model and baselines: We encode words as one-hot vectors: the model does not have access to the characters that make up the word. Those vectors are then embedded into a 50-dimensional vector space. An LSTM with 50 hidden units reads those embedding vectors in sequence; the state of the LSTM at the end of the sequence is then fed into a logistic regression classifier. The network is trained6 in an end-to-end fashion, including the word embeddings.7 To isolate the effect of syntactic structure, we also consider a baseline which is exposed only to the nouns in the sentence, in the order in which they appeared originally, and is then asked to predict the number of the following verb.
4We limited our search to sentences that were shorter than 50 words. Whenever a sentence had more than one subject-verb dependency, we selected one of the dependencies at random.
5Code and data are available at http://tallinzen.net/projects/lstm_agreement.
6The network was optimized using Adam (Kingma and Ba, 2015) and early stopping based on validation set error. We trained the number prediction model 20 times with different random initializations, and report accuracy averaged across all runs. The models described in Sections 5 and 6 are based on 10 runs, with the exception of the language model, which is slower to train and was trained once.
7The size of the vocabulary was capped at 10000 (after lowercasing). Infrequent words were replaced with their part of speech (Penn Treebank tagset, which explicitly encodes number distinctions); this was the case for 9.6% of all tokens and 7.1% of the subjects.
The goal of this baseline is to withhold the syntactic information carried by function words, verbs and other parts of speech. We explore two variations on this baseline: one that only receives common nouns (dogs, pipe), and another that also receives pronouns (he) and proper nouns (France). We refer to these as the noun-only baselines.
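A minimal PyTorch sketch of this architecture follows; the paper does not specify a framework, so everything beyond the sizes stated in the text is an illustrative assumption:

```python
import torch
import torch.nn as nn

class NumberPredictor(nn.Module):
    """Embedding (50d) -> LSTM (50 units) -> logistic regression."""
    def __init__(self, vocab_size=10000, dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, token_ids):                       # (batch, time)
        states, _ = self.lstm(self.embed(token_ids))
        return torch.sigmoid(self.out(states[:, -1]))   # P(PLURAL)

model = NumberPredictor()
prefix = torch.randint(0, 10000, (1, 5))  # stand-in for an encoded prefix
print(model(prefix).item())               # probability that the verb is plural
```

The noun-only baselines can reuse the same network; only the input sequence changes.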
# 4 Number Prediction Results
Overall accuracy: Accuracy was very high overall: the system made an incorrect number prediction only in 0.83% of the dependencies. The noun-only baselines performed significantly worse: 4.2% errors for the common-nouns case and 4.5% errors for the all-nouns case. This suggests that function words, verbs and other syntactically informative elements play an important role in the model's ability to correctly predict the verb's number. However, while the noun-only baselines made more than four times as many mistakes as the number prediction system, their still-low absolute error rate indicates that around 95% of agreement dependencies can be captured based solely on the sequence of nouns preceding the verb. This is perhaps unsurprising: sentences are often short and the verb is often directly adjacent to the subject, making the identification of the subject simple. To gain deeper insight into the syntactic capabilities of the model, then, the rest of this section investigates its performance on more challenging dependencies.8

Distance: We first examine whether the network shows evidence of generalizing to dependencies where the subject and the verb are far apart. We focus in this analysis on simpler cases where no nouns intervened between the subject and the verb. As Figure 2a shows, performance did not degrade considerably when the distance between the subject and the verb grew up to 15 words (there were very few longer dependencies). This indicates that the network generalized the dependency from the common distances of 0 and 1 to rare distances of 10 and more.
Agreement attractors: We next examine how the model's error rate was affected by nouns that intervened between the subject and the verb in the linear order of the sentence. We first focus on whether or not there were any intervening nouns, and if there were, whether the number of the subject differed from the number of the last intervening noun: the type of noun that would trip up the simple heuristic of agreeing with the most recent noun.

8These properties of the dependencies were identified by parsing the test sentences using the parser described in Goldberg and Nivre (2012).

Figure 2: (a-d) Error rates of the LSTM number prediction model as a function of: (a) distance between the subject and the verb, in dependencies that have no intervening nouns; (b) presence and number of last intervening noun; (c) count of attractors in dependencies with homogeneous intervention; (d) presence of a relative clause with and without an overt relativizer in dependencies with homogeneous intervention and exactly one attractor. All error bars represent 95% binomial confidence intervals.

(e-f) Additional plots: (e) count of attractors per dependency in the corpus (note that the y-axis is on a log scale); (f) embeddings of singular and plural nouns, projected onto their first two principal components.
As Figure 2b shows, a last intervening noun of the same number as the subject increased error rates only moderately, from 0.4% to 0.7% in singular subjects and from 1% to 1.4% in plural subjects. On the other hand, when the last intervening noun was an agreement attractor, error rates increased by almost an order of magnitude (to 6.5% and 5.4% respectively). Note, however, that even an error rate of 6.5% is quite impressive considering uninformed strategies such as random guessing (50% error rate), always assigning the more common class label (32% error rate, since 32% of the subjects in our corpus are plural) and the number-of-most-recent-noun heuristic (100% error rate). The noun-only LSTM baselines performed much worse in agreement attraction cases, with error rates of 46.4% (common nouns) and 40% (all nouns).

We next tested whether the effect of attractors is cumulative, by focusing on dependencies with multiple attractors. To avoid cases in which the effect of an attractor is offset by an intervening noun with the same number as the subject, we restricted our search to dependencies in which all of the intervening nouns had the same number, which we term dependencies with homogeneous intervention. For example, (9) has homogeneous intervention whereas (10) does not:

(9) The roses in the vase by the door are red.

(10) The roses in the vase by the chairs are red.

Figure 2c shows that error rates increased gradually as more attractors intervened between the subject and the verb. Performance degraded quite slowly, however: even with four attractors the error rate was only 17.6%. As expected, the noun-only baselines performed significantly worse in this setting, reaching an error rate of up to 84% (worse than chance) in the case of four attractors. This confirms that syntactic cues are critical for solving the harder cases.

Relative clauses: We now look in greater detail into the network's performance when the words that intervened between the subject and verb contained a relative clause. Relative clauses with attractors are likely to be fairly challenging, for several reasons. They typically contain a verb that agrees with the attractor, reinforcing the misleading cue to noun number. The attractor is often itself a subject of an irrelevant verb, making a potential "agree with the most recent subject" strategy unreliable. Finally, the existence of a relative clause is sometimes not overtly indicated by a function word (relativizer), as in (11) (for comparison, see the minimally different (12)):

(11) The landmarks this article lists here are also run-of-the-mill and not notable.

(12) The landmarks that this article lists here are also run-of-the-mill and not notable.

For data sparsity reasons we restricted our attention to dependencies with a single attractor and no other intervening nouns. As Figure 2d shows, attraction errors were more frequent in dependencies with an overt relative clause (9.9% errors) than in dependencies without a relative clause (3.2%), and considerably more frequent when the relative clause was not introduced by an overt relativizer (25%). As in the case of multiple attractors, however, while the model struggled with the more difficult dependencies, its performance was much better than random guessing, and slightly better than a majority-class strategy.
Word representations: We explored the 50-dimensional word representations acquired by the model by performing a principal component analysis. We assigned a part-of-speech (POS) to each word based on the word's most common POS in the corpus. We only considered relatively unambiguous words, in which a single POS accounted for more than 90% of the word's occurrences in the corpus. Figure 2f shows that the first principal component corresponded almost perfectly to the expected number of the noun, suggesting that the model learned the number of specific words very well; recall that the model did not have access during training to noun number annotations or to morphological suffixes such as -s that could be used to identify plurals.
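A minimal sketch of this analysis is shown below; the embedding matrix and the noun index lists are placeholders for the trained weights and the POS-filtered vocabulary:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(10000, 50))            # placeholder for trained weights
singular_ids, plural_ids = [3, 17, 42], [8, 23, 99]  # placeholder noun indices

pcs = PCA(n_components=2).fit_transform(embeddings)
print("mean PC1, singular nouns:", pcs[singular_ids, 0].mean())
print("mean PC1, plural nouns:  ", pcs[plural_ids, 0].mean())
# With the trained model, PC1 separates the two classes almost perfectly.
```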
Visualizing the network's activations: We start investigating the inner workings of the number prediction network by analyzing its activation in response to particular syntactic constructions. To simplify the analysis, we deviate from our practice in the rest of this paper and use constructed sentences.

We first constructed sets of sentence prefixes based on the following patterns:

(14) PP: The toy(s) of the boy(s)...

(15) RC: The toy(s) that the boy(s)...

These patterns differ by exactly one function word, which determines the type of the modifier of the main clause subject: a prepositional phrase (PP) in the first sentence and a relative clause (RC) in the second. In PP sentences the correct number of the upcoming verb is determined by the main clause subject toy(s); in RC sentences it is determined by the embedded subject boy(s).
We generated all four versions of each pattern, and repeated the process ten times with different lexical items (the house(s) of/that the girl(s), the computer(s) of/that the student(s), etc.), for a total of 80 sentences. The network made correct number predictions for all 40 PP sentences, but made three errors in RC sentences. We averaged the word-by-word activations across all sets of ten sentences that had the same combination of modifier (PP or RC), first noun number and second noun number. Plots of the activation of all 50 units are provided in the Appendix (Figure 5). Figure 3a highlights a unit (Unit 1) that shows a particularly clear pattern: it tracks the number of the main clause subject throughout the PP modifier, resets when it reaches the relativizer that which introduces the RC modifier, and then switches to tracking the number of the embedded subject.
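The traces behind these plots can be extracted with a few lines. The sketch below assumes a model with .embed and .lstm modules, as in the NumberPredictor sketch above, and an assumed `encode` function mapping words to vocabulary ids:

```python
import torch

def unit_timecourse(model, encode, words):
    """All 50 unit activations after each word of a constructed prefix."""
    ids = torch.tensor([[encode(w) for w in words]])
    with torch.no_grad():
        states, _ = model.lstm(model.embed(ids))  # (1, time, 50)
    return states[0]                              # row t: activations after word t

# unit_timecourse(model, encode, "The toys that the boy".split())[:, 1]
# would give Unit 1's word-by-word trace, as plotted in Figure 3a.
```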
To explore how the network deals with dependencies spanning a larger number of words, we tracked its activation during the processing of the following two sentences:9

(16) The houses of/that the man from the office across the street...

The network made the correct prediction for the PP but not the RC sentence (as before, the correct predictions are PLURAL for PP and SINGULAR for RC).

9We simplified this experiment in light of the relative robustness of the first experiment to lexical items and to whether each of the nouns was singular or plural.
Figure 3: Word-by-word visualization of LSTM activation: (a) a unit that correctly predicts the number of an upcoming verb. This number is determined by the first noun (X) when the modifier is a prepositional phrase (PP) and by the second noun (Y) when it is an object relative clause (RC); (b) the evolution of the predictions in the case of a longer modifier: the predictions correctly diverge at the embedded noun, but then incorrectly converge again; (c) the activation of four representative units over the course of the same sentences.

Figure 3b shows that the network begins by making the correct prediction for RC immediately after that, but then falters: as the sentence goes on, the resetting effect of that diminishes. The activation time courses shown in Figure 3c illustrate that Unit 1, which identified the subject correctly when the prefix was short, gradually forgets that it is in an embedded clause as the prefix grows longer. By contrast, Unit 0 shows a stable capacity to remember the current embedding status. Additional representative units shown in Figure 3c are Unit 46, which consistently stores the number of the main clause subject, and Unit 27, which tracks the number of the most recent noun, resetting at noun phrase boundaries.

While the interpretability of these patterns is encouraging, our analysis only scratches the surface of the rich possibilities of a linguistically-informed analysis of a neural network trained to perform a syntax-sensitive task; we leave a more extensive investigation for future work.
# 5 Alternative Training Objectives
The number prediction task followed a fully supervised objective, in which the network identifies the number of an upcoming verb based only on the words preceding the verb. This section proposes three objectives that modify some of the goals and assumptions of the number prediction objective (see Table 1 for an overview).

| Objective | Sample input | Training signal | Prediction task | Correct answer |
| --- | --- | --- | --- | --- |
| Number prediction | The keys to the cabinet | PLURAL | SINGULAR/PLURAL? | PLURAL |
| Verb inflection | The keys to the cabinet [is/are] | PLURAL | SINGULAR/PLURAL? | PLURAL |
| Grammaticality judgment | The keys to the cabinet are here. | GRAMMATICAL | GRAMMATICAL/UNGRAMMATICAL? | GRAMMATICAL |
| Language model | The keys to the cabinet | are | P(are) > P(is)? | True |

Table 1: Examples of the four training objectives and corresponding prediction tasks.

Verb inflection: This objective is similar to number prediction, with one difference: the network receives not only the words leading up to the verb, but also the singular form of the upcoming verb (e.g., writes). In practice, then, the network needs to decide between the singular and plural forms of a particular verb (writes or write). Having access to the semantics of the verb can help the network identify the noun that serves as its subject without using the syntactic subjecthood criteria. For example, in the following sentence:

(13) People from the capital often eat pizza.

only people is a plausible subject for eat; the network can use this information to infer that the correct form of the verb is eat rather than eats.
This objective is similar to the task that humans face during language production: after the speaker has decided to use a particular verb (e.g., write), he or she needs to decide whether its form will be write or writes (Levelt et al., 1999; Staub, 2009).
Grammaticality judgments: The previous objectives explicitly indicate the location in the sentence in which a verb can appear, giving the network a cue to syntactic clause boundaries. They also explicitly direct the network's attention to the number of the verb. As a form of weaker supervision, we experimented with a grammaticality judgment objective. In this scenario, the network is given a complete sentence, and is asked to judge whether or not it is grammatical.

To train the network, we made half of the examples in our training corpus ungrammatical by flipping the number of the verb.10 The network read the entire sentence and received a supervision signal at the end. This task is modeled after a common human data collection technique in linguistics (Schütze, 1996), although our training regime is of course very different to the training that humans are exposed to: humans rarely receive ungrammatical sentences labeled as such (Bowerman, 1988).

Language modeling (LM): Finally, we experimented with a word prediction objective, in which the model did not receive any grammatically relevant supervision (Elman, 1990; Elman, 1991). In this scenario, the goal of the network is to predict the next word at each point in every sentence. It receives unlabeled sentences and is not specifically instructed to attend to the number of the verb. In the network that implements this training scenario, RNN activation after each word is fed into a fully connected dense layer followed by a softmax layer over the entire vocabulary.

We evaluate the knowledge that the network has acquired about subject-verb number agreement using a task similar to the verb inflection task. To perform the task, we compare the probabilities that the model assigns to the two forms of the verb that in fact occurred in the corpus (e.g., write and writes), and select the form with the higher probability.11 As this task is not part of the network's training objective, and the model needs to allocate considerable resources to predicting each word in the sentence, we expect the LM to perform worse than the explicitly supervised objectives.
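A minimal sketch of this evaluation step follows; the tiny vocabulary and the `lm_logits` helper are hypothetical stand-ins for the trained LM and its softmax layer:

```python
import torch

vocab = {"is": 0, "are": 1}                  # hypothetical tiny vocabulary
def encode(word): return vocab[word]
def lm_logits(prefix_ids):                   # stand-in for the trained LM
    return torch.zeros(prefix_ids.shape[0], len(vocab))

def choose_form(prefix_ids, singular="is", plural="are"):
    """Pick the verb form to which the LM assigns the higher probability."""
    logits = lm_logits(torch.tensor([prefix_ids]))
    if logits[0, encode(plural)] > logits[0, encode(singular)]:
        return plural
    return singular

print(choose_form([5, 9, 2, 7]))  # counted correct if it matches the corpus form
```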
Results: When considering all agreement dependencies, all models achieved error rates below 7% (Figure 4a); as mentioned above, even the noun-only number prediction baselines achieved error rates below 5% on this task. At the same time, there were large differences in accuracy across training objectives. The verb inflection network performed slightly but significantly better than the number prediction one (0.8% compared to 0.83% errors), suggesting that the semantic information carried by the verb is moderately helpful. The grammaticality judgment objective performed somewhat worse, at 2.5% errors, but still outperformed the noun-only baselines by a large margin, showing the capacity of the LSTM architecture to learn syntactic dependencies even given fairly indirect evidence.

The worst performer was the language model. It made eight times as many errors as the original number prediction network (6.78% compared to 0.83%), and did substantially worse than the noun-only baselines (though recall that the noun-only baselines were still explicitly trained to predict verb number).

10In some sentences this will not in fact result in an ungrammatical sentence, e.g. with collective nouns such as group, which are compatible with both singular and plural verbs in some dialects of English (Huddleston and Pullum, 2002); those cases appear to be rare.
11One could also imagine performing the equivalent of the number prediction task by aggregating LM probability mass over all plural verbs and all singular verbs. This approach may be more severely affected by part-of-speech ambiguous words than the one we adopted; we leave the exploration of this approach to future work.
Figure 4: Alternative tasks and additional experiments: (a) overall error rate across tasks (note that the y-axis ends in 10%); (b) effect of count of attractors in homogeneous dependencies across training objectives; (c) comparison of the Google LM (Jozefowicz et al., 2016) to our LM and one of our supervised verb inflection systems, on a sample of sentences; (d) number prediction: effect of count of attractors using SRNs with standard training or LSTM with targeted training; (e) number prediction: difference in error rate between singular and plural subjects across RNN cell types. Error bars represent binomial 95% confidence intervals.
The differences across the networks are more striking when we focus on dependencies with agreement attractors (Figure 4b). Here, the language model does worse than chance in the most difficult cases, and only slightly better than the noun-only baselines. The worse-than-chance performance suggests that attractors actively confuse the networks rather than cause them to make a random decision. The other models degrade more gracefully with the number of agreement attractors; overall, the grammaticality judgment objective is somewhat more difficult than the number prediction and verb inflection ones. In summary, we conclude that while the LSTM is capable of learning syntax-sensitive agreement dependencies under various objectives, the language-modeling objective alone is not sufficient for learning such dependencies, and a more direct form of training signal is required.
Comparison to a large-scale language model: One objection to our language modeling result is that our LM faced a much harder objective than our other models (predicting a distribution over 10,000 vocabulary items is certainly harder than binary classification) but was equipped with the same capacity (50-dimensional hidden state and word vectors). Would the performance gap between the LM and the explicitly supervised models close if we increased the capacity of the LM?

We address this question using a very large publicly available LM (Jozefowicz et al., 2016), which we refer to as the Google LM.12 The Google LM represents the current state-of-the-art in language modeling: it is trained on a billion-word corpus (Chelba et al., 2013), with a vocabulary of 800,000 words. It is based on a two-layer LSTM with 8192 units in each layer, or more than 300 times as many units as our LM; at 1.04 billion parameters it has almost 2000 times as many parameters. It is a fine-tuned language model that achieves impressive perplexity scores on common benchmarks, requires a massive infrastructure for training, and pushes the boundaries of what's feasible with current hardware.

12 https://github.com/tensorflow/models/tree/master/lm_1b

We tested the Google LM with the methodology we used to test ours.13 Due to computational resource limitations, we did not evaluate it on the entire test set, but sampled a random selection of 500 sentences for each count of attractors (testing a single sentence under the Google LM takes around 5 seconds on average). The results are presented in Figure 4c, where they are compared to the performance of the supervised verb inflection system. Despite having an order of magnitude more parameters and significantly more training data, the Google LM performed poorly compared to the supervised models; even a single attractor led to a sharp increase in error rate to 28.5%, almost as high as our small-scale LM (32.6% on the same sentences). While additional attractors caused milder degradation than in our LM, the performance of the Google LM on sentences with four attractors was still worse than always guessing the majority class (SINGULAR).

In summary, our experiments with the Google LM do not change our conclusions: the contrast between the poor performance of the LMs and the strong performance of the explicitly supervised objectives suggests that direct supervision has a dramatic effect on the model's ability to learn syntax-sensitive dependencies. Given that the Google LM was already trained on several hundred times more data than the number prediction system, it appears unlikely that its relatively poor performance was due to lack of training data.
# 6 Additional Experiments
Comparison to simple recurrent networks: How much of the success of the network is due to the LSTM cells? We repeated the number prediction experiment with a simple recurrent network (SRN) (Elman, 1990), with the same number of hidden units. The SRN's performance was inferior to the LSTM's, but the average performance for a given number of agreement attractors does not suggest a qualitative difference between the cell types: the SRN makes about twice as many errors as the LSTM across the board (Figure 4d).

13One technical exception was that we did not replace low-frequency words with their part-of-speech, since the Google LM is a large-vocabulary language model, and does not have parts-of-speech as part of its vocabulary.

Training only on difficult dependencies: Only a small proportion of the dependencies in the corpus had agreement attractors (Figure 2e). Would the network generalize better if dependencies with intervening nouns were emphasized during training? We repeated our number prediction experiment, this time training the model only on dependencies with at least one intervening noun (of any number). We doubled the proportion of training sentences to 20%, since the total size of the corpus was smaller (226K dependencies).

This training regime resulted in a 27% decrease in error rate on dependencies with exactly one attractor (from 4.1% to 3.0%). This decrease is statistically significant, and encouraging given that the total number of dependencies in training was much lower, which complicates the learning of word embeddings. Error rates mildly decreased in dependencies with more attractors as well, suggesting some generalization (Figure 4d). Surprisingly, a similar experiment using the grammaticality judgment task led to a slight increase in error rate. While tentative at this point, these results suggest that oversampling difficult training cases may be beneficial; a curriculum progressing from easier to harder dependencies (Elman, 1993) may provide additional gains.
# 7 Error Analysis
Singular vs. plural subjects: Most of the nouns in English are singular: in our corpus, the fraction of singular subjects is 68%. Agreement attraction errors in humans are much more common when the attractor is plural than when it is singular (Bock and Miller, 1991; Eberhard et al., 2005). Do our models' error rates depend on the number of the subject?

As Figure 2b shows, our LSTM number prediction model makes somewhat more agreement attraction errors with plural than with singular attractors; the difference is statistically significant, but the asymmetry is much less pronounced than in humans. Interestingly, the SRN version of the model does show a large asymmetry, especially as the count of attractors increases; with four plural attractors the error rate reaches 60% (Figure 4e).

Qualitative analysis: We manually examined a sample of 200 cases in which the majority of the 20 runs of the number prediction network made the wrong prediction. There were only 8890 such dependencies (about 0.6%). Many of those were straightforward agreement attraction errors; others were difficult to interpret. We mention here three classes of errors that can motivate future experiments.

The networks often misidentified the heads of noun-noun compounds. In (17), for example, the models predict a singular verb even though the number of the subject conservation refugees should be determined by its head refugees. This suggests that the networks didn't master the structure of English noun-noun compounds.14

(17) Conservation refugees live in a world colored in shades of gray; limbo.

(18) Information technology (IT) assets commonly hold large volumes of confidential data.
Some verbs that are ambiguous with plural nouns seem to have been misanalyzed as plural nouns and consequently act as attractors. The models predicted a plural verb in the following two sentences even though neither of them has any plural nouns, possibly because of the ambiguous verbs drives and lands:
(19) The ship that the player drives has a very high speed.

(20) It was also to be used to learn if the area where the lander lands is typical of the surrounding terrain.

Other errors appear to be due to difficulty not in identifying the subject but in determining whether it is plural or singular. In Example (22), in particular, there is very little information in the left context of the subject 5 paragraphs suggesting that the writer considers it to be singular:

(21) Rabaul-based Japanese aircraft make three dive-bombing attacks.
14The dependencies are presented as they appeared in the corpus; the predicted number was the opposite of the correct one (e.g., singular in (17), where the original is plural).
(22) The lead is also rather long; 5 paragraphs is pretty lengthy for a 62 kilobyte article.

The last errors point to a limitation of the number prediction task, which jointly evaluates the model's ability to identify the subject and its ability to assign the correct number to noun phrases.
# 8 Related Work
The majority of NLP work on neural networks evaluates them on their performance in a task such as language modeling or machine translation (Sundermeyer et al., 2012; Bahdanau et al., 2015). These evaluation setups average over many different syntactic constructions, making it difficult to isolate the network's syntactic capabilities.

Other studies have tested the capabilities of RNNs to learn simple artificial languages. Gers and Schmidhuber (2001) showed that LSTMs can learn the context-free language a^n b^n, generalizing to ns as high as 1000 even when trained only on n ∈ {1, . . . , 10}. Simple recurrent networks struggled with this language (Rodriguez et al., 1999; Rodriguez, 2001). These results have been recently replicated and extended by Joulin and Mikolov (2015).

Elman (1991) tested an SRN on a miniature language that simulated English relative clauses, and found that the network was only able to learn the language under highly specific circumstances (Elman, 1993), though later work has called some of his conclusions into question (Rohde and Plaut, 1999; Cartling, 2008). Frank et al. (2013) studied the acquisition of anaphora coreference by SRNs, again in a miniature language. Recently, Bowman et al. (2015) tested the ability of LSTMs to learn an artificial language based on propositional logic. As in our study, the performance of the network degraded as the complexity of the test sentences increased.

Karpathy et al. (2016) present analyses and visualization methods for character-level RNNs. Kádár et al. (2016) and Li et al. (2016) suggest visualization techniques for word-level RNNs trained to perform tasks that aren't explicitly syntactic (image captioning and sentiment analysis).

Early work that used neural networks to model grammaticality judgments includes Allen and Seidenberg (1999) and Lawrence et al. (1996). More recently, the connection between grammaticality judgments and the probabilities assigned by a language model was explored by Clark et al. (2013) and Lau et al. (2015). Finally, arguments for evaluating NLP models on a strategically sampled set of dependency types rather than a random sample of sentences have been made in the parsing literature (Rimell et al., 2009; Nivre et al., 2010; Bender et al., 2011).
# 9 Discussion and Future Work
Neural network architectures are typically evaluated on random samples of naturally occurring sentences, e.g., using perplexity on held-out data in language modeling. Since the majority of natural language sentences are grammatically simple, models can achieve high overall accuracy using flawed heuristics that fail on harder cases. This makes it difficult to distinguish simple but robust sequence models from more expressive architectures (Socher, 2014; Grefenstette et al., 2015; Joulin and Mikolov, 2015). Our work suggests an alternative strategy: evaluation on naturally occurring sentences that are sampled based on their grammatical complexity, which can provide more nuanced tests of language models (Rimell et al., 2009; Bender et al., 2011).

This approach can be extended to the training stage: neural networks can be encouraged to develop more sophisticated generalizations by oversampling grammatically challenging training sentences. We took a first step in this direction when we trained the network only on dependencies with intervening nouns (Section 6). This training regime indeed improved the performance of the network; however, the improvement was quantitative rather than qualitative: there was limited generalization to dependencies that were even more difficult than those encountered in training. Further experiments are needed to establish the efficacy of this method.

A network that has acquired syntactic representations sophisticated enough to handle subject-verb agreement is likely to show improved performance on other structure-sensitive dependencies, including pronoun coreference, quantifier scope and negative polarity items. As such, neural models used in NLP applications may benefit from grammatically sophisticated sentence representations developed in a multi-task learning setup (Caruana, 1998), where the model is trained concurrently on the task of interest and on one of the tasks we proposed in this paper. Of course, grammatical phenomena differ from each other in many ways. The distribution of negative polarity items is highly sensitive to semantic factors (Giannakidou, 2011). Restrictions on unbounded dependencies (Ross, 1967) may require richer syntactic representations than those required for subject-verb dependencies. The extent to which the results of our study will generalize to other constructions and other languages, then, is a matter for empirical research.

Humans occasionally make agreement attraction mistakes during language production (Bock and Miller, 1991) and comprehension (Nicol et al., 1997). These errors persist in human acceptability judgments (Tanner et al., 2014), which parallel our grammaticality judgment task. Cases of grammatical agreement with the nearest rather than structurally relevant constituent have been documented in languages such as Slovenian (Marušič et al., 2007), and have even been argued to be occasionally grammatical in English (Zwicky, 2005). In future work, exploring the relationship between these cases and neural network predictions can shed light on the cognitive plausibility of those networks.
# 10 Conclusion
LSTMs are sequence models; they do not have built-in hierarchical representations. We have investigated how well they can learn subject-verb agreement, a phenomenon that crucially depends on hierarchical syntactic structure. When provided explicit supervision, LSTMs were able to learn to perform the verb-number agreement task in most cases, although their error rate increased on particularly difficult sentences. We conclude that LSTMs can learn to approximate structure-sensitive dependencies fairly well given explicit supervision, but more expressive architectures may be necessary to eliminate errors altogether. Finally, our results provide evidence that the language modeling objective is not by itself sufficient for learning structure-sensitive dependencies, and suggest that a joint training objective can be used to supplement language models on tasks for which syntax-sensitive dependencies are important.
# Acknowledgments
We thank Marco Baroni, Grzegorz Chrupała, Alexander Clark, Sol Lago, Paul Smolensky, Benjamin Spector and Roberto Zamparelli for comments and discussion. This research was supported by the European Research Council (grant ERC-2011-AdG 295810 BOOTPHON), the Agence Nationale pour la Recherche (grants ANR-10-IDEX-0001-02 PSL and ANR-10-LABX-0087 IEC) and the Israeli Science Foundation (grant number 1555/15).
# References
Joseph Allen and Mark S. Seidenberg. 1999. The emergence of grammaticality in connectionist networks. In Brian MacWhinney, editor, Emergentist approaches to language: Proceedings of the 28th Carnegie symposium on cognition, pages 115–151. Mahwah, NJ: Erlbaum.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations.

Emily M. Bender, Dan Flickinger, Stephan Oepen, and Yi Zhang. 2011. Parser evaluation over local and non-local deep dependencies in a large corpus. In Proceedings of EMNLP, pages 397–408.

Kathryn Bock and Carol A. Miller. 1991. Broken agreement. Cognitive Psychology, 23(1):45–93.

Melissa Bowerman. 1988. The "no negative evidence" problem: How do children avoid constructing an overly general grammar? In John A. Hawkins, editor, Explaining language universals, pages 73–101. Oxford: Basil Blackwell.

Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. 2015. Tree-structured composition in neural networks without tree-structured architectures. In Proceedings of the NIPS Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches.

Bo Cartling. 2008. On the implicit acquisition of a context-free grammar by a simple recurrent neural network. Neurocomputing, 71(7):1527–1537.

Rich Caruana. 1998. Multitask learning. In Sebastian Thrun and Lorien Pratt, editors, Learning to learn, pages 95–133. Boston: Kluwer.

Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP, pages 1724–1734.

Noam Chomsky. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.

Alexander Clark, Gianluca Giorgolo, and Shalom Lappin. 2013. Statistical representation of grammaticality judgements: The limits of n-gram models. In Proceedings of the Fourth Annual Workshop on Cognitive Modeling and Computational Linguistics (CMCL), pages 28–36.

Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of NAACL/HLT, pages 199–209.

Kathleen M. Eberhard, J. Cooper Cutting, and Kathryn Bock. 2005. Making syntax of sense: Number agreement in sentence production. Psychological Review, 112(3):531–559.

Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211.

Jeffrey L. Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2-3):195–225.

Jeffrey L. Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71–99.

Martin B. H. Everaert, Marinus A. C. Huybregts, Noam Chomsky, Robert C. Berwick, and Johan J. Bolhuis. 2015. Structures, not strings: Linguistics as part of the cognitive sciences. Trends in Cognitive Sciences, 19(12):729–743.

Robert Frank, Donald Mathis, and William Badecker. 2013. The acquisition of anaphora by simple recurrent networks. Language Acquisition, 20(3):181–227.

Felix Gers and Jürgen Schmidhuber. 2001. LSTM recurrent networks learn simple context-free and context-sensitive languages. IEEE Transactions on Neural Networks, 12(6):1333–1340.

Anastasia Giannakidou. 2011. Negative and positive polarity items: Variation, licensing, and compositionality. In Claudia Maienborn, Klaus von Heusinger, and Paul Portner, editors, Semantics: An international handbook of natural language meaning. Berlin: Mouton de Gruyter.

Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proceedings of COLING 2012, pages 959–976.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1828–1836.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Rodney Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press, Cambridge.

Armand Joulin and Tomas Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410.

Ákos Kádár, Grzegorz Chrupała, and Afra Alishahi. 2016. Representation of linguistic form and function in recurrent neural networks. arXiv preprint arXiv:1602.08952.

Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2016. Visualizing and understanding recurrent networks.

Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.

Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313–327.

Jey Han Lau, Alexander Clark, and Shalom Lappin. 2015. Unsupervised prediction of acceptability judgements. In Proceedings of ACL/IJCNLP, pages 1618–1628.

Steve Lawrence, C. Lee Giles, and Sandiway Fong. 1996. Can recurrent neural networks learn natural language grammars? In IEEE International Conference on Neural Networks, volume 4, pages 1853–1858.

Willem J. M. Levelt, Ardi Roelofs, and Antje S. Meyer. 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences, 22(1):1–75.

Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of NAACL-HLT 2016, pages 681–691.

Franc Marušič, Andrew Nevins, and Amanda Saksida. 2007. Last-conjunct agreement in Slovenian. In Annual Workshop on Formal Approaches to Slavic Linguistics, pages 210–227.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048.

Janet L. Nicol, Kenneth I. Forster, and Csaba Veres. 1997. Subject-verb agreement processes in comprehension. Journal of Memory and Language, 36(4):569–587.

Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos Gomez-Rodriguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 833–841. Association for Computational Linguistics.

Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of EMNLP, pages 813–821.

Paul Rodriguez, Janet Wiles, and Jeffrey L. Elman. 1999. A recurrent neural network that learns to count. Connection Science, 11(1):5–40.

Paul Rodriguez. 2001. Simple recurrent networks learn context-free and context-sensitive languages by counting. Neural Computation, 13(9):2093–2118.

Douglas L. T. Rohde and David C. Plaut. 1999. Language acquisition in the absence of explicit negative evidence: How important is starting small? Cognition, 72(1):67–109.

John Robert Ross. 1967. Constraints on variables in syntax. Ph.D. thesis, MIT.

Carson T. Schütze. 1996. The empirical base of linguistics: Grammaticality judgments and linguistic methodology. Chicago, IL: University of Chicago Press.

Richard Socher. 2014. Recursive Deep Learning for Natural Language Processing and Computer Vision. Ph.D. thesis, Stanford University.

Adrian Staub. 2009. On the interpretation of the number attraction effect: Response time evidence. Journal of Memory and Language, 60(2):308–327.

Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In INTERSPEECH.

Darren Tanner, Janet Nicol, and Laurel Brehm. 2014. The time-course of feature interference in agreement comprehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76:195–215.

Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755–2763.

Arnold Zwicky. 2005. Agreement with nearest always bad? http://itre.cis.upenn.edu/~myl/languagelog/archives/001846.html.
Figure 5: Activation plots for all units (see Figure 3a and the accompanying text).
Published as a conference paper at ICLR 2017
# SAMPLE EFFICIENT ACTOR-CRITIC WITH EXPERIENCE REPLAY
Ziyu Wang DeepMind ziyu@google.com
Victor Bapst DeepMind vbapst@google.com
Nicolas Heess DeepMind heess@google.com
Volodymyr Mnih DeepMind vmnih@google.com
Remi Munos DeepMind Munos@google.com
Koray Kavukcuoglu DeepMind korayk@google.com
Nando de Freitas DeepMind, CIFAR, Oxford University nandodefreitas@google.com
# ABSTRACT
This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.

# 1 INTRODUCTION

Realistic simulated environments, where agents can be trained to learn a large repertoire of cognitive skills, are at the core of recent breakthroughs in AI (Bellemare et al., 2013; Mnih et al., 2015; Schulman et al., 2015a; Narasimhan et al., 2015; Mnih et al., 2016; Brockman et al., 2016; Oh et al., 2016). With richer realistic environments, the capabilities of our agents have increased and improved. Unfortunately, these advances have been accompanied by a substantial increase in the cost of simulation. In particular, every time an agent acts upon the environment, an expensive simulation step is conducted. Thus to reduce the cost of simulation, we need to reduce the number of simulation steps (i.e. samples of the environment). This need for sample efficiency is even more compelling when agents are deployed in the real world.

Experience replay (Lin, 1992) has gained popularity in deep Q-learning (Mnih et al., 2015; Schaul et al., 2016; Wang et al., 2016; Narasimhan et al., 2015), where it is often motivated as a technique for reducing sample correlation. Replay is actually a valuable tool for improving sample efficiency and, as we will see in our experiments, state-of-the-art deep Q-learning methods (Schaul et al., 2016; Wang et al., 2016) have been up to this point the most sample efficient techniques on Atari by a significant margin. However, we need to do better than deep Q-learning, because it has two important limitations. First, the deterministic nature of the optimal policy limits its use in adversarial domains. Second, finding the greedy action with respect to the Q function is costly for large action spaces.

Policy gradient methods have been at the heart of significant advances in AI and robotics (Silver et al., 2014; Lillicrap et al., 2015; Silver et al., 2016; Levine et al., 2015; Mnih et al., 2016; Schulman et al., 2015a; Heess et al., 2015). Many of these methods are restricted to continuous domains or to very specific tasks such as playing Go. The existing variants applicable to both continuous and discrete domains, such as the on-policy asynchronous advantage actor critic (A3C) of Mnih et al. (2016), are sample inefficient.

The design of stable, sample efficient actor critic methods that apply to both continuous and discrete action spaces has been a long-standing hurdle of reinforcement learning (RL). We believe this paper
is the first to address this challenge successfully at scale. More specifically, we introduce an actor critic with experience replay (ACER) that nearly matches the state-of-the-art performance of deep Q-networks with prioritized replay on Atari, and substantially outperforms A3C in terms of sample efficiency on both Atari and continuous control domains.

ACER capitalizes on recent advances in deep neural networks, variance reduction techniques, the off-policy Retrace algorithm (Munos et al., 2016) and parallel training of RL agents (Mnih et al., 2016). Yet, crucially, its success hinges on innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling network architectures, and efficient trust region policy optimization.
On the theoretical front, the paper proves that the Retrace operator can be rewritten from our proposed truncated importance sampling with bias correction technique.
# 2 BACKGROUND AND PROBLEM SETUP
Consider an agent interacting with its environment over discrete time steps. At time step t, the agent observes the n_x-dimensional state vector x_t ∈ X ⊆ R^{n_x}, chooses an action a_t according to a policy π(a | x_t) and observes a reward signal r_t ∈ R produced by the environment. We will consider discrete actions a_t ∈ {1, 2, . . . , N_a} in Sections 3 and 4, and continuous actions a_t ∈ A ⊆ R^{n_a} in Section 5. The goal of the agent is to maximize the discounted return R_t = Σ_{i≥0} γ^i r_{t+i} in expectation. The discount factor γ ∈ [0, 1) trades off the importance of immediate and future rewards. For an agent following policy π, we use the standard definitions of the state-action and state-only value functions:

Q^π(x_t, a_t) = E_{x_{t+1:∞}, a_{t+1:∞}}[R_t | x_t, a_t]   and   V^π(x_t) = E_{a_t}[Q^π(x_t, a_t) | x_t].
Here, the expectations are with respect to the observed environment states x_t and the actions generated by the policy π, where x_{t+1:∞} denotes a state trajectory starting at time t + 1. We also need to define the advantage function A^π(x_t, a_t) = Q^π(x_t, a_t) − V^π(x_t), which provides a relative measure of the value of each action since E_{a_t}[A^π(x_t, a_t)] = 0.

The parameters θ of the differentiable policy π_θ(a_t | x_t) can be updated using the discounted approximation to the policy gradient (Sutton et al., 2000), which, borrowing notation from Schulman et al. (2015b), is defined as:
g = E_{x_{0:∞}, a_{0:∞}}[ Σ_{t≥0} A^π(x_t, a_t) ∇_θ log π_θ(a_t | x_t) ] .   (1)
Following Proposition 1 of Schulman et al. (2015b), we can replace AÏ(xt, at) in the above expression with the state-action value QÏ(xt, at), the discounted return Rt, or the temporal difference residual V Ï(xt), without introducing bias. These choices will however have different rt + γV Ï(xt+1) variance. Moreover, in practice we will approximate these quantities with neural networks thus introducing additional approximation errors and biases. Typically, the policy gradient estimator using Rt will have higher variance and lower bias whereas the estimators using function approximation will have higher bias and lower variance. Combining Rt with the current value function approximation to minimize bias while maintaining bounded variance is one of the central design principles behind ACER.
To trade off bias and variance, the asynchronous advantage actor critic (A3C) of Mnih et al. (2016) uses a single trajectory sample to obtain the following gradient approximation:

$$\hat{g}^{a3c} = \sum_{t \ge 0} \left( \left( \sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V^\pi_{\theta_v}(x_{t+k}) - V^\pi_{\theta_v}(x_t) \right) \nabla_\theta \log \pi_\theta(a_t|x_t) \right). \tag{2}$$
A3C combines both k-step returns and function approximation to trade off variance and bias. We may think of $V^\pi_{\theta_v}(x_t)$ in equation (2) as a baseline that reduces the variance of the gradient estimate.
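To make equation (2) concrete, the following minimal sketch (an illustration in Python, not the authors' implementation) computes the baseline-subtracted k-step targets that multiply the score function; the function name and the list-based interface are assumptions.

```python
import numpy as np

def a3c_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """k-step advantages for eq. (2): sum_i gamma^i r_{t+i} + gamma^k V(x_{t+k}) - V(x_t).

    rewards: [r_t, ..., r_{t+k-1}]; values: [V(x_t), ..., V(x_{t+k-1})];
    bootstrap_value: V(x_{t+k}), taken to be 0 if x_{t+k} is terminal.
    """
    k = len(rewards)
    advantages = np.zeros(k)
    ret = bootstrap_value
    for t in reversed(range(k)):
        ret = rewards[t] + gamma * ret       # discounted k-step return from t
        advantages[t] = ret - values[t]      # subtract the baseline V(x_t)
    return advantages
```

Each `advantages[t]` would then scale the gradient of $\log \pi_\theta(a_t|x_t)$ in the outer sum of equation (2).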
In the following section, we will introduce the discrete-action version of ACER. ACER may be understood as the off-policy counterpart of the A3C method of Mnih et al. (2016). As such, ACER builds on all the engineering innovations of A3C, including efficient parallel CPU computation.
ACER uses a single deep neural network to estimate the policy $\pi_\theta(a_t|x_t)$ and the value function $V^\pi_{\theta_v}(x_t)$. (For clarity and generality, we are using two different symbols to denote the parameters of the policy and value function, $\theta$ and $\theta_v$, but most of these parameters are shared in the single neural network.) Our neural networks, though building on the networks used in A3C, will introduce several modifications and new modules.
# 3 DISCRETE ACTOR CRITIC WITH EXPERIENCE REPLAY
Off-policy learning with experience replay may appear to be an obvious strategy for improving the sample efficiency of actor-critics. However, controlling the variance and stability of off-policy estimators is notoriously hard. Importance sampling is one of the most popular approaches for off-policy learning (Meuleau et al., 2000; Jie & Abbeel, 2010; Levine & Koltun, 2013). In our context, it proceeds as follows. Suppose we retrieve a trajectory $\{x_0, a_0, r_0, \mu(\cdot|x_0), \ldots, x_k, a_k, r_k, \mu(\cdot|x_k)\}$, where the actions have been sampled according to the behavior policy $\mu$, from our memory of experiences. Then, the importance weighted policy gradient is given by:
$$\hat{g}^{imp} = \left( \prod_{t=0}^{k} \rho_t \right) \sum_{t=0}^{k} \left( \sum_{i=0}^{k} \gamma^i r_{t+i} \right) \nabla_\theta \log \pi_\theta(a_t|x_t), \tag{3}$$
where $\rho_t = \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)}$ denotes the importance weight. This estimator is unbiased, but it suffers from very high variance as it involves a product of many potentially unbounded importance weights. To prevent the product of importance weights from exploding, Wawrzyński (2009) truncates this product. Truncated importance sampling over entire trajectories, although bounded in variance, could suffer from significant bias.
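The variance problem is easy to see numerically. The sketch below (an assumption-laden illustration, not from the paper) forms the product of per-step ratios from equation (3) and its truncated variant:

```python
import numpy as np

def trajectory_importance_weight(pi_probs, mu_probs, clip=None):
    """Product of per-step ratios rho_t = pi(a_t|x_t)/mu(a_t|x_t) over a trajectory.

    clip=None gives the unbiased but high-variance weight of eq. (3);
    a finite clip truncates the product as in Wawrzynski (2009)."""
    ratios = np.asarray(pi_probs) / np.asarray(mu_probs)
    weight = float(np.prod(ratios))
    return weight if clip is None else min(clip, weight)

rng = np.random.default_rng(0)
pi = rng.uniform(0.4, 0.6, size=100)   # per-step probabilities under pi
mu = rng.uniform(0.4, 0.6, size=100)   # per-step probabilities under mu
print(trajectory_importance_weight(pi, mu))             # drifts far from 1
print(trajectory_importance_weight(pi, mu, clip=10.0))  # bounded, but biased
```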
Recently, Degris et al. (2012) attacked this problem by using marginal value functions over the limiting distribution of the process to yield the following approximation of the gradient:
$$g^{marg} = \mathbb{E}_{x_t \sim \beta, a_t \sim \mu} \left[ \rho_t \nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t) \right], \tag{4}$$
where $\mathbb{E}_{x_t \sim \beta, a_t \sim \mu}[\cdot]$ is the expectation with respect to the limiting distribution $\beta(x) = \lim_{t \to \infty} P(x_t = x \mid x_0, \mu)$ under the behavior policy $\mu$. To keep the notation succinct, we will replace $\mathbb{E}_{x_t \sim \beta, a_t \sim \mu}[\cdot]$ with $\mathbb{E}_{x_t a_t}[\cdot]$ and ensure we remind readers of this when necessary.

Two important facts about equation (4) must be highlighted. First, note that it depends on $Q^\pi$ and not on $Q^\mu$; consequently, we must be able to estimate $Q^\pi$. Second, we no longer have a product of importance weights, but instead only need to estimate the marginal importance weight $\rho_t$. Importance sampling in this lower dimensional space (over marginals as opposed to trajectories) is expected to exhibit lower variance.
Degris et al. (2012) estimate $Q^\pi$ in equation (4) using lambda returns: $R^\lambda_t = r_t + (1 - \lambda)\gamma V(x_{t+1}) + \lambda \gamma \rho_{t+1} R^\lambda_{t+1}$. This estimator requires that we know how to choose $\lambda$ ahead of time to trade off bias and variance. Moreover, when using small values of $\lambda$ to reduce variance, occasional large importance weights can still cause instability.
In the following subsection, we adopt the Retrace algorithm of Munos et al. (2016) to estimate $Q^\pi$. Subsequently, we propose an importance weight truncation technique to improve the stability of the off-policy actor critic of Degris et al. (2012), and introduce a computationally efficient trust region scheme for policy optimization. The formulation of ACER for continuous action spaces will require further innovations that are advanced in Section 5.
3.1 MULTI-STEP ESTIMATION OF THE STATE-ACTION VALUE FUNCTION
In this paper, we estimate $Q^\pi(x_t, a_t)$ using Retrace (Munos et al., 2016). (We also experimented with the related tree backup method of Precup et al. (2000) but found Retrace to perform better in practice.) Given a trajectory generated under the behavior policy $\mu$, the Retrace estimator can be expressed recursively as follows¹:
$$Q^{ret}(x_t, a_t) = r_t + \gamma \bar{\rho}_{t+1} \left[ Q^{ret}(x_{t+1}, a_{t+1}) - Q(x_{t+1}, a_{t+1}) \right] + \gamma V(x_{t+1}), \tag{5}$$

¹For ease of presentation, we consider only $\lambda = 1$ for Retrace.
where $\bar{\rho}_t$ is the truncated importance weight, $\bar{\rho}_t = \min\{c, \rho_t\}$ with $\rho_t = \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)}$, $Q$ is the current value estimate of $Q^\pi$, and $V(x) = \mathbb{E}_{a \sim \pi} Q(x, a)$. Retrace is an off-policy, return-based algorithm which has low variance and is proven to converge (in the tabular case) to the value function of the target policy for any behavior policy; see Munos et al. (2016).
The recursive Retrace equation depends on the estimate $Q$. To compute it, in discrete action spaces, we adopt a convolutional neural network with "two heads" that outputs the estimate $Q_{\theta_v}(x_t, a_t)$, as well as the policy $\pi_\theta(a_t|x_t)$. This neural representation is the same as in (Mnih et al., 2016), with the exception that we output the vector $Q_{\theta_v}(x_t, \cdot)$ instead of the scalar $V_{\theta_v}(x_t)$. The estimate $V_{\theta_v}(x_t)$ can be easily derived by taking the expectation of $Q_{\theta_v}$ under $\pi_\theta$.

To approximate the policy gradient $g^{marg}$, ACER uses $Q^{ret}$ to estimate $Q^\pi$. As Retrace uses multi-step returns, it can significantly reduce bias in the estimation of the policy gradient². To learn the critic $Q_{\theta_v}(x_t, a_t)$, we again use $Q^{ret}(x_t, a_t)$ as a target in a mean squared error loss and update its parameters $\theta_v$ with the following standard gradient:
$$\left( Q^{ret}(x_t, a_t) - Q_{\theta_v}(x_t, a_t) \right) \nabla_{\theta_v} Q_{\theta_v}(x_t, a_t). \tag{6}$$
Because Retrace is return-based, it also enables faster learning of the critic. Thus the purpose of the multi-step estimator Qret in our setting is twofold: to reduce bias in the policy gradient, and to enable faster learning of the critic, hence further reducing bias.
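The backward recursion of equation (5) is straightforward to implement along a sampled trajectory. The sketch below is a Python illustration under assumed array inputs; it follows the form used in the pseudo-code of Appendix A:

```python
import numpy as np

def retrace_targets(rewards, q_values, v_values, rhos, bootstrap_v,
                    gamma=0.99, c=1.0):
    """Q^ret targets from eq. (5) for one trajectory of length k.

    rewards[t] = r_t, q_values[t] = Q(x_t, a_t), v_values[t] = V(x_t),
    rhos[t] = pi(a_t|x_t)/mu(a_t|x_t); bootstrap_v = V(x_k), 0 if terminal.
    """
    k = len(rewards)
    targets = np.zeros(k)
    q_ret = bootstrap_v
    for t in reversed(range(k)):
        q_ret = rewards[t] + gamma * q_ret           # Q^ret(x_t, a_t)
        targets[t] = q_ret
        rho_bar = min(c, rhos[t])                    # truncated importance weight
        # fold gamma*rhobar_t [Q^ret - Q](x_t,a_t) + gamma*V(x_t) into the next step
        q_ret = rho_bar * (q_ret - q_values[t]) + v_values[t]
    return targets
```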
3.2 IMPORTANCE WEIGHT TRUNCATION WITH BIAS CORRECTION
The marginal importance weights in Equation (4) can become large, thus causing instability. To safeguard against high variance, we propose to truncate the importance weights and introduce a correction term via the following decomposition of $g^{marg}$:

$$g^{marg} = \mathbb{E}_{x_t a_t} \left[ \rho_t \nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t) \right] = \mathbb{E}_{x_t} \left[ \mathbb{E}_{a_t} \left[ \bar{\rho}_t \nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t) \right] + \mathop{\mathbb{E}}_{a \sim \pi} \left( \left[ \frac{\rho_t(a) - c}{\rho_t(a)} \right]_+ \nabla_\theta \log \pi_\theta(a|x_t) Q^\pi(x_t, a) \right) \right], \tag{7}$$

where $\bar{\rho}_t = \min\{c, \rho_t\}$ with $\rho_t = \rho_t(a_t)$, $\rho_t(a) = \frac{\pi(a|x_t)}{\mu(a|x_t)}$, and $[x]_+ = x$ if $x > 0$ and zero otherwise. The expectations are with respect to the limiting state distribution under the behavior policy: $x_t \sim \beta$ and $a_t \sim \mu$.

The clipping of the importance weight in the first term of equation (7) ensures that the variance of the gradient estimate is bounded. The correction term (second term in equation (7)) ensures that our estimate is unbiased. Note that the correction term is only active for actions such that $\rho_t(a) > c$. In particular, if we choose a large value for $c$, the correction term only comes into effect when the variance of the original off-policy estimator of equation (4) is very high. When this happens, our decomposition has the nice property that the truncated weight in the first term is at most $c$ while the correction weight $\left[ \frac{\rho_t(a) - c}{\rho_t(a)} \right]_+$ in the second term is at most 1.
We model $Q^\pi(x_t, a)$ in the correction term with our neural network approximation $Q_{\theta_v}(x_t, a)$. This modification results in what we call the truncation with bias correction trick, in this case applied to the function $\nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t)$:

$$\hat{g}^{marg} = \mathbb{E}_{x_t} \left[ \mathbb{E}_{a_t} \left[ \bar{\rho}_t \nabla_\theta \log \pi_\theta(a_t|x_t) Q^\pi(x_t, a_t) \right] + \mathop{\mathbb{E}}_{a \sim \pi} \left( \left[ \frac{\rho_t(a) - c}{\rho_t(a)} \right]_+ \nabla_\theta \log \pi_\theta(a|x_t) Q_{\theta_v}(x_t, a) \right) \right]. \tag{8}$$
Equation (8) involves an expectation over the stationary distribution of the Markov process. We can, however, approximate it by sampling trajectories $\{x_0, a_0, r_0, \mu(\cdot|x_0), \ldots, x_k, a_k, r_k, \mu(\cdot|x_k)\}$

²An alternative to Retrace here is $Q(\lambda)$ with off-policy corrections (Harutyunyan et al., 2016), which we discuss in more detail in Appendix B.
generated from the behavior policy $\mu$. Here the terms $\mu(\cdot|x_t)$ are the policy vectors. Given these trajectories, we can compute the off-policy ACER gradient:

$$\hat{g}^{acer}_t = \bar{\rho}_t \nabla_\theta \log \pi_\theta(a_t|x_t) \left[ Q^{ret}(x_t, a_t) - V_{\theta_v}(x_t) \right] + \mathop{\mathbb{E}}_{a \sim \pi} \left( \left[ \frac{\rho_t(a) - c}{\rho_t(a)} \right]_+ \nabla_\theta \log \pi_\theta(a|x_t) \left[ Q_{\theta_v}(x_t, a) - V_{\theta_v}(x_t) \right] \right). \tag{9}$$

In the above expression, we have subtracted the classical baseline $V_{\theta_v}(x_t)$ to reduce variance.

It is interesting to note that, when $c = \infty$, (9) recovers (off-policy) policy gradient up to the use of Retrace. When $c = 0$, (9) recovers an actor critic update that depends entirely on $Q$ estimates. In the continuous control domain, (9) also generalizes Stochastic Value Gradients if $c = 0$ and the reparametrization trick is used to estimate its second term (Heess et al., 2015).
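For discrete actions the correction expectation in equation (9) can be computed exactly as a sum over actions. The following sketch (hypothetical function names, Python) returns the scalar coefficients that would multiply the score-function gradients:

```python
import numpy as np

def acer_gradient_coefficients(pi, mu, a, q_ret, q_values, c=10.0):
    """Coefficients of the two terms in eq. (9) at one state x_t.

    pi, mu:   policy and behavior probability vectors over actions
    a:        index of the taken action a_t
    q_ret:    Retrace target Q^ret(x_t, a_t)
    q_values: critic outputs Q_theta_v(x_t, .) over actions
    Returns (coef_taken, coef_all): coef_taken multiplies grad log pi(a_t|x_t);
    coef_all[b] multiplies grad log pi(b|x_t) in the exact correction sum.
    """
    v = float(np.dot(pi, q_values))                # baseline V(x_t) = E_pi[Q]
    rho = pi / mu                                  # marginal importance weights
    coef_taken = min(c, rho[a]) * (q_ret - v)      # truncated first term
    correction = np.maximum(0.0, (rho - c) / rho)  # [(rho - c)/rho]_+
    coef_all = pi * correction * (q_values - v)    # E_{a~pi} written as a sum
    return coef_taken, coef_all
```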
3.3 EFFICIENT TRUST REGION POLICY OPTIMIZATION
The policy updates of actor-critic methods often exhibit high variance. Hence, to ensure stability, we must limit the per-step changes to the policy. Simply using smaller learning rates is insufficient, as they cannot guard against the occasional large updates while maintaining a desired learning speed. Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a) provides a more adequate solution.
Schulman et al. (2015a) approximately limit the difference between the updated policy and the current policy to ensure safety. Despite the effectiveness of their TRPO method, it requires repeated computation of Fisher-vector products for each update. This can prove to be prohibitively expensive in large domains.
In this section we introduce a new trust region policy optimization method that scales well to large problems. Instead of constraining the updated policy to be close to the current policy (as in TRPO), we propose to maintain an average policy network that represents a running average of past policies and forces the updated policy to not deviate far from this average.
We decompose our policy network in two parts: a distribution $f$, and a deep neural network that generates the statistics $\phi_\theta(x)$ of this distribution. That is, given $f$, the policy is completely characterized by the network $\phi_\theta$: $\pi(\cdot|x) = f(\cdot|\phi_\theta(x))$. For example, in the discrete domain, we choose $f$ to be the categorical distribution with a probability vector $\phi_\theta(x)$ as its statistics. The probability vector is of course parameterised by $\theta$.
We denote the average policy network as $\phi_{\theta_a}$ and update its parameters $\theta_a$ "softly" after each update to the policy parameter $\theta$: $\theta_a \leftarrow \alpha \theta_a + (1 - \alpha)\theta$.

Consider, for example, the ACER policy gradient as defined in Equation (9), but with respect to $\phi$:

$$\hat{g}^{acer}_t = \bar{\rho}_t \nabla_{\phi_\theta(x_t)} \log f(a_t|\phi_\theta(x_t)) \left[ Q^{ret}(x_t, a_t) - V_{\theta_v}(x_t) \right] + \mathop{\mathbb{E}}_{a \sim \pi} \left( \left[ \frac{\rho_t(a) - c}{\rho_t(a)} \right]_+ \nabla_{\phi_\theta(x_t)} \log f(a|\phi_\theta(x_t)) \left[ Q_{\theta_v}(x_t, a) - V_{\theta_v}(x_t) \right] \right). \tag{10}$$
Given the averaged policy network, our proposed trust region update involves two stages. In the first stage, we solve the following optimization problem with a linearized KL divergence constraint:

$$\begin{aligned} \underset{z}{\text{minimize}} \quad & \frac{1}{2} \left\| \hat{g}^{acer}_t - z \right\|_2^2 \\ \text{subject to} \quad & \nabla_{\phi_\theta(x_t)} D_{KL} \left[ f(\cdot|\phi_{\theta_a}(x_t)) \,\|\, f(\cdot|\phi_\theta(x_t)) \right]^T z \le \delta \end{aligned} \tag{11}$$

Since the constraint is linear, the overall optimization problem reduces to a simple quadratic programming problem, the solution of which can be easily derived in closed form using the KKT conditions. Letting $k = \nabla_{\phi_\theta(x_t)} D_{KL} \left[ f(\cdot|\phi_{\theta_a}(x_t)) \,\|\, f(\cdot|\phi_\theta(x_t)) \right]$, the solution is:

$$z^* = \hat{g}^{acer}_t - \max \left\{ 0, \frac{k^T \hat{g}^{acer}_t - \delta}{\|k\|_2^2} \right\} k. \tag{12}$$

This transformation of the gradient has a very natural form. If the constraint is satisfied, there is no change to the gradient with respect to $\phi_\theta(x_t)$. Otherwise, the update is scaled down in the direction
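Because the constraint is linear, the projection of equation (12) is a one-liner. The sketch below is a direct Python transcription:

```python
import numpy as np

def trust_region_step(g, k, delta):
    """Closed-form solution of eq. (12): z* = g - max(0, (k.g - delta)/||k||^2) k.

    g is the ACER gradient with respect to the statistics phi_theta(x_t),
    and k is the gradient of the KL to the average policy network."""
    kg = float(np.dot(k, g))
    scale = max(0.0, (kg - delta) / float(np.dot(k, k)))
    return g - scale * k
```

When the constraint is already satisfied ($k^T g \le \delta$), the scale is zero and the gradient passes through unchanged.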
[Figure 1: two panels of median human-normalized score versus million environment steps (LEFT) and training time (RIGHT); legend: 1 on-policy + {0, 1, 4, 8} replay (A3C/ACER), DQN, Prioritized Replay.]
Figure 1: ACER improvements in sample (LEFT) and computation (RIGHT) complexity on Atari. On each plot, the median of the human-normalized score across all 57 Atari games is presented for 4 ratios of replay with 0 replay corresponding to on-policy A3C. The colored solid and dashed lines represent ACER with and without trust region updating respectively. The environment steps are counted over all threads. The gray curve is the original DQN agent (Mnih et al., 2015) and the black curve is one of the Prioritized Double DQN agents from Schaul et al. (2016).
of $k$, thus effectively lowering the rate of change between the activations of the current policy and the average policy network.
In the second stage, we take advantage of back-propagation. Specifically, the updated gradient with respect to $\phi_\theta$, that is $z^*$, is back-propagated through the network to compute the derivatives with respect to the parameters. The parameter updates for the policy network follow from the chain rule: $\frac{\partial \phi_\theta(x)}{\partial \theta} z^*$.
The trust region step is carried out in the space of the statistics of the distribution f , and not in the space of the policy parameters. This is done deliberately so as to avoid an additional back-propagation step through the policy network.
We would like to remark that the algorithm advanced in this section can be thought of as a general strategy for modifying the backward messages in back-propagation so as to stabilize the activations.
Instead of a trust region update, one could alternatively add an appropriately scaled KL cost to the objective function as proposed by Heess et al. (2015). This approach, however, is less robust to the choice of hyper-parameters in our experience.
The ACER algorithm results from a combination of the above ideas, with the precise pseudo-code appearing in Appendix A. A master algorithm (Algorithm 1) calls ACER on-policy to perform updates and propose trajectories. It then calls the ACER off-policy component to conduct several replay steps. When on-policy, ACER effectively becomes a modified version of A3C where Q instead of V baselines are employed and trust region optimization is used.
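The master algorithm's control flow can be sketched in a few lines; the callables standing in for the on- and off-policy components of Algorithm 2 are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def acer_master(n_iterations, replay_ratio, acer_on_policy, acer_off_policy):
    """Algorithm 1: one on-policy call, then Poisson(replay_ratio) replay calls."""
    for _ in range(n_iterations):
        acer_on_policy()                          # generate a trajectory and update
        for _ in range(rng.poisson(replay_ratio)):
            acer_off_policy()                     # replay (off-policy) updates
```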
# 4 RESULTS ON ATARI
We use the Arcade Learning Environment of Bellemare et al. (2013) to conduct an extensive evaluation. We deploy one single algorithm and network architecture, with fixed hyper-parameters, to learn to play 57 Atari games given only raw pixel observations and game rewards. This task is highly demanding because of the diversity of games and the high-dimensional pixel-level observations.
Our experimental setup uses 16 actor-learner threads running on a single machine with no GPUs. We adopt the same input pre-processing and network architecture as Mnih et al. (2015). Specifically, the network consists of a convolutional layer with 32 8×8 filters with stride 4, followed by another convolutional layer with 64 4×4 filters with stride 2, followed by a final convolutional layer with 64 3×3 filters with stride 1, followed by a fully-connected layer of size 512. Each of the hidden layers is followed by a rectifier nonlinearity. The network outputs a softmax policy and Q values.
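For concreteness, a minimal PyTorch sketch of this two-headed network is given below; the class name, the 4-frame 84×84 input shape (the standard Mnih et al. (2015) preprocessing), and the framework choice are assumptions, not details taken from the paper:

```python
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadedAtariNet(nn.Module):
    """Shared conv trunk with a softmax policy head and a Q-value head."""
    def __init__(self, num_actions):
        super().__init__()
        self.conv1 = nn.Conv2d(4, 32, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2)
        self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1)
        self.fc = nn.Linear(64 * 7 * 7, 512)
        self.policy_head = nn.Linear(512, num_actions)  # softmax policy
        self.q_head = nn.Linear(512, num_actions)       # Q(x, .) vector

    def forward(self, x):                 # x: (batch, 4, 84, 84)
        h = F.relu(self.conv1(x))
        h = F.relu(self.conv2(h))
        h = F.relu(self.conv3(h))
        h = F.relu(self.fc(h.flatten(start_dim=1)))
        pi = F.softmax(self.policy_head(h), dim=-1)
        return pi, self.q_head(h)
```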
When using replay, we add to each thread a replay memory that is up to 50,000 frames in size. The total amount of memory used across all threads is thus similar in size to that of DQN (Mnih et al., 2015). For all Atari experiments, we use a single learning rate adopted from an earlier implementation of A3C without further tuning. We do not anneal the learning rates over the course of training as in Mnih et al. (2016). We otherwise adopt the same optimization procedure as in Mnih et al. (2016). Specifically, we adopt entropy regularization with weight 0.001, discount the rewards with γ = 0.99, and perform updates every 20 steps (k = 20 in the notation of Section 2). In all our experiments with experience replay, we use importance weight truncation with c = 10. We consider training ACER both with and without trust region updating as described in Section 3.3. When trust region updating is used, we use δ = 1 and α = 0.99 for all experiments.
To compare different agents, we adopt as our metric the median of the human-normalized score over all 57 games. The normalization is calculated such that, for each game, human scores and random scores evaluate to 1 and 0 respectively. The normalized score for a given game at time t is computed as the average normalized score over the past 1 million consecutive frames encountered until time t. For each agent, we plot its cumulative maximum median score over time. The result is summarized in Figure 1.
The four colors in Figure 1 correspond to four replay ratios (0, 1, 4 and 8), with a ratio of 4 meaning that we use the off-policy component of ACER 4 times after using the on-policy component (A3C). That is, a replay ratio of 0 means that we are using A3C. The solid and dashed lines represent ACER with and without trust region updating respectively. The gray and black curves are the original DQN agent (Mnih et al., 2015) and the Prioritized Replay agent of Schaul et al. (2016), respectively.
As shown on the left panel of Figure 1, replay significantly increases data efficiency. We observe that when using the trust region optimizer, the average reward as a function of the number of environment steps increases with the ratio of replay. This increase has diminishing returns, but with enough replay, ACER can match the performance of the best DQN agents. Moreover, it is clear that the off-policy actor critics (ACER) are much more sample efficient than their on-policy counterpart (A3C).
The right panel of Figure 1 shows that ACER agents perform similarly to A3C when measured by wall clock time. Thus, in this case, it is possible to achieve better data-efficiency without necessarily compromising on computation time. In particular, ACER with a replay ratio of 4 is an appealing alternative to either the prioritized DQN agent or A3C.
# 5 CONTINUOUS ACTOR CRITIC WITH EXPERIENCE REPLAY
Retrace requires estimates of both Q and V, but we cannot easily integrate over Q to derive V in continuous action spaces. In this section, we propose a solution to this problem in the form of a novel representation for RL, as well as modifications necessary for trust region updating.
5.1 POLICY EVALUATION
Retrace provides a target for learning $Q_{\theta_v}$, but not for learning $V_{\theta_v}$. We could use importance sampling to compute $V_{\theta_v}$ given $Q_{\theta_v}$, but this estimator has high variance. We propose instead a new architecture which we call Stochastic Dueling Networks (SDNs), inspired by the Dueling networks of Wang et al. (2016), which is designed to estimate both $V^\pi$ and $Q^\pi$ off-policy while maintaining consistency between the two estimates. At each time step, an SDN outputs a stochastic estimate $\widetilde{Q}_{\theta_v}$ of $Q^\pi$ and a deterministic estimate $V_{\theta_v}$ of $V^\pi$, such that

$$\widetilde{Q}_{\theta_v}(x_t, a_t) \sim V_{\theta_v}(x_t) + A_{\theta_v}(x_t, a_t) - \frac{1}{n} \sum_{i=1}^{n} A_{\theta_v}(x_t, u_i), \quad u_i \sim \pi_\theta(\cdot|x_t), \tag{13}$$

where $n$ is a parameter; see Figure 2. The two estimates are consistent in the sense that $\mathbb{E}_{a \sim \pi(\cdot|x_t)} \left[ \mathbb{E}_{u_{1:n} \sim \pi(\cdot|x_t)} \left( \widetilde{Q}_{\theta_v}(x_t, a) \right) \right] = V_{\theta_v}(x_t)$. Furthermore, we can learn about $V^\pi$ by learning $\widetilde{Q}_{\theta_v}$. To see this, assume we have learned $Q^\pi$ perfectly such that $\mathbb{E}_{u_{1:n} \sim \pi(\cdot|x_t)} \left( \widetilde{Q}_{\theta_v}(x_t, a) \right) = Q^\pi(x_t, a)$; then $V_{\theta_v}(x_t) = \mathbb{E}_{a \sim \pi(\cdot|x_t)} \left[ \mathbb{E}_{u_{1:n} \sim \pi(\cdot|x_t)} \left( \widetilde{Q}_{\theta_v}(x_t, a) \right) \right] = \mathbb{E}_{a \sim \pi(\cdot|x_t)} \left[ Q^\pi(x_t, a) \right] = V^\pi(x_t)$. Therefore, a target on $\widetilde{Q}_{\theta_v}(x_t, a_t)$ also provides an error signal for updating $V_{\theta_v}$.
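The SDN combination in equation (13) reduces to a single line given the network outputs; the sketch below (Python, assumed array inputs) makes the sampled-advantage correction explicit:

```python
import numpy as np

def sdn_q_estimate(v, adv_taken, adv_samples):
    """Eq. (13): Q~(x_t,a_t) = V(x_t) + A(x_t,a_t) - (1/n) sum_i A(x_t,u_i),
    where adv_samples holds A(x_t, u_i) for n actions u_i ~ pi(.|x_t)."""
    return v + adv_taken - float(np.mean(adv_samples))
```

Averaging the sampled advantages is what enforces the consistency property described above.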
[Figure 2: schematic of the Stochastic Dueling Network combining $V_{\theta_v}(x_t)$ with $A_{\theta_v}(x_t, a_t)$ and sampled advantages $A_{\theta_v}(x_t, u_1), \ldots, A_{\theta_v}(x_t, u_n)$.]
Figure 2: A schematic of the Stochastic Dueling Network. In the drawing, $[u_1, \ldots, u_n]$ are assumed to be samples from $\pi_\theta(\cdot|x_t)$. This schematic illustrates the concept of SDNs but does not reflect the real sizes of the networks used.
In addition to SDNs, however, we also construct the following novel target for estimating $V^\pi$:
$$V^{target}(x_t) = \min \left\{ 1, \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)} \right\} \left( Q^{ret}(x_t, a_t) - Q_{\theta_v}(x_t, a_t) \right) + V_{\theta_v}(x_t). \tag{14}$$

The above target is also derived via the truncation and bias correction trick; for more details, see Appendix D.

Finally, when estimating $Q^{ret}$ in continuous domains, we implement a slightly different formulation of the truncated importance weights: $\bar{\rho}_t = \min \left\{ 1, \left( \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)} \right)^{\frac{1}{d}} \right\}$, where $d$ is the dimensionality of the action space. Although not essential, we have found this formulation to lead to faster learning.
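This per-dimension truncation is a one-line change relative to the discrete case; a hedged sketch:

```python
def continuous_rho_bar(pi_density, mu_density, action_dim):
    """Truncated importance weight for Retrace in continuous domains:
    min(1, (pi(a_t|x_t)/mu(a_t|x_t)) ** (1/d)), with d the action dimension."""
    return min(1.0, (pi_density / mu_density) ** (1.0 / action_dim))
```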
5.2 TRUST REGION UPDATING
To adopt the trust region updating scheme (Section 3.3) in the continuous control domain, one simply has to choose a distribution $f$ and a gradient specification $\hat{g}^{acer}_t$ suitable for continuous action spaces.
For the distribution $f$, we choose Gaussian distributions with fixed diagonal covariance and mean $\phi_\theta(x)$. To derive $\hat{g}^{acer}_t$, consider the ACER policy gradient of Equation (9), using the stochastic dueling network, but with respect to $\phi$:
$$\hat{g}^{acer}_t = \mathbb{E}_{x_t} \left[ \mathbb{E}_{a_t} \left[ \bar{\rho}_t \nabla_{\phi_\theta(x_t)} \log f(a_t|\phi_\theta(x_t)) \left( Q^{opc}(x_t, a_t) - V_{\theta_v}(x_t) \right) \right] + \mathop{\mathbb{E}}_{a \sim \pi} \left( \left[ \frac{\rho_t(a) - c}{\rho_t(a)} \right]_+ \left( \widetilde{Q}_{\theta_v}(x_t, a) - V_{\theta_v}(x_t) \right) \nabla_{\phi_\theta(x_t)} \log f(a|\phi_\theta(x_t)) \right) \right]. \tag{15}$$

In the above definition, we are using $Q^{opc}$ instead of $Q^{ret}$. Here, $Q^{opc}(x_t, a_t)$ is the same as Retrace with the exception that the truncated importance ratio is replaced with 1 (Harutyunyan et al., 2016). Please refer to Appendix B for an expanded discussion of this design choice. Given an observation $x_t$, we can sample $a'_t \sim \pi_\theta(\cdot|x_t)$ to obtain the following Monte Carlo approximation:

$$\hat{g}^{acer}_t = \bar{\rho}_t \nabla_{\phi_\theta(x_t)} \log f(a_t|\phi_\theta(x_t)) \left( Q^{opc}(x_t, a_t) - V_{\theta_v}(x_t) \right) + \left[ \frac{\rho_t(a'_t) - c}{\rho_t(a'_t)} \right]_+ \left( \widetilde{Q}_{\theta_v}(x_t, a'_t) - V_{\theta_v}(x_t) \right) \nabla_{\phi_\theta(x_t)} \log f(a'_t|\phi_\theta(x_t)). \tag{16}$$
Given $f$ and $\hat{g}^{acer}_t$, we apply the same steps as detailed in Section 3.3 to complete the update.

The precise pseudo-code of the ACER algorithm for continuous action spaces is presented in Appendix A.
[Figure 3, TOP: screen shots of the six continuous control tasks — Walker2d (9-DoF/6-dim. actions), Fish (13-DoF/5-dim. actions), Cartpole (2-DoF/1-dim. actions), Humanoid (27-DoF/21-dim. actions), Reacher3 (3-DoF/3-dim. actions) and Cheetah (9-DoF/6-dim. actions); BOTTOM: episode rewards versus million steps for each task.]
Figure 3: [TOP] Screen shots of the continuous control tasks. [BOTTOM] Performance of different methods on these tasks. ACER outperforms all other methods and shows clear gains for the higher-dimensionality tasks (humanoid, cheetah, walker and fish). The proposed trust region method by itself improves the two baselines (truncated importance sampling and A3C) significantly.
# 6 RESULTS ON MUJOCO
We evaluate our algorithms on 6 continuous control tasks, all of which are simulated using the MuJoCo physics engine (Todorov et al., 2012). For descriptions of the tasks, please refer to Appendix E.1. Briefly, the tasks with action dimensionality in brackets are: cartpole (1D), reacher (3D), cheetah (6D), fish (5D), walker (6D) and humanoid (21D). These tasks are illustrated in Figure 3.
To benchmark ACER for continuous control, we compare it to its on-policy counterpart both with and without trust region updating. We refer to these two baselines as A3C and Trust-A3C. Additionally, we also compare to a baseline with replay where we truncate the importance weights over trajectories as in (Wawrzyński, 2009). For a detailed description of this baseline, please refer to Appendix E. Again, we run this baseline both with and without trust region updating, and refer to these choices as Trust-TIS and TIS respectively. Last but not least, we refer to our proposed approach with SDNs and trust region updating as simply ACER. All five setups are implemented in the asynchronous A3C framework.
All the aforementioned setups share the same network architecture that computes the policy and state values. In the case of ACER, we maintain an additional small network that computes the stochastic advantage values used by the SDN. We use n = 5 (using the notation in Equation (13)) in all SDNs. Instead of mixing on-policy and replay learning as done in the Atari domain, ACER for continuous actions is entirely off-policy, with experiences generated from the simulator (4 times on average). When using replay, we add to each thread a replay memory that is 5,000 frames in size and perform updates every 50 steps (k = 50 in the notation of Section 2). The rate of the soft updating (α as in Section 3.3) is set to 0.995 in all setups involving trust region updating. The truncation threshold c is set to 5 for ACER.
We use diagonal Gaussian policies with fixed diagonal covariances where the diagonal standard deviation is set to 0.3. For all setups, we sample the learning rates log-uniformly in the range $[10^{-4}, 10^{-3.3}]$. For setups involving trust region updating, we also sample δ uniformly in the range [0.1, 2]. With all setups, we use 30 sampled hyper-parameter settings.
The empirical results for all continuous control tasks are shown in Figure 3, where we show the mean and standard deviation of the best 5 out of the 30 hyper-parameter settings over which we searched³. For sensitivity analyses with respect to the hyper-parameters, please refer to Figures 5 and 6 in the Appendix.
In continuous control, ACER outperforms the A3C and truncated importance sampling baselines by a very significant margin.
Here, we also find that the proposed trust region optimization method can result in huge improvements over the baselines. The high-dimensional continuous action policies are much harder to optimize than the small discrete action policies in Atari, and hence we observe much higher gains for trust region optimization in the continuous control domains. In spite of the improvements brought in by trust region optimization, ACER still outperforms all other methods, especially in higher dimensions.
# 6.1 ABLATIONS
To further tease apart the contributions of the different components of ACER, we conduct an ablation analysis where we individually remove Retrace / Q(λ) off-policy correction, SDNs, trust region, and truncation with bias correction from the algorithm. As shown in Figure 4, Retrace and off-policy correction, SDNs, and trust region are critical: removing any one of them leads to a clear deterioration of the performance. Truncation with bias correction did not alter the results in the Fish and Walker2d tasks. However, in Humanoid, where the dimensionality of the action space is much higher, including truncation and bias correction brings a significant boost which makes the originally kneeling humanoid stand. Presumably, the high dimensionality of the action space increases the variance of the importance weights, which makes truncation with bias correction important. For more details on the experimental setup please see Appendix E.4.
# 7 THEORETICAL ANALYSIS
Retrace is a very recent development in reinforcement learning. In fact, this work is the first to consider Retrace in the policy gradients setting. For this reason, and given the core role that Retrace plays in ACER, it is valuable to shed more light on this technique. In this section, we will prove that Retrace can be interpreted as an application of the importance weight truncation and bias correction trick advanced in this paper.
Consider the following equation:
$$Q^\pi(x_t, a_t) = \mathbb{E}_{x_{t+1} a_{t+1}} \left[ r_t + \gamma \rho_{t+1} Q^\pi(x_{t+1}, a_{t+1}) \right]. \tag{17}$$
If we apply the weight truncation and bias correction trick to the above equation, we obtain

$$Q^\pi(x_t, a_t) = \mathbb{E}_{x_{t+1} a_{t+1}} \left[ r_t + \gamma \bar{\rho}_{t+1} Q^\pi(x_{t+1}, a_{t+1}) + \gamma \mathop{\mathbb{E}}_{b \sim \pi} \left( \left[ \frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)} \right]_+ Q^\pi(x_{t+1}, b) \right) \right]. \tag{18}$$

By recursively expanding $Q^\pi$ as in Equation (18), we can represent $Q^\pi(x, a)$ as:

$$Q^\pi(x, a) = \mathbb{E}_\mu \left[ \sum_{t \ge 0} \gamma^t \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \left( r_t + \gamma \mathop{\mathbb{E}}_{b \sim \pi} \left( \left[ \frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)} \right]_+ Q^\pi(x_{t+1}, b) \right) \right) \right]. \tag{19}$$

The expectation $\mathbb{E}_\mu$ is taken over trajectories starting from $x$ with actions generated with respect to $\mu$. When $Q^\pi$ is not available, we can replace it with our current estimate $Q$ to get a return-based
3 For videos of the policies learned with ACER, please see: https://www.youtube.com/watch?v= NmbeQYoVv5g&list=PLkmHIkhlFjiTlvwxEnsJMs3v7seR5HSP-.
[Figure 4 residue: three columns of panels (Fish, Walker2d, Humanoid) with rows labeled No Trust Region, No SDNs, No Retrace or Off-Policy Corr., and No Truncation & Bias Corr.]
Figure 4: Ablation analysis evaluating the effect of different components of ACER. Each row compares ACER with and without one component. The columns represent three control tasks. Red lines, in all plots, represent ACER whereas green lines represent ACER with missing components. This study indicates that all 4 components studied improve performance, where 3 are critical to success. Note that the ACER curve is of course the same in all rows.
estimate of $Q^\pi$. This operation also defines an operator $\mathcal{B}$:

$$\mathcal{B}Q(x, a) = \mathbb{E}_\mu \left[ \sum_{t \ge 0} \gamma^t \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \left( r_t + \gamma \mathop{\mathbb{E}}_{b \sim \pi} \left( \left[ \frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)} \right]_+ Q(x_{t+1}, b) \right) \right) \right]. \tag{20}$$

In the following proposition, we show that $\mathcal{B}$ is a contraction operator with a unique fixed point $Q^\pi$ and that it is equivalent to the Retrace operator.

Proposition 1. The operator $\mathcal{B}$ is a contraction operator such that $\|\mathcal{B}Q - Q^\pi\|_\infty \le \gamma \|Q - Q^\pi\|_\infty$, and $\mathcal{B}$ is equivalent to Retrace.

The above proposition not only shows an alternative way of arriving at the same operator, but also provides a different proof of contraction for Retrace. Please refer to Appendix C for the regularization conditions and proof of the above proposition.

Finally, $\mathcal{B}$, and therefore Retrace, generalizes both the Bellman operator $\mathcal{T}^\pi$ and importance sampling. Specifically, when $c = 0$, $\mathcal{B} = \mathcal{T}^\pi$, and when $c = \infty$, $\mathcal{B}$ recovers importance sampling; see Appendix C.
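The decomposition underlying this analysis can be checked numerically. The snippet below (an illustration, not part of the paper) verifies that truncation plus bias correction leaves expectations unchanged for an arbitrary test function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Check: E_{b~mu}[rhobar(b) f(b)] + E_{b~pi}[((rho(b)-c)/rho(b))_+ f(b)] = E_{b~pi}[f(b)]
n_actions, c = 5, 1.5
pi = rng.dirichlet(np.ones(n_actions))
mu = rng.dirichlet(np.ones(n_actions))
f = rng.normal(size=n_actions)            # arbitrary test function of the action

rho = pi / mu
lhs = np.sum(mu * np.minimum(c, rho) * f) \
    + np.sum(pi * np.maximum(0.0, (rho - c) / rho) * f)
rhs = np.sum(pi * f)
assert np.isclose(lhs, rhs)
```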
# 8 CONCLUDING REMARKS
We have introduced a stable off-policy actor critic that scales to both continuous and discrete action spaces. This approach integrates several recent advances in RL in a principled manner. In addition, it integrates three innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling networks, and an efficient trust region policy optimization method.
We showed that the method not only matches the performance of the best known methods on Atari, but that it also outperforms popular techniques on several continuous control problems.
The efficient trust region optimization method advanced in this paper performs remarkably well in continuous domains. It could prove very useful in other deep learning domains, where it is hard to stabilize the training process.
# ACKNOWLEDGMENTS
We are very thankful to Marc Bellemare, Jascha Sohl-Dickstein, and Sébastien Racanière for proof-reading and valuable suggestions.
# REFERENCES
M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. JAIR, 47:253â279, 2013.
G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint 1606.01540, 2016.
T. Degris, M. White, and R. S. Sutton. Off-policy actor-critic. In ICML, pp. 457â464, 2012.
Anna Harutyunyan, Marc G. Bellemare, Tom Stepleton, and Remi Munos. Q(λ) with off-policy corrections. arXiv preprint arXiv:1602.04951, 2016.
N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015.
T. Jie and P. Abbeel. On a connection between importance sampling and the likelihood ratio policy gradient. In NIPS, pp. 1000â1008, 2010.
S. Levine and V. Koltun. Guided policy search. In ICML, 2013.
S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
L.J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3):293â321, 1992.
N. Meuleau, L. Peshkin, L. P. Kaelbling, and K. Kim. Off-policy policy search. Technical report, MIT AI Lab, 2000.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540): 529â533, 2015.
V. Mnih, A. Puigdom`enech Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv:1602.01783, 2016.
R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efï¬cient off-policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.
K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. In EMNLP, 2015.
J. Oh, V. Chockalingam, S. P. Singh, and H. Lee. Control of memory, active perception, and action in Minecraft. In ICML, 2016.
D. Precup, R. S. Sutton, and S. Singh. Eligibility traces for off-policy policy evaluation. In ICML, pp. 759â766, 2000.
T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In ICLR, 2016.
J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015a.
J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015b.
D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
D. Silver, A. Huang, C.J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
R. S. Sutton, D. Mcallester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pp. 1057â1063, 2000.
E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems, pp. 5026â5033, 2012.
Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016.
P. Wawrzyński. Real-time reinforcement learning by sequential actor–critics and experience replay. Neural Networks, 22(10):1484–1497, 2009.
# A ACER PSEUDO-CODE FOR DISCRETE ACTIONS
# Algorithm 1 ACER for discrete actions (master algorithm)
// Assume global shared parameter vectors θ and θv.
// Assume ratio of replay r.
repeat
    Call ACER on-policy, Algorithm 2.
    n ← Poisson(r)
    for i ∈ {1, . . . , n} do
        Call ACER off-policy, Algorithm 2.
    end for
until Max iteration or time reached.
Algorithm 2 ACER for discrete actions

Reset gradients dθ ← 0 and dθv ← 0.
Initialize parameters θ′ ← θ and θ′v ← θv.
if not On-Policy then
    Sample the trajectory {x0, a0, r0, µ(·|x0), . . . , xk, ak, rk, µ(·|xk)} from the replay memory.
else
    Get state x0
end if
for i ∈ {0, . . . , k} do
    Compute f(·|φθ′(xi)), Qθ′v(xi, ·) and f(·|φθa(xi)).
    if On-Policy then
        Perform ai according to f(·|φθ′(xi))
        Receive reward ri and new state xi+1
        µ(·|xi) ← f(·|φθ′(xi))
    end if
    ρ̄i ← min{1, f(ai|φθ′(xi)) / µ(ai|xi)}
end for
Q^ret ← 0 for terminal xk, otherwise Q^ret ← Σa Qθ′v(xk, a) f(a|φθ′(xk))
for i ∈ {k − 1, . . . , 0} do
    Q^ret ← ri + γQ^ret
    Vi ← Σa Qθ′v(xi, a) f(a|φθ′(xi))
    Computing quantities needed for trust region updating:
        g ← min{c, ρi(ai)} ∇φθ′(xi) log f(ai|φθ′(xi)) (Q^ret − Vi)
            + Σa [1 − c/ρi(a)]+ f(a|φθ′(xi)) ∇φθ′(xi) log f(a|φθ′(xi)) (Qθ′v(xi, a) − Vi)
        k ← ∇φθ′(xi) D_KL[f(·|φθa(xi)) || f(·|φθ′(xi))]
    Accumulate gradients wrt θ′: dθ ← dθ + (∂φθ′(xi)/∂θ′) (g − max{0, (kᵀg − δ)/||k||²₂} k)
    Accumulate gradients wrt θ′v: dθv ← dθv + ∇θ′v (Q^ret − Qθ′v(xi, ai))²
    Update Retrace target: Q^ret ← ρ̄i (Q^ret − Qθ′v(xi, ai)) + Vi
end for
Perform asynchronous update of θ using dθ and of θv using dθv.
Updating the average policy network: θa ← αθa + (1 − α)θ
# B Q(λ) WITH OFF-POLICY CORRECTIONS
Given a trajectory generated under the behavior policy µ, the Q(λ) with off-policy corrections estimator (Harutyunyan et al., 2016) can be expressed recursively as follows:
$$Q^{opc}(x_t, a_t) = r_t + \gamma \left[ Q^{opc}(x_{t+1}, a_{t+1}) - Q(x_{t+1}, a_{t+1}) \right] + \gamma V(x_{t+1}). \tag{21}$$

Notice that $Q^{opc}(x_t, a_t)$ is the same as Retrace with the exception that the truncated importance ratio is replaced with 1.
Algorithm 3 ACER for Continuous Actions
Reset gradients dθ ← 0 and dθv ← 0.
Initialize parameters θ′ ← θ and θ′v ← θv.
Sample the trajectory {x0, a0, r0, µ(·|x0), . . . , xk, ak, rk, µ(·|xk)} from the replay memory.
for i ∈ {0, . . . , k} do
    Compute f(·|φθ′(xi)), Vθ′v(xi), Q̃θ′v(xi, ai), and f(·|φθa(xi)).
    Sample a′i ∼ f(·|φθ′(xi))
    ρi ← f(ai|φθ′(xi)) / µ(ai|xi) and ρ′i ← f(a′i|φθ′(xi)) / µ(a′i|xi)
    ci ← min{1, (ρi)^(1/d)}
end for
Q^ret ← 0 for terminal xk, otherwise Q^ret ← Vθ′v(xk)
Q^opc ← Q^ret
for i ∈ {k − 1, . . . , 0} do
    Q^ret ← ri + γQ^ret
    Q^opc ← ri + γQ^opc
    Computing quantities needed for trust region updating:
        g ← min{c, ρi} ∇φθ′(xi) log f(ai|φθ′(xi)) (Q^opc(xi, ai) − Vθ′v(xi))
            + [1 − c/ρ′i]+ (Q̃θ′v(xi, a′i) − Vθ′v(xi)) ∇φθ′(xi) log f(a′i|φθ′(xi))
        k ← ∇φθ′(xi) D_KL[f(·|φθa(xi)) || f(·|φθ′(xi))]
    Accumulate gradients wrt θ: dθ ← dθ + (∂φθ′(xi)/∂θ′) (g − max{0, (kᵀg − δ)/||k||²₂} k)
    Accumulate gradients wrt θ′v: dθv ← dθv + (Q^ret − Q̃θ′v(xi, ai)) ∇θ′v Q̃θ′v(xi, ai)
    dθv ← dθv + min{1, ρi} (Q^ret − Q̃θ′v(xi, ai)) ∇θ′v Vθ′v(xi)
    Update Retrace target: Q^ret ← ci (Q^ret − Q̃θ′v(xi, ai)) + Vθ′v(xi)
    Update Q^opc target: Q^opc ← (Q^opc − Q̃θ′v(xi, ai)) + Vθ′v(xi)
end for
Perform asynchronous update of θ using dθ and of θv using dθv.
Updating the average policy network: θa ← αθa + (1 − α)θ
Because of the lack of the truncated importance ratio, the operator defined by $Q^{opc}$ is only a contraction if the target and behavior policies are close to each other (Harutyunyan et al., 2016). $Q(\lambda)$ with off-policy corrections is therefore less stable compared to Retrace and unsafe for policy evaluation. $Q^{opc}$, however, could better utilize the returns as the traces are not cut by the truncated importance weights. As a result, $Q^{opc}$ could be used efficiently to estimate $Q^\pi$ in the policy gradient (e.g. in Equation (16)). In our continuous control experiments, we have found that $Q^{opc}$ leads to faster learning.
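Operationally, $Q^{opc}$ is the Retrace recursion with the trace coefficient fixed at 1; a Python sketch under the same assumed inputs as before:

```python
import numpy as np

def qopc_targets(rewards, q_values, v_values, bootstrap_v, gamma=0.99):
    """Q(lambda) with off-policy corrections, eq. (21): Retrace with the
    truncated importance ratio replaced by 1, so traces are never cut."""
    k = len(rewards)
    targets = np.zeros(k)
    q_opc = bootstrap_v
    for t in reversed(range(k)):
        q_opc = rewards[t] + gamma * q_opc
        targets[t] = q_opc
        q_opc = (q_opc - q_values[t]) + v_values[t]   # coefficient fixed at 1
    return targets
```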
C RETRACE AS TRUNCATED IMPORTANCE SAMPLING WITH BIAS CORRECTION
For the purpose of proving Proposition 1, we assume our environment to be a Markov Decision Process $(\mathcal{S}, \mathcal{A}, \gamma, P, r)$. For notational simplicity, we also restrict $\mathcal{S}$ to be a finite state space; $P$ defines the state transition probabilities, and $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ the reward function.
Proof of Proposition 1. First we show that $\mathcal{B}$ is a contraction operator. From Equations (19) and (20),

$$\left| \mathcal{B}Q(x, a) - Q^\pi(x, a) \right| = \left| \mathbb{E}_\mu \left[ \sum_{t \ge 0} \gamma^{t+1} \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \mathop{\mathbb{E}}_{b \sim \pi} \left( \left[ \frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)} \right]_+ \left( Q(x_{t+1}, b) - Q^\pi(x_{t+1}, b) \right) \right) \right] \right|$$
$$\le \mathbb{E}_\mu \left[ \sum_{t \ge 0} \gamma^{t+1} \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \left( 1 - \bar{P}_{t+1} \right) \right] \sup_{x, b} \left| Q(x, b) - Q^\pi(x, b) \right| \tag{22}$$
$$= \left( \gamma C - (C - 1) \right) \sup_{x, b} \left| Q(x, b) - Q^\pi(x, b) \right|,$$

where $\bar{P}_{t+1} = \mathbb{E}_{b \sim \mu} \left[ \bar{\rho}_{t+1}(b) \right]$ and we have used the identity $\mathbb{E}_{b \sim \pi} \left( \left[ \frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)} \right]_+ \right) = 1 - \bar{P}_{t+1}$; the inequality in the above equation is due to Hölder's inequality. Here $C = \sum_{t \ge 0} \gamma^t \, \mathbb{E}_\mu \left( \prod_{i=1}^{t} \bar{\rho}_i \right)$. Since $C \ge \sum_{t=0}^{0} \gamma^t \left( \prod_{i=1}^{t} \bar{\rho}_i \right) = 1$, we have that $\gamma C - (C - 1) \le \gamma$. Therefore, we have shown that $\mathcal{B}$ is a contraction operator.
Now we show that $\mathcal{B}$ is the same as Retrace. By applying the truncation and bias correction trick, we have

$$\mathop{\mathbb{E}}_{b \sim \pi} \left[ Q(x_{t+1}, b) \right] = \mathbb{E}_{b \sim \mu} \left[ \bar{\rho}_{t+1}(b) Q(x_{t+1}, b) \right] + \mathop{\mathbb{E}}_{b \sim \pi} \left( \left[ \frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)} \right]_+ Q(x_{t+1}, b) \right). \tag{23}$$

By adding and subtracting the two sides of Equation (23) inside the summand of Equation (20), we have

$$\mathcal{B}Q(x, a) = \mathbb{E}_\mu \left[ \sum_{t \ge 0} \gamma^t \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \left( r_t + \gamma \mathop{\mathbb{E}}_{b \sim \pi} \left[ Q(x_{t+1}, b) \right] - \gamma \bar{\rho}_{t+1} Q(x_{t+1}, a_{t+1}) \right) \right]$$
$$= \mathbb{E}_\mu \left[ \sum_{t \ge 0} \gamma^t \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \left( r_t + \gamma \mathop{\mathbb{E}}_{b \sim \pi} \left[ Q(x_{t+1}, b) \right] - Q(x_t, a_t) \right) \right] + Q(x, a) = \mathcal{R}Q(x, a),$$

where $\mathcal{R}$ denotes the Retrace operator.
In the remainder of this appendix, we show that $\mathcal{B}$ generalizes both the Bellman operator and importance sampling. First, we reproduce the definition of $\mathcal{B}$:

$$\mathcal{B}Q(x, a) = \mathbb{E}_\mu \left[ \sum_{t \ge 0} \gamma^t \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \left( r_t + \gamma \mathop{\mathbb{E}}_{b \sim \pi} \left( \left[ \frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)} \right]_+ Q(x_{t+1}, b) \right) \right) \right].$$

When $c = 0$, we have that $\bar{\rho}_i = 0$ for all $i$. Therefore only the first summand of the sum remains:

$$\mathcal{B}Q(x, a) = \mathbb{E}_\mu \left[ r_0 + \gamma \mathop{\mathbb{E}}_{b \sim \pi} \left[ Q(x_1, b) \right] \right] = \mathcal{T}^\pi Q(x, a).$$

In this case $\mathcal{B}$ is the same as the Bellman operator $\mathcal{T}^\pi$.

When $c = \infty$, the compensation term disappears and $\bar{\rho}_i = \rho_i$ for all $i$:

$$\mathcal{B}Q(x, a) = \mathbb{E}_\mu \left[ \sum_{t \ge 0} \gamma^t \left( \prod_{i=1}^{t} \rho_i \right) r_t \right].$$

In this case $\mathcal{B}$ is the same operator defined by importance sampling.
# D DERIVATION OF $V^{target}$
By using the truncation and bias correction trick, we can derive the following:

$$V^\pi(x_t) = \mathbb{E}_{a_t \sim \mu} \left[ \min \left\{ 1, \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)} \right\} Q^\pi(x_t, a_t) \right] + \mathop{\mathbb{E}}_{a \sim \pi} \left( \left[ \frac{\rho_t(a) - 1}{\rho_t(a)} \right]_+ Q^\pi(x_t, a) \right).$$

We, however, cannot use the above equation as a target as we do not have access to $Q^\pi$. To derive a target, we can take a Monte Carlo approximation of the first expectation in the RHS of the above equation and replace the first occurrence of $Q^\pi$ with $Q^{ret}$ and the second with our current neural net approximation $Q_{\theta_v}(x_t, \cdot)$:

$$\tilde{V}^{target}(x_t) = \min \left\{ 1, \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)} \right\} Q^{ret}(x_t, a_t) + \mathop{\mathbb{E}}_{a \sim \pi} \left( \left[ \frac{\rho_t(a) - 1}{\rho_t(a)} \right]_+ Q_{\theta_v}(x_t, a) \right). \tag{24}$$

Through the truncation and bias correction trick again, we have the following identity:

$$\mathop{\mathbb{E}}_{a \sim \pi} \left[ Q_{\theta_v}(x_t, a) \right] = \mathbb{E}_{a \sim \mu} \left[ \min \left\{ 1, \rho_t(a) \right\} Q_{\theta_v}(x_t, a) \right] + \mathop{\mathbb{E}}_{a \sim \pi} \left( \left[ \frac{\rho_t(a) - 1}{\rho_t(a)} \right]_+ Q_{\theta_v}(x_t, a) \right). \tag{25}$$

Adding and subtracting both sides of Equation (25) to the RHS of (24) while taking a Monte Carlo approximation, we arrive at $V^{target}(x_t)$:

$$V^{target}(x_t) = \min \left\{ 1, \frac{\pi(a_t|x_t)}{\mu(a_t|x_t)} \right\} \left( Q^{ret}(x_t, a_t) - Q_{\theta_v}(x_t, a_t) \right) + V_{\theta_v}(x_t).$$
E CONTINUOUS CONTROL EXPERIMENTS
E.1 DESCRIPTION OF THE CONTINUOUS CONTROL PROBLEMS
Our continuous control tasks were simulated using the MuJoCo physics engine (Todorov et al., 2012). For all experiments we considered an episodic setup with an episode length of T = 500 steps and a discount factor of 0.99.
Cartpole swingup This is an instance of the classic cart-pole swing-up task. It consists of a pole attached to a cart running on a finite track. The agent is required to balance the pole near the center of the track by applying a force to the cart only. An episode starts with the pole at a random angle and zero velocity. A reward of zero is given except when the pole is approximately upright (within ±0.05, for a track length of 2.4). The observations include the position and velocity of the cart, the angle and angular velocity of the pole, a sine/cosine of the angle, the position of the tip of the pole, and Cartesian velocities of the pole. The dimension of the action space is 1.
Reacher3 The agent needs to control a planar 3-link robotic arm in order to minimize the distance between the end effector of the arm and a target. Both arm and target position are chosen randomly at the beginning of each episode. The reward is zero except when the tip of the arm is within 0.05 of the target, where it is one. The 8-dimensional observation consists of the angles and angular velocities of all joints as well as the displacement between the target and the end effector of the arm. The 3-dimensional actions are the torques applied to the joints.
Cheetah The Half-Cheetah (Wawrzyński (2009); Heess et al. (2015)) is a planar locomotion task where the agent is required to control a 9-DoF cheetah-like body (in the vertical plane) to move in the direction of the x-axis as quickly as possible. The reward is given by the velocity along the x-axis minus a control cost: $r = v_x - 0.1\|a\|^2$. The observation vector consists of the z-position of the torso and its x, z velocities as well as the joint angles and angular velocities. The action dimension is 6.
Fish The goal of this task is to control a 13-DoF fish-like body to swim to a random target in 3D space. The reward is given by the distance between the head of the fish and the target, a small penalty for the body not being upright, and a control cost. At the beginning of an episode the fish is initialized facing in a random direction relative to the target. The 24-dimensional observation is given by the displacement between the fish and the target projected onto the torso coordinate frame, the joint angles and velocities, the cosine of the angle between the z-axis of the torso and the world z-axis, and the velocities of the torso in the torso coordinate frame. The 5-dimensional actions control the position of the side fins and the tail.
Walker The 9-DoF planar walker is inspired by Schulman et al. (2015a) and is required to move forward along the x-axis as quickly as possible without falling. The reward consists of the x-velocity of the torso, a quadratic control cost, and terms that penalize deviations of the torso from the preferred height and orientation (i.e. terms that encourage the walker to stay standing and upright). The 24-dimensional observation includes the torso height, velocities of all DoFs, as well as sines and cosines of all body orientations in the x-z plane. The 6-dimensional action controls the torques applied at the joints. Episodes are terminated early with a negative reward when the torso exceeds upper and lower limits on its height and orientation.
Humanoid The humanoid is a 27 degrees-of-freedom body with 21 actuators (21 action dimensions). It is initialized lying on the ground in a random configuration and the task requires it to achieve a standing position. The reward function penalizes deviations from the height of the head when standing, and includes additional terms that encourage upright standing, as well as a quadratic action penalty. The 94-dimensional observation contains information about joint angles and velocities and several derived features reflecting the body's pose.
E.2 UPDATE EQUATIONS OF THE BASELINE TIS
The baseline TIS follows the following update equations,

updates to the policy: $\min \left\{ 5, \left( \prod_{i=0}^{k-1} \rho_{t+i} \right) \right\} \left[ \sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V_{\theta_v}(x_{t+k}) - V_{\theta_v}(x_t) \right] \nabla_\theta \log \pi_\theta(a_t|x_t)$,

updates to the value: $\min \left\{ 5, \left( \prod_{i=0}^{k-1} \rho_{t+i} \right) \right\} \left[ \sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V_{\theta_v}(x_{t+k}) - V_{\theta_v}(x_t) \right] \nabla_{\theta_v} V_{\theta_v}(x_t)$.

The baseline Trust-TIS is appropriately modified according to the trust region update described in Section 3.3.
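A sketch of the shared truncated weight used by both TIS updates (Python, assumed inputs):

```python
import numpy as np

def tis_weight(rhos, clip=5.0):
    """Truncated trajectory importance weight min(5, prod_i rho_{t+i}) that
    multiplies both the policy and value updates of the TIS baseline."""
    return min(clip, float(np.prod(rhos)))
```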
E.3 SENSITIVITY ANALYSIS
In this section, we assess the sensitivity of ACER to hyper-parameters. In Figures 5 and 6, we show, for each task, the final performance of our ACER agent versus the choice of learning rate and the trust region constraint δ, respectively.
Note, as we are doing random hyper-parameter search, each learning rate is associated with a random δ and vice versa. It is therefore difficult to tease out the effect of either hyper-parameter independently.
We observe, however, that ACER is not very sensitive to the hyper-parameters overall. In addition, smaller δ's do not seem to adversely affect the final performance while larger δ's do in domains of higher action dimensionality. Similarly, smaller learning rates perform well while bigger learning rates tend to hurt final performance in domains of higher action dimensionality.
[Figure 5 residue: six scatter panels (Fish, Walker2D, Cheetah, Cartpole, Reacher3, Humanoid) of cumulative reward versus log learning rate.]
Figure 5: Log learning rate vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 log learning rates considered. Note that each learning rate is associated with a different δ as a consequence of random search over hyper-parameters.
[Figure 6 residue: six scatter panels (Fish, Walker2D, Cheetah, Cartpole, Reacher3, Humanoid) of cumulative reward versus trust region constraint δ.]
Figure 6: Trust region constraint (δ) vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 trust region constraints (δ) searched over. Note that each δ is associated with a different learning rate as a consequence of random search over hyper-parameters.
# E.4 EXPERIMENTAL SETUP OF ABLATION ANALYSIS
For the ablation analysis, we use the same experimental setup as in the continuous control experiments while removing one component at a time.
To evaluate the effectiveness of Retrace/$Q(\lambda)$ with off-policy correction, we replace both with importance sampling based estimates (following Degris et al. (2012)), which can be expressed recursively: $R_t = r_t + \gamma \rho_{t+1} R_{t+1}$.
To evaluate the Stochastic Dueling Networks, we replace them with two separate networks: one computing the state values and the other the Q values. Given $Q^{ret}(x_t, a_t)$, the naive way of estimating the state values is to use the following update rule:
$$\rho_t \left( Q^{ret}(x_t, a_t) - V_{\theta_v}(x_t) \right) \nabla_{\theta_v} V_{\theta_v}(x_t).$$
The above update rule, however, suffers from high variance. We consider instead the following update rule:

$$\min \left\{ 1, \rho_t \right\} \left( Q^{ret}(x_t, a_t) - V_{\theta_v}(x_t) \right) \nabla_{\theta_v} V_{\theta_v}(x_t),$$

which has markedly lower variance. We update our Q estimates as before.
To evaluate the effects of the truncation and bias correction trick, we change our $c$ parameter (see Equation (16)) to $\infty$.
| {
"id": "1602.01783"
} |
1611.01211 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 |
# Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Zachary C. Lipton1,2,3, Kamyar Azizzadenesheli4, Abhishek Kumar3, Lihong Li5, Jianfeng Gao6, Li Deng7
Carnegie Mellon University1, Amazon AI2, University of California, San Diego3, University of California, Irvine4, Google5, Microsoft Research6, Citadel7 zlipton@cmu.edu, kazizzad@uci.edu, abkumar@ucsd.edu, {lihongli, jfgao, deng}@microsoft.com
# March 1, 2022
# Abstract
Many practical environments contain catastrophic states that an optimal agent would visit infrequently or never. Even on toy problems, Deep Reinforcement Learning (DRL) agents tend to periodically revisit these states upon forgetting their existence under a new policy. We introduce intrinsic fear (IF), a learned reward shaping that guards DRL agents against periodic catastrophes. IF agents possess a fear model trained to predict the probability of imminent catastrophe. This score is then used to penalize the Q- learning objective. Our theoretical analysis bounds the reduction in average return due to learning on the perturbed objective. We also prove robustness to classification errors. As a bonus, IF models tend to learn faster, owing to reward shaping. Experiments demonstrate that intrinsic-fear DQNs solve otherwise pathological environments and improve on several Atari games.
# Introduction
Following the success of deep reinforcement learning (DRL) on Atari games [22] and the board game of Go [29], researchers are increasingly exploring practical applications. Some investigated applications include robotics [17], dialogue systems [9, 19], energy management [25], and self-driving cars [27]. Amid this push to apply DRL, we might ask, can we trust these agents in the wild? Agents acting in society may cause harm. A self-driving car might hit pedestrians and a domestic robot might injure a child. Agents might also cause self-injury, and while Atari lives lost are inconsequential, robots are expensive.
Unfortunately, it may not be feasible to prevent all catastrophes without requiring extensive prior knowledge [10]. Moreover, for typical DQNs, providing large negative rewards does not solve the problem: as soon as the catastrophic trajectories are flushed from the replay buffer, the updated Q-function ceases to discourage revisiting these states.
In this paper, we define avoidable catastrophes as states that prior knowledge dictates an optimal policy should visit rarely or never. Additionally, we define danger statesâthose from which a catastrophic state can
be reached in a small number of steps, and assume that the optimal policy does visit the danger states rarely or never. The notion of a danger state might seem odd absent any assumptions about the transition function. With a fully-connected transition matrix, all states are danger states. However, physical environments are not fully connected. A car cannot be parked this second, underwater one second later.
This work primarily addresses how we might prevent DRL agents from perpetually making the same mistakes. As a bonus, we show that the prior knowledge that catastrophic states should be avoided accelerates learning. Our experiments show that even on simple toy problems, the classic deep Q-network (DQN) algorithm fails badly, repeatedly visiting catastrophic states so long as it continues to learn. This poses a formidable obstacle to using DQNs in the real world. How can we trust a DRL-based agent that is doomed to periodically experience catastrophes, just to remember that they exist? Imagine a self-driving car that had to periodically hit a few pedestrians to remember that doing so is undesirable.
In the tabular setting, an RL agent never forgets the learned dynamics of its environment, even as its policy evolves. Moreover, when the Markovian assumption holds, convergence to a globally optimal policy is guaranteed. However, the tabular approach becomes infeasible in high-dimensional, continuous state spaces. The trouble for DQNs owes to the use of function approximation [24]. When training a DQN, we successively update a neural network based on experiences. These experiences might be sampled in an online fashion, from a trailing window (experience replay buffer), or uniformly from all past experiences. Regardless of which mode we use to train the network, eventually, states that a learned policy never encounters will come to form an infinitesimally small region of the training distribution. At such times, our networks suffer the well-known problem of catastrophic forgetting [21, 20]. Nothing prevents the DQNâs policy from drifting back towards one that revisits forgotten catastrophic mistakes.
We illustrate the brittleness of modern DRL algorithms with a simple pathological problem called Adventure Seeker. This problem consists of a one-dimensional continuous state, two actions, simple dynamics, and admits an analytic solution. Nevertheless, the DQN fails. We then show that similar dynamics exist in the classic RL environment Cart-Pole.
To combat these problems, we propose the intrinsic fear (IF) algorithm. In this approach, we train a supervised fear model that predicts which states are likely to lead to a catastrophe within $k_r$ steps. The output of the fear model (a probability), scaled by a fear factor, penalizes the Q-learning target. Crucially, the fear model maintains buffers of both safe and danger states. This model never forgets danger states, which is possible due to the infrequency of catastrophes.
We validate the approach both empirically and theoretically. Our experiments address Adventure Seeker, Cartpole, and several Atari games. In these environments, we label every lost life as a catastrophe. On the toy environments, IF agents learn to avoid catastrophe indefinitely. In Seaquest experiments, the IF agent achieves higher reward, and in Asteroids, the IF agent achieves both higher reward and fewer catastrophes. The improvement on Freeway is most dramatic.
We also make the following theoretical contributions: First, we prove that when the reward is bounded and the optimal policy rarely visits the danger states, an optimal policy learned on the perturbed reward function has approximately the same return as the optimal policy learned on the original value function. Second, we prove that our method is robust to noise in the danger model.
# 2 Intrinsic fear
An agent interacts with its environment via a Markov decision process, or MDP, (S, A, T, R, γ). At each step t, the agent observes a state s ∈ S and then chooses an action a ∈ A according to its policy π. The environment then transitions to state s_{t+1} ∈ S according to transition dynamics T(s_{t+1}|s_t, a_t) and generates a reward r_t with expectation R(s, a). This cycle continues until each episode terminates. An agent seeks to maximize the cumulative discounted return Σ_t γ^t r_t. Temporal-difference methods [31] like Q-learning [33] model the Q-function, which gives the optimal discounted total reward of a state-action pair. Problems of practical interest tend to have large state spaces, thus the Q-function is typically approximated by parametric models such as neural networks.
In Q-learning with function approximation, an agent collects experiences by acting greedily with respect to Q(s, a; θ_Q) and updates its parameters θ_Q. Updates proceed as follows. For a given experience (s_t, a_t, r_t, s_{t+1}), we minimize the squared Bellman error:
$$L = \left( Q(s_t, a_t; \theta_Q) - y_t \right)^2 \qquad (1)$$
for $y_t = r_t + \gamma \cdot \max_{a'} Q(s_{t+1}, a'; \theta_Q)$. Traditionally, the parameterized $Q(s, a; \theta)$ is trained by stochastic approximation, estimating the loss on each experience as it is encountered, yielding the update:
$$\theta_{t+1} \leftarrow \theta_t + \alpha\, (y_t - Q(s_t, a_t; \theta_t))\, \nabla Q(s_t, a_t; \theta_t)\,. \qquad (2)$$
Q-learning methods also require an exploration strategy for action selection. For simplicity, we consider only the ε-greedy heuristic. A few tricks help to stabilize Q-learning with function approximation. Notably, with experience replay [18], the RL agent maintains a buffer of experiences, sampling mini-batches of experience from the buffer to update the Q-function.
We propose a new formulation: Suppose there exists a subset C ⊂ S of known catastrophe states, and assume that for a given environment, the optimal policy rarely enters states from which catastrophe states are reachable in a short number of steps. We define the distance d(s_i, s_j) to be the length N of the smallest sequence of transitions {(s_t, a_t, r_t, s_{t+1})}_{t=1}^{N} that traverses state space from s_i to s_j.¹

Definition 2.1. Suppose a priori knowledge that, acting according to the optimal policy π*, an agent rarely encounters states s ∈ S that lie within distance d(s, c) < k_r for any catastrophe state c ∈ C. Then each state s for which ∃ c ∈ C s.t. d(s, c) < k_r is a danger state.

In Algorithm 1, the agent maintains both a DQN and a separate, supervised fear model F : S → [0, 1]. F provides an auxiliary source of reward, penalizing the Q-learner for entering likely danger states. In our case, we use a neural network of the same architecture as the DQN (but for the output layer). While one could share weights between the two networks, such tricks are not relevant to this paper's contribution.
We train the fear model to predict the probability that any state will lead to catastrophe within k_r moves. Over the course of training, our agent adds each experience (s, a, r, s′) to its experience replay buffer. Whenever a catastrophe is reached at, say, the n-th turn of an episode, we add the preceding k_r (fear radius) states to a danger buffer. We add the first n − k_r states of that episode to a safe buffer. When n < k_r, all states for that episode are added to the list of danger states. Then after each turn, in addition to updating the Q-network, we update the fear model, sampling 50% of states from the danger buffer, assigning them label 1, and the remaining 50% from the safe buffer, assigning them label 0.
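As a concrete illustration of this bookkeeping, the following is a minimal sketch in Python. The class name, method names, and buffer capacity are our own illustrative choices, not from the paper's code; only the labeling rule (the last k_r states of a catastrophic episode are danger states, earlier states are safe, and batches are sampled 50/50) follows the text above.

```python
import numpy as np
from collections import deque

class FearBuffers:
    """Danger/safe state bookkeeping for training the fear model."""

    def __init__(self, fear_radius, capacity=100_000):
        self.k_r = fear_radius
        self.danger = deque(maxlen=capacity)  # states within k_r steps of a catastrophe
        self.safe = deque(maxlen=capacity)    # the episode's earlier states

    def add_episode(self, states, ended_in_catastrophe):
        if ended_in_catastrophe:
            # Last k_r states are labeled dangerous; when the episode is
            # shorter than k_r, slicing labels every state as dangerous,
            # matching the n < k_r case described above.
            self.danger.extend(states[-self.k_r:])
            self.safe.extend(states[:-self.k_r])
        else:
            self.safe.extend(states)

    def sample(self, batch_size, rng):
        # 50% danger states (label 1) and 50% safe states (label 0).
        n_d = batch_size // 2
        d_idx = rng.integers(len(self.danger), size=n_d)
        s_idx = rng.integers(len(self.safe), size=batch_size - n_d)
        x = np.array([self.danger[i] for i in d_idx] +
                     [self.safe[i] for i in s_idx])
        y = np.concatenate([np.ones(n_d), np.zeros(batch_size - n_d)])
        return x, y
```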
¹ In the stochastic dynamics setting, the distance is the minimum mean passing time between the states.
Algorithm 1 Training DQN with Intrinsic Fear
1: Input: Q (DQN), F (fear model), fear factor λ, fear phase-in length k_λ, fear radius k_r
2: Output: Learned parameters θ_Q and θ_F
3: Initialize parameters θ_Q and θ_F randomly
4: Initialize replay buffer D, danger state buffer D_D, and safe state buffer D_S
5: Start per-episode turn counter n_e
6: for t in 1:T do
7:     With probability ε select random action a_t
8:     Otherwise, select a greedy action a_t = arg max_a Q(s_t, a; θ_Q)
9:     Execute action a_t in environment, observing reward r_t and successor state s_{t+1}
10:    Store transition (s_t, a_t, r_t, s_{t+1}) in D
11:    if s_{t+1} is a catastrophe state then
12:        Add states s_{t−k_r} through s_t to D_D
13:    else
14:        Add states s_{t−n_e} through s_{t−k_r−1} to D_S
15:    Sample a random mini-batch of transitions (s_τ, a_τ, r_τ, s_{τ+1}) from D
16:    λ_τ ← min(λ, λ·t / k_λ)
17:    y_τ ← r_τ − λ_τ for terminal s_{τ+1};
       y_τ ← r_τ + γ max_{a'} Q(s_{τ+1}, a'; θ_Q) − λ_τ · F(s_{τ+1}; θ_F) for non-terminal s_{τ+1}
18:    θ_Q ← θ_Q − η · ∇_{θ_Q} (y_τ − Q(s_τ, a_τ; θ_Q))²
19:    Sample a random mini-batch s_j with 50% of examples from D_D and 50% from D_S
20:    y_j ← 1 for s_j ∈ D_D; y_j ← 0 for s_j ∈ D_S
21:    θ_F ← θ_F − η · ∇_{θ_F} loss_F(y_j, F(s_j; θ_F))
For each update to the DQN, we perturb the TD target y_t. Instead of updating Q(s_t, a_t; θ_Q) towards r_t + γ max_{a'} Q(s_{t+1}, a'; θ_Q), we modify the target by subtracting the intrinsic fear:
$$y_t \leftarrow r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta_Q) - \lambda \cdot F(s_{t+1}; \theta_F) \qquad (3)$$
where F(s; θ_F) is the fear model and λ is a fear factor determining the scale of the impact of intrinsic fear on the Q-function update.
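For concreteness, a sketch of this target computation (including the phase-in λ_τ ← min(λ, λ·t/k_λ) from Algorithm 1) might look as follows in Python; q_net and fear_net are assumed callables standing in for the DQN and the fear model, and are not from any released implementation.

```python
import numpy as np

def intrinsic_fear_targets(r, s_next, terminal, q_net, fear_net,
                           gamma, lam, t, k_lambda):
    """Perturbed TD targets of Eq. 3 for a mini-batch of transitions.

    r        : (B,) rewards
    s_next   : (B, state_dim) successor states
    terminal : (B,) boolean flags
    q_net    : states -> (B, |A|) array of Q-values
    fear_net : states -> (B,) danger probabilities F(s)
    """
    lam_t = min(lam, lam * t / k_lambda)          # fear factor phase-in
    bootstrap = gamma * q_net(s_next).max(axis=1)  # gamma * max_a' Q(s', a')
    fear_penalty = lam_t * fear_net(s_next)        # lambda_t * F(s')
    # Terminal (catastrophic) transitions receive only the feared reward.
    return np.where(terminal, r - lam_t, r + bootstrap - fear_penalty)
```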
# 3 Analysis
Note that IF perturbs the objective function. Thus, one might be concerned that the perturbed reward might lead to a sub-optimal policy. Fortunately, as we will show formally, if the labeled catastrophe states and danger zone do not violate our assumptions, and if the fear model reaches arbitrarily high accuracy, then this will not happen. For an MDP, M = (S, A, T, R, γ), with γ ∈ [0, 1], the average reward return is as follows:
$$\eta(\pi) = \begin{cases} \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}_\pi\!\left[\sum_{t=0}^{T} r_t \,\middle|\, \pi\right] & \text{if } \gamma = 1 \\[4pt] (1-\gamma)\, \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\middle|\, \pi\right] & \text{if } 0 \le \gamma < 1 \end{cases}$$
The optimal policy π* of the model M is the policy which maximizes the average reward return, π* = arg max_{π ∈ P} η(π), where P is the set of stationary policies.

Theorem 1. For a given MDP, M, with γ ∈ [0, 1] and a catastrophe detector f, let π* denote any optimal policy of M, and π̃ denote an optimal policy of the environment (M, F), i.e. M equipped with fear model F and fear factor λ. If the probability that π* visits the states in the danger zone is at most ε, and 0 ≤ R(s, a) ≤ 1, then
$$\eta_M(\pi^*) \ge \eta_M(\tilde\pi) \ge \eta_{M,F}(\tilde\pi) \ge \eta_M(\pi^*) - \lambda\epsilon\,. \qquad (4)$$
In other words, π̃ is λε-optimal in the original MDP.
Proof. The policy π* visits the fear zone with probability at most ε. Therefore, applying π* on the environment with intrinsic fear (M, F) provides an expected return of at least η_M(π*) − ελ. Since there exists a policy with this expected return on (M, F), the optimal policy of (M, F) must result in an expected return of at least η_M(π*) − ελ on (M, F), i.e., η_{M,F}(π̃) ≥ η_M(π*) − ελ. The expected return η_{M,F}(π̃) decomposes into two parts: (i) the expected return from the original environment M, η_M(π̃); (ii) the expected return from the fear model. If π̃ visits the fear zone with probability at most ε̃, then η_{M,F}(π̃) ≥ η_M(π̃) − λε̃. Therefore, applying π̃ on M promises an expected return of at least η_M(π*) − ελ + ε̃λ, lower bounded by η_M(π*) − ελ. □
It is worth noting that the theorem holds for any optimal policy of M. If one of them does not visit the fear zone at all (i.e., ε = 0), then η_M(π*) = η_{M,F}(π̃) and the fear signal can speed up the process of learning the optimal policy.
Since we empirically learn the fear model F using collected data of some finite sample size N, our RL agent has access to an imperfect fear model F̂, and therefore computes the optimal policy based on F̂. In this case, the RL agent trains with intrinsic fear generated by F̂, learning a different value function than the RL agent with perfect F. To show the robustness against errors in F̂, we are interested in the average deviation in the value functions of the two agents.
Our second main theoretical result, given in Theorem 2, allows the RL agent to use a smaller discount factor, denoted γ_plan, than the actual one (γ_plan ≤ γ), to reduce the planning horizon and computation cost. Moreover, when an estimated model of the environment is used, Jiang et al. [2015] show that using a smaller discount factor for planning may prevent over-fitting to the estimated model. Our result demonstrates that using a smaller discount factor for planning can limit the reduction in expected return when an estimated fear model is used.
Specifically, for a given environment with fear model F₁ and discount factor γ₁, let $V^{\pi^*_{F_1,\gamma_1}}_{F_2,\gamma_2}(s)$, s ∈ S, denote the state value function under the optimal policy of the environment with fear model F₁ and discount factor γ₁, evaluated in the environment with fear model F₂ and discount factor γ₂. In the same environment, let ω_π(s) denote the visitation distribution over states under policy π. We are interested in the average reduction on expected return caused by an imperfect classifier; this
reduction, denoted $\mathcal{L}(F, \hat{F}, \gamma, \gamma_{plan})$, is defined as
$$\mathcal{L}(F, \hat{F}, \gamma, \gamma_{plan}) := (1-\gamma) \int_{s \in S} \omega^{\pi^*_{F,\gamma}}(s) \left( V^{\pi^*_{F,\gamma}}_{F,\gamma}(s) - V^{\pi^*_{\hat{F},\gamma_{plan}}}_{F,\gamma}(s) \right) ds\,.$$
Theorem 2. Suppose γ_plan ≤ γ and δ ∈ (0, 1). Let F̂ be the fear model in the hypothesis class $\mathcal{F}$ with minimum empirical risk on N samples. For a given MDP model, the average reduction on expected return, $\mathcal{L}(F, \hat{F}, \gamma, \gamma_{plan})$, vanishes as N increases: with probability at least 1 − δ,
$$\mathcal{L} = O\!\left( \lambda\, \frac{1-\gamma}{1-\gamma_{plan}} \sqrt{\frac{VC(\mathcal{F}) + \log\frac{1}{\delta}}{N}} \;+\; \frac{\gamma - \gamma_{plan}}{1-\gamma_{plan}} \right), \qquad (5)$$
where $VC(\mathcal{F})$ is the VC dimension of the hypothesis class $\mathcal{F}$.
Proof. In order to analyze $V^{\pi^*_{F,\gamma}}_{F,\gamma}(s) - V^{\pi^*_{\hat{F},\gamma_{plan}}}_{F,\gamma}(s)$, which is always non-negative, we decompose it as follows:
$$\left( V^{\pi^*_{F,\gamma}}_{F,\gamma}(s) - V^{\pi^*_{F,\gamma}}_{F,\gamma_{plan}}(s) \right) + \left( V^{\pi^*_{F,\gamma}}_{F,\gamma_{plan}}(s) - V^{\pi^*_{\hat{F},\gamma_{plan}}}_{F,\gamma}(s) \right) \qquad (6)$$
The first term is the difference in the expected returns of $\pi^*_{F,\gamma}$ under two different discount factors, starting from s:
$$\mathbb{E}\!\left[ \sum_{t=0}^{\infty} \left( \gamma^t - \gamma_{plan}^t \right) r_t \,\middle|\, s_0 = s,\ \pi^*_{F,\gamma} \right]. \qquad (7)$$
Since $r_t \le 1,\ \forall t$, using the geometric series, Eq. 7 is upper bounded by $\frac{1}{1-\gamma_{plan}} \cdot \frac{\gamma - \gamma_{plan}}{1-\gamma} = \frac{\gamma - \gamma_{plan}}{(1-\gamma_{plan})(1-\gamma)}$.
The second term of Eq. 6 is upper bounded by $V^{\pi^*_{F,\gamma}}_{F,\gamma_{plan}}(s) - V^{\pi^*_{\hat{F},\gamma_{plan}}}_{F,\gamma_{plan}}(s)$, since $\pi^*_{\hat{F},\gamma_{plan}}$ is an optimal policy of an environment equipped with $(\hat{F}, \gamma_{plan})$, and, as $\gamma_{plan} \le \gamma$ and $r_t \ge 0$, we have $V^{\pi^*_{\hat{F},\gamma_{plan}}}_{F,\gamma}(s) \ge V^{\pi^*_{\hat{F},\gamma_{plan}}}_{F,\gamma_{plan}}(s)$. This upper bound is the deviation of the value function under two different close policies. Since F and F̂ are close, we expect this deviation to be small. With one more decomposition step,

$$V^{\pi^*_{F,\gamma}}_{F,\gamma_{plan}}(s) - V^{\pi^*_{\hat{F},\gamma_{plan}}}_{F,\gamma_{plan}}(s) = \left( V^{\pi^*_{F,\gamma}}_{F,\gamma_{plan}}(s) - V^{\pi^*_{F,\gamma}}_{\hat{F},\gamma_{plan}}(s) \right) + \left( V^{\pi^*_{F,\gamma}}_{\hat{F},\gamma_{plan}}(s) - V^{\pi^*_{\hat{F},\gamma_{plan}}}_{\hat{F},\gamma_{plan}}(s) \right) + \left( V^{\pi^*_{\hat{F},\gamma_{plan}}}_{\hat{F},\gamma_{plan}}(s) - V^{\pi^*_{\hat{F},\gamma_{plan}}}_{F,\gamma_{plan}}(s) \right).$$
Since the middle term in this equation is non-positive, we can ignore it for the purpose of upper-bounding the left-hand side. The upper bound is the sum of the remaining two terms, which is in turn upper bounded by 2 times the maximum of them:
$$2 \max_{\pi \in \{\pi^*_{F,\gamma},\, \pi^*_{\hat{F},\gamma_{plan}}\}} \left\{ V^{\pi}_{F,\gamma_{plan}}(s) - V^{\pi}_{\hat{F},\gamma_{plan}}(s) \right\},$$
which is the deviation in values between the two domains. The value functions satisfy the Bellman equation for any π:
$$V^{\pi}_{F,\gamma_{plan}}(s) = R(s, \pi(s)) + \lambda F(s) + \gamma_{plan} \int_{s' \in S} T(s'|s, \pi(s))\, V^{\pi}_{F,\gamma_{plan}}(s')\, ds'$$
$$V^{\pi}_{\hat{F},\gamma_{plan}}(s) = R(s, \pi(s)) + \lambda \hat{F}(s) + \gamma_{plan} \int_{s' \in S} T(s'|s, \pi(s))\, V^{\pi}_{\hat{F},\gamma_{plan}}(s')\, ds' \qquad (8)$$
which can be solved using iterative updates of dynamic programming. Let $V^{\pi}_i(s)$ and $\hat{V}^{\pi}_i(s)$ respectively denote the i-th iteration of the dynamic programming corresponding to the first and second equalities in Eq. 8. Therefore, for any state,
$$V^{\pi}_i(s) - \hat{V}^{\pi}_i(s) = \lambda F(s) - \lambda \hat{F}(s) + \gamma_{plan} \int_{s' \in S} T(s'|s, \pi(s)) \left( V^{\pi}_{i-1}(s') - \hat{V}^{\pi}_{i-1}(s') \right) ds' \qquad (9)$$
$$= \lambda \sum_{i'=0}^{i} \left( \gamma_{plan} T^{\pi} \right)^{i'} \left( F - \hat{F} \right)(s), \qquad (10)$$
where $(T^{\pi})^{i'}$ is a kernel and denotes the transition operator applied i′ times to itself. The classification error $|F(s) - \hat{F}(s)|$ is the zero-one loss of the binary classifier; therefore, its expectation $\int_{s \in S} \omega^{\pi^*_{\hat{F},\gamma_{plan}}}(s)\, |F(s) - \hat{F}(s)|\, ds$ is bounded by $3200 \sqrt{\frac{VC(\mathcal{F}) + \log\frac{1}{\delta}}{N}}$ with probability at least 1 − δ [32, 12]. As long as the operator $(T^{\pi})^{i'}$ is a linear operator,
$$\int_{s \in S} \omega^{\pi^*_{\hat{F},\gamma_{plan}}}(s)\, \left| V^{\pi}(s) - \hat{V}^{\pi}(s) \right| ds \;\le\; \lambda\, \frac{3200}{1-\gamma_{plan}} \sqrt{\frac{VC(\mathcal{F}) + \log\frac{1}{\delta}}{N}}\,. \qquad (11)$$
Therefore, $\mathcal{L}(F, \hat{F}, \gamma, \gamma_{plan})$ is bounded by (1 − γ) times the sum of Eq. 11 and $\frac{\gamma - \gamma_{plan}}{(1-\gamma_{plan})(1-\gamma)}$, with probability at least 1 − δ. □
Theorem 2 holds for both finite and continuous state-action MDPs. Over the course of our experiments, we discovered the following pattern: Intrinsic fear models are more effective when the fear radius k_r is large enough that the model can experience danger states at a safe distance and correct the policy, without experiencing many catastrophes. When the fear radius is too small, the danger probability is only nonzero at states from which catastrophes are inevitable anyway, and intrinsic fear seems not to help. We also found that larger fear factors train more stably when phased in over the course of many episodes. So, in all of our experiments, we gradually phase in the fear factor from 0 to λ, reaching full strength at a predetermined time step k_λ.
# 4 Environments
We demonstrate our algorithms on the following environments: (i) Adventure Seeker, a toy pathological environment that we designed to demonstrate catastrophic forgetting; (ii) Cart-Pole, a classic RL environment; and (iii) the Atari games Seaquest, Asteroids, and Freeway [3].
Adventure Seeker We imagine a player placed on a hill, sloping upward to the right (Figure 1(a)). At each turn, the player can move to the right (up the hill) or left (down the hill). The environment adjusts the player's position accordingly, adding some random noise. Between the left and right edges of the hill, the player gets more reward for spending time higher on the hill. But if the player goes too far to the right, she will fall off, terminating the episode (catastrophe). Formally, the state is a single continuous variable s ∈ [0, 1.0], denoting the player's position. The starting position for each episode is chosen uniformly at random in the interval [.25, .75]. The available actions consist only of {−1, +1} (left and right). Given an action a_t in state s_t, the successor state is produced by T(s_{t+1}|s_t, a_t): s_{t+1} ← s_t + .01·a_t + η, where η ∼ N(0, .01²). The reward at each turn is s_t (proportional to height). The player falls off the hill, entering the catastrophic terminating state, whenever s_{t+1} > 1.0 or s_{t+1} < 0.0.
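These dynamics are simple enough to state in a few lines of code. Below is a sketch of the environment, assuming a reset/step interface of our own choosing; only the dynamics, reward, and catastrophe condition follow the description above.

```python
import numpy as np

class AdventureSeeker:
    """Toy one-dimensional hill environment described above."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Starting position drawn uniformly from [.25, .75].
        self.s = self.rng.uniform(0.25, 0.75)
        return self.s

    def step(self, action):
        # action is -1 (left, downhill) or +1 (right, uphill)
        self.s = self.s + 0.01 * action + self.rng.normal(0.0, 0.01)
        reward = self.s                                # proportional to height
        catastrophe = self.s > 1.0 or self.s < 0.0     # fell off the hill
        return self.s, reward, catastrophe
```

The analytic solution mentioned below is a simple threshold policy: go right below some cutoff, left above it.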
This game should be easy to solve. There exists a threshold above which the agent should always choose to go left and below which it should always go right. And yet a DQN agent will periodically die. Initially, the DQN quickly learns a good policy and avoids the catastrophe, but over the course of continued training, the agent, owing to the shape of the reward function, collapses to a policy which always moves right, regardless of the state. We might critically ask in what real-world scenario we could depend upon a system that cannot solve Adventure Seeker.
Cart-Pole In this classic RL environment, an agent balances a pole atop a cart (Figure 1(b)). Qualitatively, the game exhibits four distinct catastrophe modes. The pole could fall down to the right or fall down to the left. Additionally, the cart could run off the right boundary of the screen or run off the left. Formally, at each time, the agent observes a four-dimensional state vector (x, v, θ, ω) consisting respectively of the cart position, cart velocity, pole angle, and the pole's angular velocity. At each time step, the agent chooses an action, applying a force of either −1 or +1. For every time step that the pole remains upright and the cart remains on the screen, the agent receives a reward of 1. If the pole falls, the episode terminates, giving a return of 0 from the penultimate state. In experiments, we use the implementation CartPole-v0 contained in the OpenAI Gym [6]. Like Adventure Seeker, this problem admits an analytic solution. A perfect policy should never drop the pole. But, as with Adventure Seeker, a DQN converges to a constant rate of catastrophes per turn.
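A minimal sketch of collecting catastrophe labels in this environment is shown below, written against the classic OpenAI Gym API of the paper's era (the four-tuple step return); the random policy is a placeholder. Note that in CartPole-v0 an episode can also end at the 200-step time limit, which is not a catastrophe, so a real labeling pass would distinguish the two cases.

```python
import gym

env = gym.make("CartPole-v0")
state = env.reset()
episode_states, done = [state], False
while not done:
    action = env.action_space.sample()          # placeholder policy
    state, reward, done, _ = env.step(action)
    episode_states.append(state)
# If the episode ended by dropping the pole or leaving the screen, the final
# state and its k_r predecessors would be added to the danger buffer.
```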
Atari games In addition to these pathological cases, we address Freeway, Asteroids, and Seaquest, games from the Atari Learning Environment. In Freeway, the agent controls a chicken with a goal of crossing the road while dodging traffic. The chicken loses a life and starts from the original location if hit by a car. Points are only rewarded for successfully crossing the road. In Asteroids, the agent pilots a ship and gains points from shooting asteroids. She must avoid colliding with asteroids, which costs her lives. In Seaquest, a player swims under water. Periodically, as the oxygen gets low, she must rise to the surface for oxygen. Additionally, fish swim across the screen. The player gains points each time she shoots a fish.
Figure 1: In experiments, we consider two toy environments, (a) Adventure Seeker and (b) Cart-Pole, and the Atari games (c) Seaquest, (d) Asteroids, and (e) Freeway.
Colliding with a fish or running out of oxygen results in death. In all three games, the agent has 3 lives, and the final death is a terminal state. We label each loss of a life as a catastrophe state.
# 5 Experiments
First, on the toy examples, we evaluate standard DQNs and intrinsic fear DQNs using multilayer perceptrons (MLPs) with a single hidden layer and 128 hidden nodes. We train all MLPs by stochastic gradient descent using the Adam optimizer [16].
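A sketch of such a fear model is given below: a one-hidden-layer MLP trained as a binary classifier on danger (1) versus safe (0) states. For brevity this sketch uses plain SGD with hand-written backpropagation in numpy rather than Adam; it is an illustration, not the paper's implementation.

```python
import numpy as np

class FearModelMLP:
    """One-hidden-layer MLP with a sigmoid output: F(s) = P(danger | s)."""

    def __init__(self, state_dim, hidden=128, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def __call__(self, x):
        self._h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        logits = self._h @ self.W2 + self.b2
        return (1.0 / (1.0 + np.exp(-logits))).ravel()

    def update(self, x, y):
        """One SGD step on the binary cross-entropy loss."""
        p = self(x)                              # forward pass caches self._h
        g_logit = (p - y)[:, None] / len(y)      # d(BCE)/d(logit) = p - y
        g_W2 = self._h.T @ g_logit
        g_b2 = g_logit.sum(axis=0)
        g_h = (g_logit @ self.W2.T) * (self._h > 0)  # backprop through ReLU
        g_W1 = x.T @ g_h
        g_b1 = g_h.sum(axis=0)
        for param, grad in ((self.W1, g_W1), (self.b1, g_b1),
                            (self.W2, g_W2), (self.b2, g_b2)):
            param -= self.lr * grad
```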
In Adventure Seeker, an agent can escape from danger with only a few time steps of notice, so we set the fear radius k_r to 5. We phase in the fear factor quickly, reaching full strength in just 1000 steps. On this
Figure 2: Catastrophes (first row) and reward/episode (second row) for DQNs and Intrinsic Fear on (a, d) Seaquest, (b, e) Asteroids, and (c, f) Freeway. On Adventure Seeker, all Intrinsic Fear models cease to "die" within 14 runs, giving unbounded (unplottable) reward thereafter. On Seaquest, the IF model achieves a similar catastrophe rate but significantly higher total reward. On Asteroids, the IF model outperforms DQN. For Freeway, a randomly exploring DQN (under our time limit) never gets reward, but the IF model learns successfully.
problem we set the fear factor λ to 40. For Cart-Pole, we set a wider fear radius of k_r = 20. We initially tried training this model with a short fear radius but made the following observation: On some runs, IF-DQN would survive for millions of experiences, while on other runs it might experience many catastrophes. Manually examining fear model output on successful vs. unsuccessful runs, we noticed that on the bad runs, the fear model outputs a non-zero probability of danger only for precisely the 5 moves before a catastrophe. In Cart-Pole, by that time, it is too late to correct course. On the more successful runs, the fear model often outputs predictions in the range .1 − .5. We suspect that the gradation between mildly dangerous states and those with certain danger provides a richer reward signal to the DQN.
On both the Adventure Seeker and Cart-Pole environments, DQNs augmented by intrinsic fear far outperform their otherwise identical counterparts. We also compared IF to some traditional approaches for mitigating catastrophic forgetting. For example, we tried a memory-based method in which we preferentially sample the catastrophic states for updating the model, but it did not improve over the DQN. It seems that the notion of a danger zone is necessary here.
For Seaquest, Asteroids, and Freeway, we use a fear radius of 5 and a fear factor of .5. For all Atari games, the IF models outperform their DQN counterparts. Interestingly, while for all games the IF models achieve higher reward, on Seaquest IF-DQNs have similar catastrophe rates (Figure 2). Perhaps the IF-DQN enters a region of policy space with strong incentives to exchange catastrophes for higher reward. This result suggests an interplay between the various reward signals that warrants further exploration. For Asteroids and Freeway, the improvements are more dramatic. Over just a few thousand episodes of Freeway, a randomly exploring DQN achieves zero reward. However, the reward shaping of intrinsic fear leads to rapid improvement.
# 6 Related work
This paper studies safety in RL, intrinsically motivated RL, and the stability of Q-learning with function approximation under distributional shift. Our work also has some connection to reward shaping. We attempt to highlight the most relevant papers here. Several papers address safety in RL. García and Fernández [2015] provide a thorough review on the topic, identifying two main classes of methods: those that perturb the objective function and those that use external knowledge to improve the safety of exploration.
While a typical reinforcement learner optimizes expected return, some papers suggest that a safely acting agent should also minimize risk. Hans et al. [2008] define a fatality as any return below some threshold τ. They propose a solution comprising a safety function, which identifies unsafe states, and a backup model, which navigates away from those states. Their work, which only addresses the tabular setting, suggests that an agent should minimize the probability of fatality instead of maximizing the expected return. Heger [1994] suggests an alternative Q-learning objective concerned with the minimum (vs. expected) return. Other papers suggest modifying the objective to penalize policies with high-variance returns [10, 8]. Maximizing expected returns while minimizing their variance is a classic problem in finance, where a common objective is the ratio of expected return to its standard deviation [28]. Moreover, Azizzadenesheli et al. [2018] suggest learning the variance over the returns in order to make safe decisions at each decision step. Moldovan and Abbeel [2012] give a definition of safety based on ergodicity. They consider a fatality to be a state from which one cannot return to the start state. Shalev-Shwartz et al. [2016] theoretically analyze how strong a penalty should be to discourage accidents. They also consider hard constraints to ensure safety. None of the above works address the case where distributional shift dooms an agent to perpetually revisit known catastrophic failure modes. Other papers incorporate external knowledge into the exploration process. Typically, this requires access to an oracle or extensive prior knowledge of the environment. In the extreme case, some papers suggest confining the policy search to a known subset of safe policies. For reasonably complex environments or classes of policies, this seems infeasible.
The potential oscillatory or divergent behavior of Q-learners with function approximation has been previously identified [5, 2, 11]. Outside of RL, the problem of covariate shift has been extensively studied [30]. Murata and Ozawa [2005] address the problem of catastrophic forgetting owing to distributional shift in RL with function approximation, proposing a memory-based solution. Many papers address intrinsic rewards, which are internally assigned, vs. the standard (extrinsic) reward. Typically, intrinsic rewards are used to encourage exploration [26, 4] and to acquire a modular set of skills [7]. Some papers refer to the intrinsic reward for discovery as curiosity. Like classic work on intrinsic motivation, our methods perturb the reward function. But instead of assigning bonuses to encourage discovery of novel transitions, we assign penalties to discourage catastrophic transitions.
Key differences In this paper, we undertake a novel treatment of safe reinforcement learning. While the literature offers several notions of safety in reinforcement learning, we see the following problem: Existing safety research that perturbs the reward function requires little foreknowledge, but fundamentally changes the objective globally. On the other hand, processes relying on expert knowledge may presume an unreasonable level of foreknowledge. Moreover, little of the prior work on safe reinforcement learning, to the best of our knowledge, specifically addresses the problem of catastrophic forgetting. This paper proposes a new class of algorithms for avoiding catastrophic states and a theoretical analysis supporting its robustness.
# 7 Conclusions
Our experiments demonstrate that DQNs are susceptible to periodically repeating mistakes, however bad, raising questions about their real-world utility when harm can come of actions. While it is easy to visualize these problems on toy examples, similar dynamics are embedded in more complex domains. Consider a domestic robot acting as a barber. The robot might receive positive feedback for giving a closer shave. This reward encourages closer contact at a steeper angle. Of course, the shape of this reward function belies the catastrophe lurking just past the optimal shave. Similar dynamics might be imagined in a vehicle that is rewarded for traveling faster but risks an accident at excessive speed. Our results with the intrinsic fear model suggest that with only a small amount of prior knowledge (the ability to recognize catastrophe states after the fact), we can simultaneously accelerate learning and avoid catastrophic states. This work is a step towards combating DRL's tendency to revisit catastrophic states due to catastrophic forgetting.
# References
[1] Kamyar Azizzadenesheli, Emma Brunskill, and Animashree Anandkumar. Efficient exploration through bayesian deep q-networks. arXiv preprint arXiv:1802.04412, 2018.
[2] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. In ICML, 1995.
[3] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 2013.
[4] Marc G Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In NIPS, 2016.
[5] Justin Boyan and Andrew W Moore. Generalization in reinforcement learning: Safely approximating the value function. In NIPS, 1995.
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI gym, 2016. arxiv.org/abs/1606.01540.
[7] Nuttapong Chentanez, Andrew G Barto, and Satinder P Singh. Intrinsically motivated reinforcement learning. In NIPS, 2004.
[8] Yinlam Chow, Aviv Tamar, Shie Mannor, and Marco Pavone. Risk-sensitive and robust decision-making: A CVaR optimization approach. In NIPS, 2015.
[9] Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. Policy networks with two-stage training for dialogue systems. In SIGDIAL, 2016.
[10] Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. JMLR, 2015.
[11] Geoffrey J Gordon. Chattering in SARSA(λ). Technical report, CMU, 1996.
[12] Steve Hanneke. The optimal sample complexity of PAC learning. JMLR, 2016.
[13] Alexander Hans, Daniel Schneegaß, Anton Maximilian Schäfer, and Steffen Udluft. Safe exploration for reinforcement learning. In ESANN, 2008.
[14] Matthias Heger. Consideration of risk in reinforcement learning. In Machine Learning, 1994.
[15] Nan Jiang, Alex Kulesza, Satinder Singh, and Richard Lewis. The dependence of effective planning horizon on model accuracy. In International Conference on Autonomous Agents and Multiagent Systems, 2015.
[16] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[17] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. JMLR, 2016.
[18] Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 1992.
[19] Zachary C Lipton, Jianfeng Gao, Lihong Li, Xiujun Li, Faisal Ahmed, and Li Deng. Efficient exploration for dialogue policy learning with bbq networks & replay buffer spiking. In AAAI, 2018.
[20] James L McClelland, Bruce L McNaughton, and Randall C O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 1995.
[21] Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of learning and motivation, 1989.
[22] Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 2015.
[23] Teodor Mihai Moldovan and Pieter Abbeel. Safe exploration in Markov decision processes. In ICML, 2012.
[24] Makoto Murata and Seiichi Ozawa. A memory-based reinforcement learning model utilizing macro- actions. In Adaptive and Natural Computing Algorithms. 2005.
[25] Will Knight. The AI that cut Google's energy bill could soon help you. MIT Tech Review, 2016.
[26] Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In From animals to animats: SAB90, 1991.
[27] Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. 2016.
[28] William F Sharpe. Mutual fund performance. The Journal of Business, 1966.
[29] David Silver et al. Mastering the game of go with deep neural networks and tree search. Nature, 2016.
[30] Masashi Sugiyama and Motoaki Kawanabe. Machine learning in non-stationary environments: Intro- duction to covariate shift adaptation. MIT Press, 2012.
[31] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 1988.
[32] Vladimir Vapnik. The nature of statistical learning theory. Springer science & business media, 2013.
[33] Christopher J.C.H. Watkins and Peter Dayan. Q-learning. Machine Learning, 1992.
# An extension to Theorem 2
In practice, we gradually learn and improve F̂, where the difference between the learned fear models after two consecutive updates, F̂_τ and F̂_{τ+1}, and consequently between $\omega^{\pi^*_{\hat{F}_\tau,\gamma_{plan}}}$ and $\omega^{\pi^*_{\hat{F}_{\tau+1},\gamma_{plan}}}$, decreases. While F̂_{τ+1} is learned using the samples drawn from $\omega^{\pi^*_{\hat{F}_\tau,\gamma_{plan}}}$, with high probability,
$$\int_{s \in S} \omega^{\pi^*_{\hat{F}_\tau,\gamma_{plan}}}(s)\, \left| F(s) - \hat{F}_{\tau+1}(s) \right| ds \;\le\; 3200 \sqrt{\frac{VC(\mathcal{F}) + \log\frac{1}{\delta}}{N}}\,.$$
But in the final bound in Theorem 2, we are interested in $\int_{s \in S} \omega^{\pi^*_{\hat{F}_{\tau+1},\gamma_{plan}}}(s)\, |F(s) - \hat{F}_{\tau+1}(s)|\, ds$. We decompose it into two terms:
$$\int_{s \in S} \omega^{\pi^*_{\hat{F}_\tau,\gamma_{plan}}}(s)\, \left| F(s) - \hat{F}_{\tau+1}(s) \right| ds \;+\; \int_{s \in S} \left| \omega^{\pi^*_{\hat{F}_{\tau+1},\gamma_{plan}}}(s) - \omega^{\pi^*_{\hat{F}_\tau,\gamma_{plan}}}(s) \right| ds\,.$$
Therefore, an extra term of $\frac{\lambda}{1-\gamma_{plan}} \int_{s \in S} \left| \omega^{\pi^*_{\hat{F}_{\tau+1},\gamma_{plan}}}(s) - \omega^{\pi^*_{\hat{F}_\tau,\gamma_{plan}}}(s) \right| ds$ appears in the final bound of Theorem 2.
Regarding the choice of γ_plan: if $\lambda \sqrt{\frac{VC(\mathcal{F}) + \log\frac{1}{\delta}}{N}}$ is less than one, then the best choice of γ_plan is γ. Otherwise, if $\sqrt{\frac{VC(\mathcal{F}) + \log\frac{1}{\delta}}{N}}$ were equal to the exact error in the model estimation and the corresponding term were greater than 1, then the best γ_plan would be 0. Since $\sqrt{\frac{VC(\mathcal{F}) + \log\frac{1}{\delta}}{N}}$ is an upper bound, not an exact error, on the model estimation, the choice of zero for γ_plan is not recommended, and a choice of γ_plan ≤ γ is preferred.
Published as a conference paper at ICLR 2017
# CATEGORICAL REPARAMETERIZATION WITH GUMBEL-SOFTMAX
Eric Jang Google Brain ejang@google.com
Shixiang Gu∗ University of Cambridge MPI Tübingen sg717@cam.ac.uk

Ben Poole∗ Stanford University poole@cs.stanford.edu
# ABSTRACT
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
# 1 INTRODUCTION

Stochastic neural networks with discrete random variables are a powerful technique for representing distributions encountered in unsupervised learning, language modeling, attention mechanisms, and reinforcement learning domains. For example, discrete variables have been used to learn probabilistic latent representations that correspond to distinct semantic classes (Kingma et al., 2014), image regions (Xu et al., 2015), and memory locations (Graves et al., 2014; Graves et al., 2016). Discrete representations are often more interpretable (Chen et al., 2016) and more computationally efficient (Rae et al., 2016) than their continuous analogues.

However, stochastic networks with discrete variables are difficult to train because the backpropagation algorithm, while permitting efficient computation of parameter gradients, cannot be applied to non-differentiable layers. Prior work on stochastic gradient estimation has traditionally focused on either score function estimators augmented with Monte Carlo variance reduction techniques (Paisley et al., 2012; Mnih & Gregor, 2014; Gu et al., 2016; Gregor et al., 2013), or biased path derivative estimators for Bernoulli variables (Bengio et al., 2013). However, no existing gradient estimator has been formulated specifically for categorical variables. The contributions of this work are threefold:

1. We introduce Gumbel-Softmax, a continuous distribution on the simplex that can approximate categorical samples, and whose parameter gradients can be easily computed via the reparameterization trick.

2. We show experimentally that Gumbel-Softmax outperforms all single-sample gradient estimators on both Bernoulli variables and categorical variables.

3. We show that this estimator can be used to efficiently train semi-supervised models (e.g. Kingma et al. (2014)) without costly marginalization over unobserved categorical latent variables.

The practical outcome of this paper is a simple, differentiable approximate sampling mechanism for categorical variables that can be integrated into neural networks and trained using standard backpropagation.

∗Work done during an internship at Google Brain.
2 THE GUMBEL-SOFTMAX DISTRIBUTION
We begin by defining the Gumbel-Softmax distribution, a continuous distribution over the simplex that can approximate samples from a categorical distribution. Let z be a categorical variable with class probabilities π₁, π₂, ..., π_k. For the remainder of this paper we assume categorical samples are encoded as k-dimensional one-hot vectors lying on the corners of the (k − 1)-dimensional simplex, Δ^{k−1}. This allows us to define quantities such as the element-wise mean E_p[z] = [π₁, ..., π_k] of these vectors.
The Gumbel-Max trick (Gumbel, 1954; Maddison et al., 2014) provides a simple and efficient way to draw samples z from a categorical distribution with class probabilities π:
$$z = \text{one\_hot}\left( \arg\max_i \left[ g_i + \log \pi_i \right] \right) \qquad (1)$$
where g₁, ..., g_k are i.i.d. samples drawn from Gumbel(0, 1)¹. We use the softmax function as a continuous, differentiable approximation to arg max, and generate k-dimensional sample vectors y ∈ Δ^{k−1} where
$$y_i = \frac{\exp\left( (\log(\pi_i) + g_i)/\tau \right)}{\sum_{j=1}^{k} \exp\left( (\log(\pi_j) + g_j)/\tau \right)} \quad \text{for } i = 1, \dots, k \qquad (2)$$
The density of the Gumbel-Softmax distribution (derived in Appendix B) is:
$$p_{\pi,\tau}(y_1, \dots, y_k) = \Gamma(k)\, \tau^{k-1} \left( \sum_{i=1}^{k} \pi_i / y_i^{\tau} \right)^{-k} \prod_{i=1}^{k} \left( \pi_i / y_i^{\tau+1} \right) \qquad (3)$$
This distribution was independently discovered by Maddison et al. (2016), where it is referred to as the concrete distribution. As the softmax temperature τ approaches 0, samples from the Gumbel-Softmax distribution become one-hot and the Gumbel-Softmax distribution becomes identical to the categorical distribution p(z).
Figure 1: The Gumbel-Softmax distribution interpolates between discrete one-hot-encoded categorical distributions and continuous categorical densities. (a) For low temperatures (τ = 0.1, τ = 0.5), the expected value of a Gumbel-Softmax random variable approaches the expected value of a categorical random variable with the same logits. As the temperature increases (τ = 1.0, τ = 10.0), the expected value converges to a uniform distribution over the categories. (b) Samples from Gumbel-Softmax distributions are identical to samples from a categorical distribution as τ → 0. At higher temperatures, Gumbel-Softmax samples are no longer one-hot, and become uniform as τ → ∞.
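A numpy sketch of Eqs. 1 and 2 is shown below, taking class log-probabilities log π as input; the small eps guard against log(0) is our own numerical convenience, not part of the definitions.

```python
import numpy as np

def sample_gumbel(shape, rng, eps=1e-20):
    # Inverse transform sampling: g = -log(-log(u)), u ~ Uniform(0, 1).
    u = rng.uniform(size=shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_max(log_pi, rng):
    # Eq. 1: an exact one-hot categorical sample.
    g = sample_gumbel(log_pi.shape, rng)
    return np.eye(log_pi.shape[-1])[np.argmax(log_pi + g)]

def gumbel_softmax(log_pi, tau, rng):
    # Eq. 2: a continuous relaxation on the simplex; one-hot as tau -> 0.
    scores = (log_pi + sample_gumbel(log_pi.shape, rng)) / tau
    y = np.exp(scores - scores.max())            # numerically stable softmax
    return y / y.sum()

rng = np.random.default_rng(0)
log_pi = np.log(np.array([0.2, 0.3, 0.5]))
print(gumbel_max(log_pi, rng), gumbel_softmax(log_pi, tau=0.5, rng=rng))
```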
2.1 GUMBEL-SOFTMAX ESTIMATOR
The Gumbel-Softmax distribution is smooth for τ > 0, and therefore has a well-defined gradient ∂y/∂π with respect to the parameters π. Thus, by replacing categorical samples with Gumbel-Softmax samples we can use backpropagation to compute gradients (see Section 3.1). We denote
¹The Gumbel(0, 1) distribution can be sampled using inverse transform sampling by drawing u ∼ Uniform(0, 1) and computing g = −log(−log(u)).
this procedure of replacing non-differentiable categorical samples with a differentiable approximation during training as the Gumbel-Softmax estimator.

While Gumbel-Softmax samples are differentiable, they are not identical to samples from the corresponding categorical distribution for non-zero temperature. For learning, there is a tradeoff between small temperatures, where samples are close to one-hot but the variance of the gradients is large, and large temperatures, where samples are smooth but the variance of the gradients is small (Figure 1). In practice, we start at a high temperature and anneal to a small but non-zero temperature.

In our experiments, we find that the softmax temperature τ can be annealed according to a variety of schedules and still perform well. If τ is a learned parameter (rather than annealed via a fixed schedule), this scheme can be interpreted as entropy regularization (Szegedy et al., 2015; Pereyra et al., 2016), where the Gumbel-Softmax distribution can adaptively adjust the "confidence" of proposed samples during the training process.
2.2 STRAIGHT-THROUGH GUMBEL-SOFTMAX ESTIMATOR
Continuous relaxations of one-hot vectors are suitable for problems such as learning hidden representations and sequence modeling. For scenarios in which we are constrained to sampling discrete values (e.g. from a discrete action space for reinforcement learning, or quantized compression), we discretize y using arg max but use our continuous approximation in the backward pass by approximating ∇_θ z ≈ ∇_θ y. We call this the Straight-Through (ST) Gumbel Estimator, as it is reminiscent of the biased path derivative estimator described in Bengio et al. (2013). ST Gumbel-Softmax allows samples to be sparse even when the temperature τ is high.
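The following is a sketch of the ST forward pass, reusing gumbel_softmax from the earlier sketch. Plain numpy can only illustrate the forward discretization, so the gradient-copying convention used by autodiff frameworks (forward value y_hard, backward gradient of the soft y) is noted in comments.

```python
import numpy as np

def st_gumbel_softmax_forward(log_pi, tau, rng):
    """Forward pass of the ST Gumbel-Softmax estimator."""
    y = gumbel_softmax(log_pi, tau, rng)      # soft sample (Eq. 2)
    y_hard = np.eye(len(y))[np.argmax(y)]     # one-hot used in the forward pass
    # In an autodiff framework one would return y_hard + (y - stop_gradient(y)):
    # the forward value equals y_hard, while gradients flow through y.
    return y_hard, y
```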
# 3 RELATED WORK
In this section we review existing stochastic gradient estimation techniques for discrete variables (illustrated in Figure 2). Consider a stochastic computation graph (Schulman et al., 2015) with discrete random variable z whose distribution depends on parameter θ, and cost function f(z). The objective is to minimize the expected cost L(θ) = E_{z∼p_θ(z)}[f(z)] via gradient descent, which requires us to estimate ∇_θ E_{z∼p_θ(z)}[f(z)].
3.1 PATH DERIVATIVE GRADIENT ESTIMATORS
For distributions that are reparameterizable, we can compute the sample z as a deterministic function g of the parameters θ and an independent random variable ε, so that z = g(θ, ε). The path-wise gradients from f to θ can then be computed without encountering any stochastic nodes:

$$\frac{\partial}{\partial \theta}\, \mathbb{E}_{z \sim p_\theta}\left[ f(z) \right] = \frac{\partial}{\partial \theta}\, \mathbb{E}_{\epsilon}\left[ f(g(\theta, \epsilon)) \right] = \mathbb{E}_{\epsilon \sim p_\epsilon}\left[ \frac{\partial f}{\partial g} \frac{\partial g}{\partial \theta} \right] \qquad (4)$$
For example, the normal distribution z ∼ N(µ, σ) can be re-written as µ + σ · N(0, 1), making it trivial to compute ∂z/∂µ and ∂z/∂σ. This reparameterization trick is commonly applied to training variational autoencoders with continuous latent variables using backpropagation (Kingma & Welling, 2013; Rezende et al., 2014b). As shown in Figure 2, we exploit such a trick in the construction of the Gumbel-Softmax estimator.
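As a small worked example of this trick, the sketch below estimates d/dµ E[f(z)] for z ∼ N(µ, σ) by sampling ε and differentiating through z = µ + σε. Since dz/dµ = 1, the estimate is a Monte Carlo average of f′(z); the finite difference for f′ is our own choice, made for a framework-free illustration.

```python
import numpy as np

def reparam_grad_mu(mu, sigma, f, n=100_000, seed=0, h=1e-5):
    """Monte Carlo estimate of d/d(mu) E[f(z)] for z ~ N(mu, sigma)."""
    rng = np.random.default_rng(seed)
    z = mu + sigma * rng.standard_normal(n)          # z = g(theta, eps)
    return np.mean((f(z + h) - f(z - h)) / (2 * h))  # E[f'(z) * dz/dmu]

# Sanity check: for f(z) = z**2, E[z^2] = mu^2 + sigma^2, so the true
# gradient with respect to mu is 2*mu.
print(reparam_grad_mu(1.5, 0.7, lambda z: z ** 2))   # approx 3.0
```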
Biased path derivative estimators can be utilized even when z is not reparameterizable. In general, we can approximate ∇_θ z ≈ ∇_θ m(θ), where m is a differentiable proxy for the stochastic sample. For Bernoulli variables with mean parameter θ, the Straight-Through (ST) estimator (Bengio et al., 2013) approximates m = µ_θ(z), implying ∇_θ m = 1. For k = 2 (Bernoulli), ST Gumbel-Softmax is similar to the slope-annealed Straight-Through estimator proposed by Chung et al. (2016), but uses a softmax instead of a hard sigmoid to determine the slope. Rolfe (2016) considers an alternative approach where each binary latent variable parameterizes a continuous mixture model. Reparameterization gradients are obtained by backpropagating through the continuous variables and marginalizing out the binary variables.
One limitation of the ST estimator is that backpropagating with respect to the sample-independent mean may cause discrepancies between the forward and backward pass, leading to higher variance.
Figure 2: Gradient estimation in stochastic computation graphs. (1) ∇_θ f(x) can be computed via backpropagation if x(θ) is deterministic and differentiable. (2) The presence of stochastic node z precludes backpropagation as the sampler function does not have a well-defined gradient. (3) The score function estimator and its variants (NVIL, DARN, MuProp, VIMCO) obtain an unbiased estimate of ∇_θ f(x) by backpropagating along a surrogate loss f̂ log p_θ(z), where f̂ = f(x) − b and b is a baseline for variance reduction. (4) The Straight-Through estimator, developed primarily for Bernoulli variables, approximates ∇_θ z ≈ 1. (5) Gumbel-Softmax is a path derivative estimator for a continuous distribution y that approximates z. Reparameterization allows gradients to flow from f(y) to θ. y can be annealed to one-hot categorical variables over the course of training.
Gumbel-Softmax avoids this problem because each sample y is a differentiable proxy of the corresponding discrete sample z.
3.2 SCORE FUNCTION-BASED GRADIENT ESTIMATORS
The score function estimator (SF, also referred to as REINFORCE (Williams, 1992) and likelihood ratio estimator (Glynn, 1990)) uses the identity ∇_θ p_θ(z) = p_θ(z) ∇_θ log p_θ(z) to derive the following unbiased estimator:

$$\nabla_\theta\, \mathbb{E}_z\left[ f(z) \right] = \mathbb{E}_z\left[ f(z)\, \nabla_\theta \log p_\theta(z) \right] \qquad (5)$$
SF only requires that pθ(z) is continuous in θ, and does not require backpropagating through f or the sample z. However, SF suffers from high variance and is consequently slow to converge. In particular, the variance of SF scales linearly with the number of dimensions of the sample vector (Rezende et al., 2014a), making it especially challenging to use for categorical distributions.
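For a categorical z parameterized by logits θ, Eq. 5 can be estimated directly, using ∇_θ log p_θ(z) = onehot(z) − softmax(θ). The sketch below (with function names of our own choosing) averages many single-sample estimates; shrinking n exposes the high variance discussed above.

```python
import numpy as np

def score_function_grad(theta, f, n=100_000, seed=0):
    """REINFORCE estimate of grad_theta E_z[f(z)], z ~ Categorical(softmax(theta))."""
    rng = np.random.default_rng(seed)
    p = np.exp(theta - theta.max())
    p /= p.sum()
    k = len(theta)
    z = rng.choice(k, size=n, p=p)
    score = np.eye(k)[z] - p                    # grad_theta log p_theta(z)
    return (f(z)[:, None] * score).mean(axis=0)

# e.g. with f(z) = z, the exact gradient of sum_i p_i * i with respect to the
# logits is p * (arange(k) - sum_i p_i * i), which the estimate approaches
# as n grows.
```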
The variance of a score function estimator can be reduced by subtracting a control variate b(z) from the learning signal f, and adding back its analytical expectation µ_b = E_z[b(z) ∇_θ log p_θ(z)] to keep the estimator unbiased:

$$\nabla_\theta\, \mathbb{E}_z\left[ f(z) \right] = \mathbb{E}_z\left[ f(z)\, \nabla_\theta \log p_\theta(z) + \left( b(z)\, \nabla_\theta \log p_\theta(z) - b(z)\, \nabla_\theta \log p_\theta(z) \right) \right] \qquad (6)$$
$$= \mathbb{E}_z\left[ \left( f(z) - b(z) \right) \nabla_\theta \log p_\theta(z) \right] + \mu_b \qquad (7)$$
We briefly summarize recent stochastic gradient estimators that utilize control variates. We direct the reader to Gu et al. (2016) for further detail on these techniques.
• NVIL (Mnih & Gregor, 2014) uses two baselines: (1) a moving average f̄ of f to center the learning signal, and (2) an input-dependent baseline computed by a 1-layer neural network fitted to f − f̄ (a control variate for the centered learning signal itself). Finally, variance normalization divides the learning signal by max(1, σ_f), where σ_f² is a moving average of Var[f].
• DARN (Gregor et al., 2013) uses b = f(z̄) + f′(z̄)(z − z̄), where the baseline corresponds to the first-order Taylor approximation of f(z) around f(z̄). z̄ is chosen to be 1/2 for Bernoulli variables, which makes the estimator biased for non-quadratic f, since it ignores the correction term µ_b in the estimator expression.
• MuProp (Gu et al., 2016) also models the baseline as a first-order Taylor expansion: b = f(z̄) + f′(z̄)(z − z̄) and µ_b = f′(z̄) ∇_θ E_z[z]. To overcome backpropagation through discrete sampling, a mean-field approximation f_MF(µ_θ(z)) is used in place of f(z) to compute the baseline and derive the relevant gradients.
• VIMCO (Mnih & Rezende, 2016) is a gradient estimator for multi-sample objectives that uses the mean of the other samples b = (1/m) Σ_{j≠i} f(z_j) to construct a baseline for each sample z_i ∈ z_{1:m}. We exclude VIMCO from our experiments because we are comparing estimators for single-sample objectives, although Gumbel-Softmax can be easily extended to multi-sample objectives.
3.3 SEMI-SUPERVISED GENERATIVE MODELS
Semi-supervised learning considers the problem of learning from both labeled data (x, y) ∼ D_L and unlabeled data x ∼ D_U, where x are observations (i.e. images) and y are corresponding labels (e.g. semantic class). For semi-supervised classification, Kingma et al. (2014) propose a variational autoencoder (VAE) whose latent state is the joint distribution over a Gaussian "style" variable z and a categorical "semantic class" variable y (Figure 6, Appendix). The VAE objective trains a discriminative network q_φ(y|x), inference network q_φ(z|x, y), and generative network p_θ(x|y, z) end-to-end by maximizing a variational lower bound on the log-likelihood of the observation under the generative model. For labeled data, the class y is observed, so inference is only done on z ∼ q(z|x, y). The variational lower bound on labeled data is given by:
$$\log p_\theta(x, y) \ge -\mathcal{L}(x, y) = \mathbb{E}_{z \sim q_\phi(z|x,y)}\left[ \log p_\theta(x|y, z) \right] - KL\left[ q(z|x, y) \,\|\, p_\theta(y) p(z) \right] \qquad (8)$$
For unlabeled data, difficulties arise because the categorical distribution is not reparameterizable. Kingma et al. (2014) approach this by marginalizing out y over all classes, so that for unlabeled data, inference is still on q_φ(z|x, y) for each y. The lower bound on unlabeled data is:
$$\log p_\theta(x) \ge -\mathcal{U}(x) = \mathbb{E}_{z \sim q_\phi(y,z|x)}\left[ \log p_\theta(x|y, z) + \log p_\theta(y) + \log p(z) - q_\phi(y, z|x) \right] \qquad (9)$$
$$= \sum_y q_\phi(y|x)\left( -\mathcal{L}(x, y) + \mathcal{H}(q_\phi(y|x)) \right) \qquad (10)$$
The full maximization objective is:
$$\mathcal{J} = \mathbb{E}_{(x,y) \sim D_L}\left[ -\mathcal{L}(x, y) \right] + \mathbb{E}_{x \sim D_U}\left[ -\mathcal{U}(x) \right] + \alpha \cdot \mathbb{E}_{(x,y) \sim D_L}\left[ \log q_\phi(y|x) \right] \qquad (11)$$
where α is the scalar trade-off between the generative and discriminative objectives.
One limitation of this approach is that marginalization over all k class values becomes prohibitively expensive for models with a large number of classes. If D, I, G are the computational cost of sampling from q_φ(y|x), q_φ(z|x, y), and p_θ(x|y, z) respectively, then training the unsupervised objective requires O(D + k(I + G)) for each forward/backward step. In contrast, Gumbel-Softmax allows us to backpropagate through y ∼ q_φ(y|x) for single sample gradient estimation, and achieves a cost of O(D + I + G) per training step. Experimental comparisons in training speed are shown in Figure 5.
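The difference between the two training strategies can be sketched purely at the level of control flow. In the snippet below, classify, infer, and generate are hypothetical stand-ins for q(y|x), q(z|x, y), and p(x|y, z); no actual networks are implied.

```python
def unlabeled_loss_marginalized(x, classify, infer, generate, k):
    q_y = classify(x)                       # one D-cost pass
    total = 0.0
    for y in range(k):                      # k inference + generation passes
        z = infer(x, y)
        total += q_y[y] * generate(x, y, z)
    return total                            # O(D + k(I + G)) per step

def unlabeled_loss_gumbel(x, classify, infer, generate, gumbel_softmax_sample):
    y = gumbel_softmax_sample(classify(x))  # one differentiable sample of y
    z = infer(x, y)
    return generate(x, y, z)                # O(D + I + G) per step
```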
# 4 EXPERIMENTAL RESULTS
In our first set of experiments, we compare Gumbel-Softmax and ST Gumbel-Softmax to other stochastic gradient estimators: Score-Function (SF), DARN, MuProp, Straight-Through (ST), and
Slope-Annealed ST. Each estimator is evaluated on two tasks: (1) structured output prediction and (2) variational training of generative models. We use the MNIST dataset with fixed binarization for training and evaluation, which is common practice for evaluating stochastic gradient estimators (Salakhutdinov & Murray, 2008; Larochelle & Murray, 2011).

Learning rates are chosen from {3e−5, 1e−5, 3e−4, 1e−4, 3e−3, 1e−3}; we select the best learning rate for each estimator using the MNIST validation set, and report performance on the test set. Samples drawn from the Gumbel-Softmax distribution are continuous during training, but are discretized to one-hot vectors during evaluation. We also found that variance normalization was necessary to obtain competitive performance for SF, DARN, and MuProp. We used sigmoid activation functions for binary (Bernoulli) neural networks and softmax activations for categorical variables. Models were trained using stochastic gradient descent with momentum 0.9.
4.1 STRUCTURED OUTPUT PREDICTION WITH STOCHASTIC BINARY NETWORKS
The objective of structured output prediction is to predict the lower half of a 28 × 28 MNIST digit given the top half of the image (14 × 28). This is a common benchmark for training stochastic binary networks (SBN) (Raiko et al., 2014; Gu et al., 2016; Mnih & Rezende, 2016). The minimization objective for this conditional generative model is an importance-sampled estimate of the likelihood objective, $\mathbb{E}_{h \sim p_\theta(h|x_{upper})}\left[ \frac{1}{m} \sum_{i=1}^{m} \log p_\theta(x_{lower}|h_i) \right]$, where m = 1 is used for training and m = 1000 is used for evaluation.

We trained a SBN with two hidden layers of 200 units each. This corresponds to either 200 Bernoulli variables (denoted as 392-200-200-392) or 20 categorical variables (each with 10 classes) with binarized activations (denoted as 392-(20 × 10)-(20 × 10)-392).

As shown in Figure 3, ST Gumbel-Softmax is on par with the other estimators for Bernoulli variables and outperforms on categorical variables. Meanwhile, Gumbel-Softmax outperforms other estimators on both Bernoulli and categorical variables. We found that it was not necessary to anneal the softmax temperature for this task, and used a fixed τ = 1.
Figure 3: Test loss (negative log-likelihood) on the structured output prediction task with binarized MNIST using a stochastic binary network with (a) Bernoulli latent variables (392-200-200-392) and (b) categorical latent variables (392-(20 × 10)-(20 × 10)-392).
4.2 GENERATIVE MODELING WITH VARIATIONAL AUTOENCODERS
We train variational autoencoders (Kingma & Welling, 2013), where the objective is to learn a generative model of binary MNIST images. In our experiments, we modeled the latent variable as a single hidden layer with 200 Bernoulli variables or 20 categorical variables (20 × 10). We use a learned categorical prior rather than a Gumbel-Softmax prior in the training objective. Thus, the minimization objective during training is no longer a variational bound if the samples are not discrete. In practice,
we find that optimizing this objective in combination with temperature annealing still minimizes actual variational bounds on validation and test sets. Like the structured output prediction task, we use a multi-sample bound for evaluation with m = 1000.
The temperature is annealed using the schedule τ = max(0.5, exp(−rt)) of the global training step t, where τ is updated every N steps. N ∈ {500, 1000} and r ∈ {1e−5, 1e−4} are hyperparameters for which we select the best-performing estimator on the validation set and report test performance.
As shown in Figure 4, ST Gumbel-Softmax outperforms other estimators for categorical variables, and Gumbel-Softmax drastically outperforms other estimators on both Bernoulli and categorical variables.
Figure 4: Test loss (negative variational lower bound) on binarized MNIST VAE with (a) Bernoulli latent variables (784-200-784) and (b) categorical latent variables (784-(20 × 10)-200).
Table 1: The Gumbel-Softmax estimator outperforms other estimators on Bernoulli and Categorical latent variables. For the structured output prediction (SBN) task, numbers correspond to negative log-likelihoods (nats) of input images (lower is better). For the VAE task, numbers correspond to negative variational lower bounds (nats) on the log-likelihood (lower is better).
|             | SF    | DARN  | MuProp | ST    | Annealed ST | Gumbel-S. |
|-------------|-------|-------|--------|-------|-------------|-----------|
| SBN (Bern.) | 72.0  | 59.7  | 58.9   | 58.9  | 58.7        | 58.5      |
| SBN (Cat.)  | 73.1  | 67.9  | 63.0   | 61.8  | 61.1        | 59.0      |
| VAE (Bern.) | 112.2 | 110.9 | 109.7  | 116.0 | 111.5       | 105.0     |
| VAE (Cat.)  | 110.6 | 128.8 | 107.0  | 110.9 | 107.8       | 101.5     |
4.3 GENERATIVE SEMI-SUPERVISED CLASSIFICATION
We apply the Gumbel-Softmax estimator to semi-supervised classification on the binary MNIST dataset. We compare the original marginalization-based inference approach (Kingma et al., 2014) to single-sample inference with Gumbel-Softmax and ST Gumbel-Softmax.
We trained on a dataset consisting of 100 labeled examples (distributed evenly among each of the 10 classes) and 50,000 unlabeled examples, with dynamic binarization of the unlabeled examples for each minibatch. The discriminative model q_φ(y|x) and inference model q_φ(z|x, y) are each implemented as 3-layer convolutional neural networks with ReLU activation functions. The generative model p_θ(x|y, z) is a 4-layer convolutional-transpose network with ReLU activations. Experimental details are provided in Appendix A.
Estimators were trained and evaluated against several values of α = {0.1, 0.2, 0.3, 0.8, 1.0} and the best unlabeled classification results for test sets were selected for each estimator and reported
in Table 2. We used an annealing schedule of τ = max(0.5, exp(−3e−5 · t)), updated every 2000 steps.
In Kingma et al. (2014), inference over the latent state is done by marginalizing out y and using the reparameterization trick for sampling from q_φ(z|x, y). However, this approach has a computational cost that scales linearly with the number of classes. Gumbel-Softmax allows us to backpropagate directly through single samples from the joint q_φ(y, z|x), achieving drastic speedups in training without compromising generative or classification performance (Table 2, Figure 5).
Table 2: Marginalizing over y and single-sample variational inference perform equally well when applied to image classification on the binarized MNIST dataset (Larochelle & Murray, 2011). We report variational lower bounds and image classification accuracy for unlabeled data in the test set.
| Marginalization | Gumbel | ST Gumbel-Softmax |
|-----------------|--------|-------------------|
| 92.6%           | 92.4%  | 93.6%             |
In Figure 5, we show how Gumbel-Softmax versus marginalization scales with the number of categorical classes. For these experiments, we use MNIST images with randomly generated labels. Training the model with the Gumbel-Softmax estimator is 2× as fast for 10 classes and 9.9× as fast for 100 classes.
Figure 5: Gumbel-Softmax allows us to backpropagate through samples from the posterior q_φ(y|x), providing a scalable method for semi-supervised learning for tasks with a large number of classes. (a) Comparison of training speed (steps/sec) between Gumbel-Softmax and marginalization on a semi-supervised VAE. Evaluations were performed on a GTX Titan X GPU. (b) Visualization of MNIST analogies generated by varying style variable z across each row and class variable y across each column.
# 5 DISCUSSION
The primary contribution of this work is the reparameterizable Gumbel-Softmax distribution, whose corresponding estimator affords low-variance path derivative gradients for the categorical distribution. We show that Gumbel-Softmax and Straight-Through Gumbel-Softmax are effective on structured output prediction and variational autoencoder tasks, outperforming existing stochastic gradient estimators for both Bernoulli and categorical latent variables. Finally, Gumbel-Softmax enables dramatic speedups in inference over discrete latent variables.
# ACKNOWLEDGMENTS
We sincerely thank Luke Vilnis, Vincent Vanhoucke, Luke Metz, David Ha, Laurent Dinh, George Tucker, and Subhaneil Lahiri for helpful discussions and feedback.
# REFERENCES
Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016.
J. Chung, S. Ahn, and Y. Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
P. W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990.
A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwi´nska, S. G. Col- menarejo, E. Grefenstette, T. Ramalho, J. Agapiou, et al. Hybrid computing using a neural net- work with dynamic external memory. Nature, 538(7626):471â476, 2016.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014.
K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.
S. Gu, S. Levine, I. Sutskever, and A. Mnih. MuProp: Unbiased Backpropagation for Stochastic Neural Networks. ICLR, 2016.

E. J. Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures. Number 33. US Govt. Print. Office, 1954.

D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.

H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.

C. J. Maddison, D. Tarlow, and T. Minka. A* sampling. In Advances in Neural Information Processing Systems, pp. 3086–3094, 2014.
C. J. Maddison, A. Mnih, and Y. Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. ArXiv e-prints, November 2016.
A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. ICML, 31, 2014.
A. Mnih and D. J. Rezende. Variational inference for monte carlo objectives. arXiv preprint arXiv:1602.06725, 2016.
J. Paisley, D. Blei, and M. Jordan. Variational Bayesian Inference with Stochastic Search. ArXiv e-prints, June 2012.
Gabriel Pereyra, Geoffrey Hinton, George Tucker, and Lukasz Kaiser. Regularizing neural networks by penalizing confident output distributions. 2016.
J. W Rae, J. J Hunt, T. Harley, I. Danihelka, A. Senior, G. Wayne, A. Graves, and T. P Lillicrap. Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes. ArXiv e-prints, October 2016.
T. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014a.

D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278–1286, 2014b.

J. T. Rolfe. Discrete Variational Autoencoders. ArXiv e-prints, September 2016.

R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pp. 872–879. ACM, 2008.

J. Schulman, N. Heess, T. Weber, and P. Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3528–3536, 2015.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. CoRR, abs/1502.03044, 2015.
# A SEMI-SUPERVISED CLASSIFICATION MODEL

Figures 6 and 7 describe the architecture used in our experiments for semi-supervised classification (Section 4.3).

[Figure 6 legend: deterministic, differentiable nodes vs. stochastic nodes.]

Figure 6: Semi-supervised generative model proposed by Kingma et al. (2014). (a) Generative model pθ(x|y, z) synthesizes images from latent Gaussian "style" variable z and categorical class variable y. (b) Inference model qφ(y, z|x) samples latent state y, z given x. Gaussian z can be differentiated with respect to its parameters because it is reparameterizable. In previous work, when y is not observed, training the VAE objective requires marginalizing over all values of y. (c) Gumbel-Softmax reparameterizes y so that backpropagation is also possible through y without encountering stochastic nodes.
# B DERIVING THE DENSITY OF THE GUMBEL-SOFTMAX DISTRIBUTION
Here we derive the probability density function of the Gumbel-Softmax distribution with probabilities π1, ..., πk and temperature τ. We first define the logits xi = log πi, and Gumbel samples g1, ..., gk, where gi ∼ Gumbel(0, 1).
[Figure 7 image: panels (a) and (b) are stacks of three 5×5 stride-2 convolutions (32, 64, and 128 channels, ReLU) followed by a fully connected layer; panel (c) is a fully connected layer followed by four 3×3 stride-2 transposed convolutions (128, 64, 32, and 32 channels).]

Figure 7: Network architecture for (a) classification qφ(y|x), (b) inference qφ(z|x, y), and (c) generative pθ(x|y, z) models. The outputs of these networks parameterize Categorical, Gaussian, and Bernoulli distributions which we sample from.
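As a concrete illustration, here is a minimal PyTorch sketch of the classifier in panel (a). The kernel sizes, strides, and channel counts follow the figure; the padding, 28×28 input size, and class count are assumptions not specified there.

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    # 5x5 stride-2 convolution with ReLU, as in Figure 7a;
    # padding=2 is an assumption to keep spatial sizes easy to track.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 5, stride=2, padding=2),
                         nn.ReLU())

n_class = 10                        # assumed: 10 MNIST classes
classifier = nn.Sequential(         # q(y|x): 28x28 -> 14x14 -> 7x7 -> 4x4
    conv_block(1, 32),
    conv_block(32, 64),
    conv_block(64, 128),
    nn.Flatten(),
    nn.Linear(128 * 4 * 4, n_class))  # logits of the Categorical q(y|x)
```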
A sample from the Gumbel-Softmax can then be computed as:
$$y_i = \frac{\exp((x_i + g_i)/\tau)}{\sum_{j=1}^{k} \exp((x_j + g_j)/\tau)} \qquad \text{for } i = 1, \dots, k \tag{12}$$
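As a quick numerical illustration of Eq. 12 (a sketch with assumed logits, not code from the paper): the softmax preserves the ordering of (x_i + g_i)/τ, so the argmax of a relaxed sample is distributed as softmax(x) at any temperature, which we can check empirically.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.log(np.array([2.0, 0.5, 1.0]))   # logits x_i = log(pi_i), assumed values
tau = 0.5

def eq12_sample(x, tau):
    g = rng.gumbel(size=x.shape)        # g_i ~ Gumbel(0, 1)
    z = (x + g) / tau
    z = z - z.max()                     # stabilize the softmax
    p = np.exp(z)
    return p / p.sum()

samples = np.stack([eq12_sample(x, tau) for _ in range(20000)])
freq = np.bincount(samples.argmax(axis=1), minlength=3) / len(samples)
print(freq, np.exp(x) / np.exp(x).sum())   # the two vectors should be close
```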
B.1 CENTERED GUMBEL DENSITY
The mapping from the Gumbel samples g to the Gumbel-Softmax sample y is not invertible as the normalization of the softmax operation removes one degree of freedom. To compensate for this, we define an equivalent sampling process that subtracts off the last element, (x_k + g_k)/τ, before the softmax:

$$y_i = \frac{\exp((x_i + g_i - (x_k + g_k))/\tau)}{\sum_{j=1}^{k} \exp((x_j + g_j - (x_k + g_k))/\tau)} \qquad \text{for } i = 1, \dots, k \tag{13}$$

To derive the density of this equivalent sampling process, we first derive the density for the "centered" multivariate Gumbel density corresponding to:

$$u_i = x_i + g_i - (x_k + g_k) \qquad \text{for } i = 1, \dots, k - 1 \tag{14}$$

where g_i ∼ Gumbel(0, 1). Note the probability density of a Gumbel distribution with scale parameter β = 1 and mean µ at z is: f(z, µ) = e^{µ−z−e^{µ−z}}. We can now compute the density of this distribution by marginalizing out the last Gumbel sample, g_k:

$$p(u_1, \dots, u_{k-1}) = \int_{-\infty}^{\infty} dg_k\, p(u_1, \dots, u_{k-1} \mid g_k)\, p(g_k)$$
$$= \int_{-\infty}^{\infty} dg_k\, p(g_k) \prod_{i=1}^{k-1} p(u_i \mid g_k)$$
$$= \int_{-\infty}^{\infty} dg_k\, f(g_k, 0) \prod_{i=1}^{k-1} f(x_k + g_k + u_i,\, x_i)$$
$$= \int_{-\infty}^{\infty} dg_k\, e^{-g_k - e^{-g_k}} \prod_{i=1}^{k-1} e^{x_i - u_i - x_k - g_k - e^{x_i - u_i - x_k - g_k}}$$
We perform a change of variables with v = e^{−g_k}, so dv = −e^{−g_k} dg_k and dg_k = −dv e^{g_k} = −dv/v, and define u_k = 0 to simplify notation:
$$p(u_1, \dots, u_{k-1}) = \int_{0}^{\infty} \frac{dv}{v}\; v\, e^{-v} \prod_{i=1}^{k-1} e^{x_i - u_i - x_k}\, v\, e^{-v\, e^{x_i - u_i - x_k}} \tag{15}$$
$$= \Gamma(k)\, \exp\!\Big(\sum_{i=1}^{k-1} (x_i - u_i - x_k)\Big) \Big(\sum_{i=1}^{k} e^{x_i - u_i - x_k}\Big)^{-k} \tag{16}$$
$$= \Gamma(k)\, \exp\!\Big(\sum_{i=1}^{k} (x_i - u_i)\Big)\, e^{-k x_k} \Big(e^{-x_k} \sum_{i=1}^{k} e^{x_i - u_i}\Big)^{-k} \tag{17}$$
$$= \Gamma(k) \Big(\prod_{i=1}^{k} \exp(x_i - u_i)\Big) \Big(\sum_{i=1}^{k} \exp(x_i - u_i)\Big)^{-k} \tag{18}$$
B.2 TRANSFORMING TO A GUMBEL-SOFTMAX
Given samples u_1, ..., u_{k−1} from the centered Gumbel distribution, we can apply a deterministic transformation h to yield the first k − 1 coordinates of the sample from the Gumbel-Softmax:

$$y_{1:k-1} = h(u_{1:k-1}), \qquad h_i(u_{1:k-1}) = \frac{\exp(u_i/\tau)}{1 + \sum_{j=1}^{k-1} \exp(u_j/\tau)} \tag{19}$$

Note that the final coordinate probability y_k is fixed given the first k − 1, as Σ_{i=1}^{k} y_i = 1:

$$y_k = \Big(1 + \sum_{j=1}^{k-1} \exp(u_j/\tau)\Big)^{-1} = 1 - \sum_{j=1}^{k-1} y_j \tag{20}$$

We can thus compute the probability of a sample from the Gumbel-Softmax using the change of variables formula on only the first k − 1 variables:

$$p(y_{1:k}) = p\big(h^{-1}(y_{1:k-1})\big)\, \left|\det\!\left(\frac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}}\right)\right| \tag{21}$$

Thus we need to compute two more pieces: the inverse of h and its Jacobian determinant. The inverse of h is:

$$h_i^{-1}(y_{1:k-1}) = \tau \times \Big(\log y_i - \log\Big(1 - \sum_{j=1}^{k-1} y_j\Big)\Big) = \tau \times (\log y_i - \log y_k) \tag{22}$$
with Jacobian
$$\frac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}} = \tau \times \begin{pmatrix} \frac{1}{y_1} + \frac{1}{y_k} & \frac{1}{y_k} & \cdots & \frac{1}{y_k} \\ \frac{1}{y_k} & \frac{1}{y_2} + \frac{1}{y_k} & \cdots & \frac{1}{y_k} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{y_k} & \frac{1}{y_k} & \cdots & \frac{1}{y_{k-1}} + \frac{1}{y_k} \end{pmatrix} = \tau \times \Big(\mathrm{diag}\big(1/y_{1:k-1}\big) + \tfrac{1}{y_k}\, e e^T\Big) \tag{23}$$

Next, we compute the determinant of the Jacobian:

$$\det\!\left(\frac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}}\right) = \tau^{k-1} \det\!\Big(\mathrm{diag}\big(1/y_{1:k-1}\big)\Big) \det\!\Big(I + \tfrac{1}{y_k}\,\mathrm{diag}(y_{1:k-1})\, e e^T\Big) \tag{24}$$
$$= \tau^{k-1} \Big(\prod_{i=1}^{k-1} y_i^{-1}\Big)\Big(1 + \frac{1}{y_k}\sum_{j=1}^{k-1} y_j\Big) \tag{25}$$
$$= \tau^{k-1} \prod_{i=1}^{k} y_i^{-1} \tag{26}$$
where e is a k − 1 dimensional vector of ones, and we've used the identities det(AB) = det(A) det(B), det(diag(x)) = ∏_i x_i, and det(I + uv^T) = 1 + u^T v. We can then plug into the change of variables formula (Eq. 21) using the density of the centered Gumbel (Eq. 15), the inverse of h (Eq. 22), and its Jacobian determinant (Eq. 26):
$$p(y_1, \dots, y_k) = \Gamma(k) \Big(\prod_{i=1}^{k} \frac{\exp(x_i)\, y_k^{\tau}}{y_i^{\tau}}\Big) \Big(\sum_{i=1}^{k} \frac{\exp(x_i)\, y_k^{\tau}}{y_i^{\tau}}\Big)^{-k} \tau^{k-1} \prod_{i=1}^{k} y_i^{-1} \tag{27}$$
$$= \Gamma(k)\, \tau^{k-1} \Big(\sum_{i=1}^{k} \frac{\exp(x_i)}{y_i^{\tau}}\Big)^{-k} \prod_{i=1}^{k} \frac{\exp(x_i)}{y_i^{\tau+1}} \tag{28}$$
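As a sanity check on this result, the following sketch (illustrative, not from the paper) draws Gumbel-Softmax samples for k = 2 with assumed logits and temperature and compares their histogram against Eq. 28 specialized to two classes.

```python
import numpy as np

rng = np.random.default_rng(2)
x, tau = np.array([0.3, -0.2]), 2.0      # assumed logits and temperature, k = 2

def density_eq28(y1):
    """Eq. 28 specialized to k = 2 (Gamma(2) = 1); y1 in (0, 1)."""
    ys = np.stack([y1, 1.0 - y1])        # (y1, y2) with y2 = 1 - y1
    a = np.exp(x)[:, None]
    s = (a / ys**tau).sum(axis=0)
    return tau * (a / ys**(tau + 1)).prod(axis=0) / s**2

g = rng.gumbel(size=(200000, 2))
z = (x + g) / tau
y1 = np.exp(z[:, 0] - np.logaddexp(z[:, 0], z[:, 1]))   # first softmax coordinate

hist, edges = np.histogram(y1, bins=50, range=(0, 1), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.abs(hist - density_eq28(mid)).max())   # should be small
```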
1611.00712 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 |
THE CONCRETE DISTRIBUTION: A CONTINUOUS RELAXATION OF DISCRETE RANDOM VARIABLES
Chris J. Maddison1,2, Andriy Mnih1, & Yee Whye Teh1 1DeepMind, London, United Kingdom 2University of Oxford, Oxford, United Kingdom cmaddis@stats.ox.ac.uk
# ABSTRACT
The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce CONCRETE random variables: CONtinuous relaxations of disCRETE random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
# 1 INTRODUCTION
Software libraries for automatic differentiation (AD) (Abadi et al., 2015; Theano Development Team, 2016) are enjoying broad use, spurred on by the success of neural networks on some of the most challenging problems of machine learning. The dominant mode of development in these libraries is to define a forward parametric computation, in the form of a directed acyclic graph, that computes the desired objective. If the components of the graph are differentiable, then a backwards computation for the gradient of the objective can be derived automatically with the chain rule. The ease of use and unreasonable effectiveness of gradient descent has led to an explosion in the diversity of architectures and objective functions. Thus, expanding the range of useful continuous operations can have an outsized impact on the development of new models. For example, a topic of recent attention has been the optimization of stochastic computation graphs from samples of their states. Here, the observation that AD "just works" when stochastic nodes1 can be reparameterized into deterministic functions of their parameters and a fixed noise distribution (Kingma & Welling, 2013; Rezende et al., 2014), has liberated researchers in the development of large complex stochastic architectures (e.g. Gregor et al., 2015).

Computing with discrete stochastic nodes still poses a significant challenge for AD libraries. Deterministic discreteness can be relaxed and approximated reasonably well with sigmoidal functions or the softmax (see e.g., Grefenstette et al., 2015; Graves et al., 2016), but, if a distribution over discrete states is needed, there is no clear solution. There are well known unbiased estimators for the gradients
1For our purposes a stochastic node of a computation graph is just a random variable whose distribution depends in some deterministic way on the values of the parent nodes.
of the parameters of a discrete stochastic node from samples. While these can be made to work with AD, they involve special casing and defining surrogate objectives (Schulman et al., 2015), and even then they can have high variance. Still, reasoning about discrete computation comes naturally to humans, and so, despite the difficulty associated, many modern architectures incorporate discrete stochasticity (Mnih et al., 2014; Xu et al., 2015; Kočiský et al., 2016).

This work is inspired by the observation that many architectures treat discrete nodes continuously, and gradients rich with counterfactual information are available for each of their possible states. We introduce a CONtinuous relaxation of disCRETE random variables, CONCRETE for short, which allow gradients to flow through their states. The Concrete distribution is a new parametric family of continuous distributions on the simplex with closed form densities. Sampling from the Concrete distribution is as simple as taking the softmax of logits perturbed by fixed additive noise. This reparameterization means that Concrete stochastic nodes are quick to implement in a way that "just works" with AD. Crucially, every discrete random variable corresponds to the zero temperature limit of a Concrete one. In this view optimizing an objective over an architecture with discrete stochastic nodes can be accomplished by gradient descent on the samples of the corresponding Concrete relaxation. When the objective depends, as in variational inference, on the log-probability of discrete nodes, the Concrete density is used during training in place of the discrete mass. At test time, the graph with discrete nodes is evaluated.

The paper is organized as follows. We provide a background on stochastic computation graphs and their optimization in Section 2. Section 3 reviews a reparameterization for discrete random variables, introduces the Concrete distribution, and discusses its application as a relaxation. Section 4 reviews related work. In Section 5 we present results on a density estimation task and a structured prediction task on the MNIST and Omniglot datasets. In Appendices C and F we provide details on the practical implementation and use of Concrete random variables. When comparing the effectiveness of gradients obtained via Concrete relaxations to a state-of-the-art method (VIMCO, Mnih & Rezende, 2016), we find that they are competitive, occasionally outperforming and occasionally underperforming, all the while being implemented in an AD library without special casing.
2 BACKGROUND
2.1 OPTIMIZING STOCHASTIC COMPUTATION GRAPHS
Stochastic computation graphs (SCGs) provide a formalism for specifying input-output mappings, potentially stochastic, with learnable parameters using directed acyclic graphs (see Schulman et al. (2015) for a review). The state of each non-input node in such a graph is obtained from the states of its parent nodes by either evaluating a deterministic function or sampling from a conditional distribution. Many training objectives in supervised, unsupervised, and reinforcement learning can be expressed in terms of SCGs.
To optimize an objective represented as a SCG, we need estimates of its parameter gradients. We will concentrate on graphs with some stochastic nodes (backpropagation covers the rest). For simplicity, we restrict our attention to graphs with a single stochastic node X. We can interpret the forward pass in the graph as first sampling X from the conditional distribution p_φ(x) of the stochastic node given its parents, then evaluating a deterministic function f_θ(x) at X. We can think of f_θ(X) as a noisy objective, and we are interested in optimizing its expected value L(θ, φ) = E_{X∼p_φ(x)}[f_θ(X)] w.r.t. parameters θ, φ.

In general, both the objective and its gradients are intractable. We will side-step this issue by estimating them with samples from p_φ(x). The gradient w.r.t. the parameters θ has the form

$$\nabla_\theta L(\theta, \phi) = \nabla_\theta\, \mathbb{E}_{X \sim p_\phi(x)}[f_\theta(X)] = \mathbb{E}_{X \sim p_\phi(x)}[\nabla_\theta f_\theta(X)] \tag{1}$$
and can be easily estimated using Monte Carlo sampling:
$$\nabla_\theta L(\theta, \phi) \approx \frac{1}{s} \sum_{i=1}^{s} \nabla_\theta f_\theta(X^i), \tag{2}$$

where X^i ∼ p_φ(x) i.i.d. The more challenging task is to compute the gradient w.r.t. the parameters φ of p_φ(x). The expression obtained by differentiating the expected objective,

$$\nabla_\phi L(\theta, \phi) = \nabla_\phi \int p_\phi(x)\, f_\theta(x)\, dx = \int f_\theta(x)\, \nabla_\phi p_\phi(x)\, dx, \tag{3}$$
does not have the form of an expectation w.r.t. x and thus does not directly lead to a Monte Carlo gradient estimator. However, there are two ways of getting around this difficulty which lead to the two classes of estimators we will now discuss.
2.2 SCORE FUNCTION ESTIMATORS
The score function estimator (SFE, Fu, 2006), also known as the REINFORCE (Williams, 1992) or likelihood-ratio estimator (Glynn, 1990), is based on the identity ∇_φ p_φ(x) = p_φ(x) ∇_φ log p_φ(x), which allows the gradient in Eq. 3 to be written as an expectation: ∇_φ L(θ, φ) = E_{X∼p_φ(x)}[f_θ(X) ∇_φ log p_φ(X)]. Estimating this expectation using naive Monte Carlo gives the estimator

$$\nabla_\phi L(\theta, \phi) \approx \frac{1}{s} \sum_{i=1}^{s} f_\theta(X^i)\, \nabla_\phi \log p_\phi(X^i), \tag{4}$$
where X^i ∼ p_φ(x) i.i.d. This is a very general estimator that is applicable whenever log p_φ(x) is differentiable w.r.t. φ. As it does not require f_θ(x) to be differentiable or even continuous as a function of x, the SFE can be used with both discrete and continuous random variables.
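The generality is easy to see in a small sketch (names are illustrative): the estimator only requires evaluations of f and the score ∇_φ log p_φ(x). Here we assume a single Bernoulli node with logit φ.

```python
import numpy as np

rng = np.random.default_rng(3)

def sfe_gradient(phi, f, n_samples=100000):
    """Score-function estimate (Eq. 4) of d/dphi E[f(X)] for a Bernoulli
    node X with P(X = 1) = sigmoid(phi)."""
    p = 1.0 / (1.0 + np.exp(-phi))
    xs = rng.binomial(1, p, size=n_samples)
    score = xs - p            # d/dphi log p(x) for a Bernoulli with logit phi
    return np.mean(f(xs) * score)

# f need not be differentiable; only evaluations of f are required.
print(sfe_gradient(0.0, lambda x: (x - 0.45) ** 2))
```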
Though the basic version of the estimator can suffer from high variance, various variance reduction techniques can be used to make the estimator much more effective (Greensmith et al., 2004). Baselines are the most important and widely used of these techniques (Williams, 1992). A number of score function estimators have been developed in machine learning (Paisley et al., 2012; Gregor et al., 2013; Ranganath et al., 2014; Mnih & Gregor, 2014; Titsias & Lázaro-Gredilla, 2015; Gu et al., 2016), which differ primarily in the variance reduction techniques used.
2.3 REPARAMETERIZATION TRICK
In many cases we can sample from p_φ(x) by first sampling Z from some fixed distribution q(z) and then transforming the sample using some function g_φ(z). For example, a sample from Normal(µ, σ²) can be obtained by sampling Z from the standard form of the distribution Normal(0, 1) and then transforming it using g_{µ,σ}(Z) = µ + σZ. This two-stage reformulation of the sampling process, called the reparameterization trick, allows us to transfer the dependence on φ from p into f by writing f_θ(x) = f_θ(g_φ(z)) for x = g_φ(z), making it possible to reduce the problem of estimating the gradient w.r.t. parameters of a distribution to the simpler problem of estimating the gradient w.r.t. parameters of a deterministic function.

Having reparameterized p_φ(x), we can now express the objective as an expectation w.r.t. q(z):

$$L(\theta, \phi) = \mathbb{E}_{X \sim p_\phi(x)}[f_\theta(X)] = \mathbb{E}_{Z \sim q(z)}[f_\theta(g_\phi(Z))] \tag{5}$$

As q(z) does not depend on φ, we can estimate the gradient w.r.t. φ in exactly the same way we estimated the gradient w.r.t. θ in Eq. 1. Assuming differentiability of f_θ(x) w.r.t. x and of g_φ(z) w.r.t. φ and using the chain rule gives

$$\nabla_\phi L(\theta, \phi) = \mathbb{E}_{Z \sim q(z)}[\nabla_\phi f_\theta(g_\phi(Z))] = \mathbb{E}_{Z \sim q(z)}[f_\theta'(g_\phi(Z))\, \nabla_\phi g_\phi(Z)] \tag{6}$$
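For the Normal example above, the pathwise estimator is a few lines (a sketch with illustrative names, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def reparam_gradient(mu, sigma, df, n_samples=100000):
    """Pathwise estimate of the gradient of E_{X~N(mu, sigma^2)}[f(X)]
    w.r.t. (mu, sigma): X = mu + sigma * Z, so dX/dmu = 1 and dX/dsigma = Z."""
    z = rng.standard_normal(n_samples)
    x = mu + sigma * z
    return np.mean(df(x)), np.mean(df(x) * z)

# With f(x) = x^2, df(x) = 2x, the true gradients are (2*mu, 2*sigma).
print(reparam_gradient(0.5, 1.0, lambda x: 2.0 * x))
```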
The reparameterization trick, introduced in the context of variational inference independently by Kingma & Welling (2014), Rezende et al. (2014), and Titsias & Lázaro-Gredilla (2014), is usually the estimator of choice when it is applicable. For continuous latent variables which are not directly reparameterizable, new hybrid estimators have also been developed, by combining partial reparameterizations with score function estimators (Ruiz et al., 2016; Naesseth et al., 2016).
2.4 APPLICATION: VARIATIONAL TRAINING OF LATENT VARIABLE MODELS
We will now see how the task of training latent variable models can be formulated in the SCG framework. Such models assume that each observation x is obtained by first sampling a vector of latent variables Z from the prior p_θ(z) before sampling the observation itself from p_θ(x | z). Thus the probability of observation x is

$$p_\theta(x) = \sum_z p_\theta(z)\, p_\theta(x \mid z). \tag{7}$$

Maximum likelihood training of such models is infeasible, because the log-likelihood (LL) objective L(θ) = log p_θ(x) = log E_{Z∼p_θ(z)}[p_θ(x | Z)]
(a) Discrete(α) (b) Concrete(α, λ)

Figure 1: Visualization of sampling graphs for 3-ary discrete D ∼ Discrete(α) and 3-ary Concrete X ∼ Concrete(α, λ). White operations are deterministic, blue are stochastic, rounded are continuous, square discrete. The top node is an example state; brightness indicates a value in [0,1].
is hard to compute, with the expectation sitting inside the log. The multi-sample variational objective (Burda et al., 2016),

$$L_m(\theta, \phi) = \mathbb{E}_{Z^{1:m} \sim q_\phi(z|x)}\left[\log\left(\frac{1}{m}\sum_{i=1}^{m}\frac{p_\theta(Z^i, x)}{q_\phi(Z^i \mid x)}\right)\right], \tag{8}$$

provides a convenient alternative which has precisely the form we considered in Section 2.1. This approach relies on introducing an auxiliary distribution q_φ(z | x) with its own parameters, which serves as an approximation to the intractable posterior p_θ(z | x). The model is trained by jointly maximizing the objective w.r.t. the parameters of p and q. The number of samples used inside the objective m allows trading off the computational cost against the tightness of the bound. For m = 1, L_m(θ, φ) becomes the widely used evidence lower bound (ELBO, Hoffman et al., 2013) on log p_θ(x), while for m > 1, it is known as the importance weighted bound (Burda et al., 2016).
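A minimal sketch of evaluating Eq. 8 from samples (assuming the caller supplies log p_θ(z^i, x) and log q_φ(z^i | x) for m i.i.d. samples; names are illustrative):

```python
import numpy as np

def multisample_bound(log_p_joint, log_q):
    """Monte Carlo value of L_m (Eq. 8) from m i.i.d. samples z^i ~ q(z|x):
    log((1/m) * sum_i p(z^i, x) / q(z^i | x)), computed in log space."""
    log_w = np.asarray(log_p_joint) - np.asarray(log_q)
    return np.logaddexp.reduce(log_w) - np.log(len(log_w))
```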
3 THE CONCRETE DISTRIBUTION
3.1 DISCRETE RANDOM VARIABLES AND THE GUMBEL-MAX TRICK
To motivate the construction of Concrete random variables, we review a method for sampling from discrete distributions called the Gumbel-Max trick (Luce, 1959; Yellott, 1977; Papandreou & Yuille, 2011; Hazan & Jaakkola, 2012; Maddison et al., 2014). We restrict ourselves to a representation of discrete states as vectors d ∈ {0, 1}^n with Σ_{k=1}^n d_k = 1. This is a flexible representation in a computation graph; to achieve an integral representation take the inner product of d with (1, . . . , n), and to achieve a point mass representation in R^m take W d where W ∈ R^{m×n}. Consider an unnormalized parameterization (α_1, . . . , α_n) where α_k ∈ (0, ∞) of a discrete distribution D ∼ Discrete(α). The Gumbel-Max trick proceeds as follows: sample U_k ∼ Uniform(0, 1) i.i.d., find k that maximizes log α_k − log(− log U_k), set D_k = 1 and the remaining D_i = 0 for i ≠ k. Then

$$\mathbb{P}(D_k = 1) = \frac{\alpha_k}{\sum_{i=1}^{n} \alpha_i} \tag{9}$$

In other words, the sampling of a discrete random variable can be refactored into a deterministic function (componentwise addition followed by argmax) of the parameters log α_k and fixed noise distribution − log(− log U_k).

The apparently arbitrary choice of noise gives the trick its name, as − log(− log U) has a Gumbel distribution. This distribution features in extreme value theory (Gumbel, 1954) where it plays a central role similar to the Normal distribution: the Gumbel distribution is stable under max operations, and for some distributions, the order statistics (suitably normalized) of i.i.d. draws approach the Gumbel in distribution. The Gumbel can also be recognized as a log-transformed exponential random variable. So, the correctness of (9) also reduces to a well known result regarding the argmin of exponential random variables. See (Hazan et al., 2016) for a collection of related work, and particularly the chapter (Maddison, 2016) for a proof and generalization of this trick.
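Eq. 9 is easy to verify empirically; the following sketch (with assumed parameters, not the authors' code) compares the sampled class frequencies with the normalized probabilities:

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = np.array([2.0, 0.5, 1.0])        # assumed unnormalized probabilities

def gumbel_max_sample(alpha):
    """Gumbel-Max trick: argmax_k of log(alpha_k) + G_k, G_k = -log(-log U_k)."""
    u = rng.uniform(size=alpha.shape)
    return np.argmax(np.log(alpha) - np.log(-np.log(u)))

draws = np.array([gumbel_max_sample(alpha) for _ in range(50000)])
freq = np.bincount(draws, minlength=3) / len(draws)
print(freq, alpha / alpha.sum())         # should agree closely, per Eq. 9
```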
(a) λ = 0 (b) λ = 1/2 (c) λ = 1 (d) λ = 2
Figure 2: A discrete distribution with unnormalized probabilities (α1, α2, α3) = (2, 0.5, 1) and three corresponding Concrete densities at increasing temperatures λ. Each triangle represents the set of points (y1, y2, y3) in the simplex ∆² = {(y1, y2, y3) | y_k ∈ [0, 1], y1 + y2 + y3 = 1}. For λ = 0 the size of white circles represents the mass assigned to each vertex of the simplex under the discrete distribution. For λ ∈ {2, 1, 0.5} the intensity of the shading represents the value of p_{α,λ}(y).
3.2 CONCRETE RANDOM VARIABLES
The derivative of the argmax is 0 everywhere except at the boundary of state changes, where it is undefined. For this reason the Gumbel-Max trick is not a suitable reparameterization for use in SCGs with AD. Here we introduce the Concrete distribution motivated by considering a graph which is the same as Figure 1a up to a continuous relaxation of the argmax computation, see Figure 1b. This will ultimately allow the optimization of the parameters α_k via gradients. The argmax computation returns states on the vertices of the simplex ∆^{n−1} = {x ∈ R^n | x_k ∈ [0, 1], Σ_{k=1}^n x_k = 1}. The idea behind Concrete random variables is to relax the state of a discrete variable from the vertices into the interior, where it is a random probability vector: a vector of numbers between 0 and 1 that sum to 1. To sample a Concrete random variable X ∈ ∆^{n−1} at temperature λ ∈ (0, ∞) with parameters α_k ∈ (0, ∞), sample G_k ∼ Gumbel i.i.d. and set

$$X_k = \frac{\exp((\log \alpha_k + G_k)/\lambda)}{\sum_{i=1}^{n} \exp((\log \alpha_i + G_i)/\lambda)} \tag{10}$$
The softmax computation of (10) smoothly approaches the discrete argmax computation as λ → 0 while preserving the relative order of the Gumbels log α_k + G_k. So, imagine making a series of forward passes on the graphs of Figure 1. Both graphs return a stochastic value for each forward pass, but for smaller temperatures the outputs of Figure 1b become more discrete and eventually indistinguishable from a typical forward pass of Figure 1a.

The distribution of X sampled via (10) has a closed form density on the simplex. Because there may be other ways to sample a Concrete random variable, we take the density to be its definition.

Definition 1 (Concrete Random Variables). Let α ∈ (0, ∞)^n and λ ∈ (0, ∞). X has a Concrete distribution X ∼ Concrete(α, λ) with location α and temperature λ, if its density is:

$$p_{\alpha,\lambda}(x) = (n-1)!\, \lambda^{n-1} \prod_{k=1}^{n} \left(\frac{\alpha_k\, x_k^{-\lambda-1}}{\sum_{i=1}^{n} \alpha_i\, x_i^{-\lambda}}\right) \tag{11}$$
Proposition 1 lists a few properties of the Concrete distribution. (a) is confirmation that our definition corresponds to the sampling routine (10), (b) confirms that rounding a Concrete random variable results in the discrete random variable whose distribution is described by the logits log α_k, (c) confirms that taking the zero temperature limit of a Concrete random variable is the same as rounding. Finally, (d) is a convexity result on the density. We prove these results in Appendix A.

Proposition 1 (Some Properties of Concrete Random Variables). Let X ∼ Concrete(α, λ) with location parameters α ∈ (0, ∞)^n and temperature λ ∈ (0, ∞), then

(a) (Reparameterization) If G_k ∼ Gumbel i.i.d., then X_k =_d exp((log α_k + G_k)/λ) / Σ_{i=1}^n exp((log α_i + G_i)/λ),

(b) (Rounding) P(X_k > X_i for i ≠ k) = α_k / (Σ_{i=1}^n α_i),

(c) (Zero temperature) P(lim_{λ→0} X_k = 1) = α_k / (Σ_{i=1}^n α_i),
(a) λ = 0 (b) λ = 1/2 (c) λ = 1 (d) λ = 2
Figure 3: A visualization of the binary special case. (a) shows the discrete trick, which works by passing a noisy logit through the unit step function. (b), (c), (d) show Concrete relaxations; the horizontal blue densities show the density of the input distribution and the vertical densities show the corresponding Binary Concrete density on (0, 1) for varying λ.
(d) (Convex eventually) If λ ≤ (n − 1)^{−1}, then p_{α,λ}(x) is log-convex in x.
The binary case of the Gumbel-Max trick simplifies to passing additive noise through a step function. The corresponding Concrete relaxation is implemented by passing additive noise through a sigmoid; see Figure 3. We cover this more thoroughly in Appendix B, along with a cheat sheet (Appendix F) on the density and implementation of all the random variables discussed in this work.
3.3 CONCRETE RELAXATIONS
Concrete random variables may have some intrinsic value, but we investigate them simply as surrogates for optimizing a SCG with discrete nodes. When it is computationally feasible to integrate over the discreteness, that will always be a better choice. Thus, we consider the use case of optimizing a large graph with discrete stochastic nodes from samples.

First, we outline our proposal for how to use Concrete relaxations by considering a variational autoencoder with a single discrete latent variable. Let P_a(d) be the mass function of some n-dimensional one-hot discrete random variable with unnormalized probabilities a ∈ (0, ∞)^n and p_θ(x|d) some distribution over a data point x given d ∈ {0, 1}^n one-hot. The generative model is then p_{θ,a}(x, d) = p_θ(x|d) P_a(d). Let Q_α(d|x) be an approximating posterior over d ∈ {0, 1}^n one-hot whose unnormalized probabilities α(x) ∈ (0, ∞)^n depend on x. All together the variational lowerbound we care about stochastically optimizing is

$$L_1(\theta, a, \alpha) = \mathbb{E}_{D \sim Q_\alpha(d|x)}\left[\log \frac{p_\theta(x \mid D)\, P_a(D)}{Q_\alpha(D \mid x)}\right], \tag{12}$$

with respect to θ, a, and any parameters of α. First, we relax the stochastic computation D ∼ Discrete(α(x)) into Z ∼ Concrete(α(x), λ_1) with density q_{α,λ_1}(z | x). Naively optimizing Eq. 12 with this substitution will result in a non-interpretable objective, which does not necessarily lowerbound log p(x), because the relaxed expectation of the log-ratio is not a KL divergence. Thus we propose "relaxing" the terms P_a(d) and Q_α(d | x) to reflect the true sampling distribution. Thus, the relaxed objective is:

$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z|x)}\left[\log \frac{p_\theta(x \mid Z)\, p_{a,\lambda_2}(Z)}{q_{\alpha,\lambda_1}(Z \mid x)}\right] \tag{13}$$

where p_{a,λ_2}(z) is a Concrete density with location a and temperature λ_2. At test time we evaluate the discrete lowerbound L_1(θ, a, α). Naively implementing Eq. 13 will result in numerical issues. We discuss this and other details in Appendix C.

Thus, the basic paradigm we propose is the following: during training replace every discrete node with a Concrete node at some fixed temperature (or with an annealing schedule). The graphs are identical up to the softmax / argmax computations, so the parameters of the relaxed graph and discrete graph are the same. When an objective depends on the log-probability of discrete variables in the SCG, as the variational lowerbound does, we propose that the log-probability terms are also "relaxed" to represent the true distribution of the relaxed node. At test time the original discrete loss is evaluated. This is possible, because the discretization of any Concrete distribution has a closed form mass function, and the relaxation of any discrete distribution into a Concrete distribution has a closed form density. This is not always possible. For example, the multinomial probit model (the Gumbel-Max trick with Gaussians replacing Gumbels) does not have a closed form mass.
The success of Concrete relaxations will depend on the choice of temperature during training. It is important that the relaxed nodes are not able to represent a precise real valued mode in the interior
of the simplex as in Figure 2d. If this is the case, it is possible for the relaxed random variable to communicate much more than log2(n) bits of information about its α parameters. This might lead the relaxation to prefer the interior of the simplex to the vertices, and as a result there will be a large integrality gap in the overall performance of the discrete graph. Therefore Proposition 1 (d) is a conservative guideline for generic n-ary Concrete relaxations: at temperatures lower than (n − 1)^{−1} we are guaranteed not to have any modes in the interior for any α ∈ (0, ∞)^n. We discuss the subtleties of choosing the temperatures in more detail in Appendix C. Ultimately the best choice of λ and the performance of the relaxation for any specific n will be an empirical question.
# 4 RELATED WORK
Perhaps the most common distribution over the simplex is the Dirichlet with density p_α(x) ∝ ∏_{k=1}^n x_k^{α_k−1} on x ∈ ∆^{n−1}. The Dirichlet can be characterized by strong independence properties, and a great deal of work has been done to generalize it (Connor & Mosimann, 1969; Aitchison, 1985; Rayens & Srinivasan, 1994; Favaro et al., 2011). Of note is the Logistic Normal distribution (Atchison & Shen, 1980), which can be simulated by taking the softmax of n − 1 normal random variables and an nth logit that is deterministically zero. The Logistic Normal is an important distribution, because it can effectively model correlations within the simplex (Blei & Lafferty, 2006). To our knowledge the Concrete distribution does not fall completely into any family of distributions previously described. For λ ≤ 1 the Concrete is in a class of normalized infinitely divisible distributions (S. Favaro, personal communication), and the results of Favaro et al. (2011) apply.

The idea of using a softmax of Gumbels as a relaxation for a discrete random variable was concurrently considered by (Jang et al., 2016), where it was called the Gumbel-Softmax. They do not use the density in the relaxed objective, opting instead to compute all aspects of the graph, including discrete log-probability computations, with the relaxed stochastic state of the graph. In the case of variational inference, this relaxed objective is not a lower bound on the marginal likelihood of the observations, and care needs to be taken when optimizing it. The idea of using sigmoidal functions with additive input noise to approximate discreteness is also not a new idea. (Frey, 1997) introduced nonlinear Gaussian units which computed their activation by passing Gaussian noise with the mean and variance specified by the input to the unit through a nonlinearity, such as the logistic function. Salakhutdinov & Hinton (2009) binarized real-valued codes of an autoencoder by adding (Gaussian) noise to the logits before passing them through the logistic function. Most recently, to avoid the difficulty associated with likelihood-ratio methods (Kočiský et al., 2016) relaxed the discrete sampling operation by sampling a vector of Gaussians instead and passing those through a softmax.

There is another family of gradient estimators that have been studied in the context of training neural networks with discrete units. These are usually collected under the umbrella of straight-through estimators (Bengio et al., 2013; Raiko et al., 2014). The basic idea they use is passing forward discrete values, but taking gradients through the expected value. They have good empirical performance, but have not been shown to be the estimators of any loss function. This is in contrast to gradients from Concrete relaxations, which are biased with respect to the discrete graph, but unbiased with respect to the continuous one.
# 5 EXPERIMENTS
5.1 PROTOCOL
The aim of our experiments was to evaluate the effectiveness of the gradients of Concrete relaxations for optimizing SCGs with discrete nodes. We considered the tasks in (Mnih & Rezende, 2016): structured output prediction and density estimation. Both tasks are difficult optimization problems involving fitting probability distributions with hundreds of latent discrete nodes. We compared the performance of Concrete reparameterizations to two state-of-the-art score function estimators: VIMCO (Mnih & Rezende, 2016) for optimizing the multisample variational objective (m > 1) and NVIL (Mnih & Gregor, 2014) for optimizing the single-sample one (m = 1). We performed the experiments using the MNIST and Omniglot datasets. These are datasets of 28 × 28 images of handwritten digits (MNIST) or letters (Omniglot). For MNIST we used the fixed binarization of Salakhutdinov & Murray (2008) and the standard 50,000/10,000/10,000 split into
Density estimation with binary latent variables; each cell is Concrete / VIMCO NLL:

binary model (200H – 784V)
m    MNIST Test       MNIST Train      Omniglot Test    Omniglot Train
1    107.3 / 104.4    107.5 / 104.2    118.7 / 115.7    117.0 / 112.2
5    104.9 / 101.9    104.9 / 101.5    118.0 / 113.5    115.8 / 110.8
50   104.3 /  98.8    104.2 /  98.3    118.9 / 113.0    115.8 / 110.0

(200H – 200H – 784V)
1    102.1 /  92.9    102.3 /  91.7    116.3 / 109.2    114.4 / 104.8
5     99.9 /  91.7    100.0 /  90.8    116.0 / 107.5    113.5 / 103.6
50    99.5 /  90.7     99.4 /  89.7    117.0 / 108.1    113.9 / 103.6

(200H ∼ 784V)
1     92.1 /  93.8     91.2 /  91.5    108.4 / 116.4    103.6 / 110.3
5     89.5 /  91.4     88.1 /  88.6    107.5 / 118.2    101.4 / 102.3
50    88.5 /  89.3     86.4 /  86.5    108.1 / 116.0    100.5 / 100.8

(200H ∼ 200H ∼ 784V)
1     87.9 /  88.4     86.5 /  85.8    105.9 / 111.7    100.2 / 105.7
5     86.3 /  86.4     84.1 /  82.5    105.8 / 108.2     98.6 / 101.1
50    85.7 /  85.5     83.1 /  81.8    106.8 / 113.2     97.5 /  95.2
Table 1: Density estimation with binary latent variables. When m = 1, VIMCO stands for NVIL.
training/validation/testing sets. For Omniglot we sampled a fixed binarization and used the standard 24,345/8,070 split into training/testing sets. We report the negative log-likelihood (NLL) of the discrete graph on the test data as the performance metric.

All of our models were neural networks with layers of n-ary discrete stochastic nodes with values on the corners of the hypercube {−1, 1}^{log2(n)}. The distributions were parameterized by n real values log α_k ∈ R, with each node sampled as D ∼ Discrete(α) with n states. Model descriptions are of the form "(200V–200H–784V)", read from left to right. This describes the order of conditional sampling, again from left to right, with each integer representing the number of stochastic units in a layer. The letters V and H represent observed and latent variables, respectively. If the leftmost layer is H, then it was sampled unconditionally from some parameters. Conditioning functions are described by "–" and "∼", where "–" means a linear function of the previous layer and "∼" means a non-linear function. A "layer" of these units is simply the concatenation of some number of independent nodes whose parameters are determined as a function of the previous layer. For example a 240 binary layer is a factored distribution over the {−1, 1}^{240} hypercube. Whereas a 240 8-ary layer can be seen as a distribution over the same hypercube where each of the 80 triples of units are sampled independently from an 8 way discrete distribution over {−1, 1}^3. All models were initialized with the heuristic of Glorot & Bengio (2010) and optimized using Adam (Kingma & Ba, 2014). All temperatures were fixed throughout training. See Appendix D for hyperparameter details.
5.2 DENSITY ESTIMATION
Density estimation, or generative modelling, is the problem of fitting the distribution of data. We took the latent variable approach described in Section 2.4 and trained the models by optimizing the variational objective L_m(θ, φ) given by Eq. 8 averaged uniformly over minibatches of data points x. Both our generative models p_θ(z, x) and variational distributions q_φ(z | x) were parameterized with neural networks as described above. We trained models with m ∈ {1, 5, 50} and approximated the NLL with L_{50,000}(θ, φ) averaged uniformly over the whole dataset.

The results are shown in Table 1. In general, VIMCO outperformed Concrete relaxations for linear models and Concrete relaxations outperformed VIMCO for non-linear models. We also tested the effectiveness of Concrete relaxations on generative models with n-ary layers on the L_5(θ, φ) objective. The best 4-ary model achieved test/train NLL 86.7/83.3, the best 8-ary achieved 87.4/84.6 with Concrete relaxations; more complete results are in Appendix E. The relatively poor performance of the 8-ary model may be because moving from 4 to 8 results in a more difficult objective without much added capacity. As a control we trained n-ary models using logistic normals as relaxations of discrete distributions (with retuned temperature hyperparameters). Because the discrete zero temperature limit of logistic Normals is a multinomial probit whose mass function is not known, we evaluated the discrete model by sampling from the discrete distribution parameterized by the logits
Structured prediction; each cell is Concrete / VIMCO NLL:

binary model (392V–240H–240H–392V)
m    Test NLL         Train NLL
1    58.5 / 61.4      54.2 / 59.3
5    54.3 / 54.5      49.2 / 52.7
50   53.4 / 51.8      48.2 / 49.6

(392V–240H–240H–240H–392V)
1    56.3 / 59.7      51.6 / 58.4
5    52.7 / 53.5      46.9 / 51.6
50   52.0 / 50.2      45.9 / 47.9
Figure 4: Results for structured prediction on MNIST comparing Concrete relaxations to VIMCO. When m = 1 VIMCO stands for NVIL. The plot on the right shows the objective (lower is better) for the continuous and discrete graph trained at temperatures λ. In the shaded region, units prefer to communicate real values in the interior of (−1, 1).
learned during training. The best 4-ary model achieved test/train NLL of 88.7/85.0, the best 8-ary model achieved 89.1/85.1.
5.3 STRUCTURED OUTPUT PREDICTION
Structured output prediction is concerned with modelling the high-dimensional distribution of the observation given a context and can be seen as conditional density estimation. We considered the task of predicting the bottom half x_1 of an image of an MNIST digit given its top half x_2, as introduced by Raiko et al. (2014). We followed Raiko et al. (2014) in using a model with layers of discrete stochastic units between the context and the observation. Conditioned on the top half x_2 the network samples from a distribution p_φ(z | x_2) over layers of stochastic units z, then predicts x_1 by sampling from a distribution p_θ(x_1 | z). The training objective for a single observation is

$$L^{SP}_m(\theta, \phi) = \mathbb{E}_{Z^{1:m} \sim p_\phi(z|x_2)}\left[\log\left(\frac{1}{m}\sum_{i=1}^{m} p_\theta(x_1 \mid Z^i)\right)\right].$$

This objective is a special case of L_m(θ, φ) (Eq. 8) where we use the prior p_φ(z | x_2) as the variational distribution. Thus, the objective is a lower bound on log p_{θ,φ}(x_1 | x_2). We trained the models by optimizing L^{SP}_m(θ, φ) for m ∈ {1, 5, 50} averaged uniformly over minibatches and evaluated them by computing L^{SP}_{100}(θ, φ) averaged uniformly over the entire dataset. The results are shown in Figure 4. Concrete relaxations more uniformly outperformed VIMCO in this instance. We also trained n-ary (392V–240H–240H–240H–392V) models on the L^{SP}_5(θ, φ) objective using the best temperature hyperparameters from density estimation. 4-ary achieved a test/train NLL of 55.4/46.0 and 8-ary achieved 54.7/44.8. As opposed to density estimation, increasing arity uniformly improved the models. We also investigated the hypothesis that for higher temperatures Concrete relaxations might prefer the interior of the interval to the boundary points {−1, 1}. Figure 4 was generated with a binary (392V–240H–240H–240H–392V) model trained on L^{SP}_1(θ, φ).
# 6 CONCLUSION
We introduced the Concrete distribution, a continuous relaxation of discrete random variables. The Concrete distribution is a new distribution on the simplex with a closed form density parameterized by a vector of positive location parameters and a positive temperature. Crucially, the zero temperature limit of every Concrete distribution corresponds to a discrete distribution, and any discrete distribution can be seen as the discretization of a Concrete one. The application we considered was training stochastic computation graphs with discrete stochastic nodes. The gradients of Concrete relaxations are biased with respect to the original discrete objective, but they are low variance unbiased estimators of a continuous surrogate objective. We showed in a series of experiments that stochastic nodes with Concrete distributions can be used effectively to optimize the parameters of a stochastic computation graph with discrete stochastic nodes. We did not find that annealing or automatically tuning the temperature was important for these experiments, but it remains interesting and possibly valuable future work.
ACKNOWLEDGMENTS
We thank Jimmy Ba for the excitement and ideas in the early days, Stefano Favarro for some analysis of the distribution. We also thank Gabriel Barth-Maron and Roger Grosse.

REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

J Aitchison. A general class of distributions on the simplex. Journal of the Royal Statistical Society. Series B (Methodological), pp. 136–146, 1985.

J Atchison and Sheng M Shen. Logistic-normal distributions: Some properties and uses. Biometrika, 67(2):261–272, 1980.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

David Blei and John Lafferty. Correlated topic models. 2006.

Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2016.

Robert J Connor and James E Mosimann. Concepts of independence for proportions with a generalization of the dirichlet distribution. Journal of the American Statistical Association, 64(325):194–206, 1969.

Stefano Favaro, Georgia Hadjicharalambous, and Igor Prünster. On a class of distributions on the simplex. Journal of Statistical Planning and Inference, 141(9):2987–3004, 2011.

Brendan Frey. Continuous sigmoidal belief networks trained using slice sampling. In NIPS, 1997.

Michael C Fu. Gradient estimation. Handbooks in operations research and management science, 13:575–616, 2006.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249–256, 2010.

Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. JMLR, 5, 2004.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828–1836, 2015.

Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. ICLR, 2016.

Emil Julius Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures. Number 33. US Govt. Print. Office, 1954.
Tamir Hazan and Tommi Jaakkola. On the partition function and random maximum a-posteriori perturbations. In ICML, 2012.
Tamir Hazan, George Papandreou, and Daniel Tarlow. Perturbation, Optimization, and Statistics. MIT Press, 2016.
Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. JMLR, 14(1):1303–1347, 2013.

E. Jang, S. Gu, and B. Poole. Categorical Reparameterization with Gumbel-Softmax. ArXiv e-prints, November 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. ICLR, 2014.

Tomáš Kočiský, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. Semantic parsing with semi-supervised sequential autoencoders. In EMNLP, 2016.

R. Duncan Luce. Individual Choice Behavior: A Theoretical Analysis. New York: Wiley, 1959.

Chris J Maddison. A Poisson process model for Monte Carlo. In Tamir Hazan, George Papandreou, and Daniel Tarlow (eds.), Perturbation, Optimization, and Statistics, chapter 7. MIT Press, 2016.

Chris J Maddison, Daniel Tarlow, and Tom Minka. A* Sampling. In NIPS, 2014.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.
Andriy Mnih and Danilo Jimenez Rezende. Variational inference for monte carlo objectives. In ICML, 2016.
Volodymyr Mnih, Nicolas Heess, Alex Graves, and koray kavukcuoglu. Recurrent Models of Visual Attention. In NIPS, 2014.
Christian A Naesseth, Francisco JR Ruiz, Scott W Linderman, and David M Blei. Rejection sam- pling variational inference. arXiv preprint arXiv:1610.05683, 2016.
John William Paisley, David M. Blei, and Michael I. Jordan. Variational bayesian inference with stochastic search. In ICML, 2012.
George Papandreou and Alan L Yuille. Perturb-and-map random ï¬elds: Using discrete optimization to learn and sample from energy models. In ICCV, 2011.
Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.
Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black box variational inference. In AISTATS, 2014.
William S Rayens and Cidambi Srinivasan. Dependence properties of generalized liouville distri- butions on the simplex. Journal of the American Statistical Association, 89(428):1465â1470, 1994.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
Francisco JR Ruiz, Michalis K Titsias, and David M Blei. The generalized reparameterization gradient. arXiv preprint arXiv:1610.02287, 2016.
Ruslan Salakhutdinov and Geoffrey Hinton. Semantic hashing. International Journal of Approxi- mate Reasoning, 50(7):969â978, 2009.
Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In ICML, 2008.
John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In NIPS, 2015.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/ 1605.02688.
Michalis Titsias and Miguel L´azaro-Gredilla. Doubly stochastic variational bayes for non-conjugate inference. In Tony Jebara and Eric P. Xing (eds.), ICML, 2014.
Michalis Titsias and Miguel L´azaro-Gredilla. Local expectation gradients for black box variational inference. In NIPS, 2015.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.

John I Yellott. The relationship between Luce's choice axiom, Thurstone's theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15(2):109–144, 1977.
# A PROOF OF PROPOSITION 1
Let X ∼ Concrete(α, λ) with location parameters α ∈ (0, ∞)^n and temperature λ ∈ (0, ∞).

1. Let G_k ∼ Gumbel i.i.d., and consider

$$Y_k = \frac{\exp((\log \alpha_k + G_k)/\lambda)}{\sum_{i=1}^{n} \exp((\log \alpha_i + G_i)/\lambda)}$$

Let Z_k = log α_k + G_k, which has density

$$\alpha_k \exp(-z_k) \exp(-\alpha_k \exp(-z_k))$$

We will consider the invertible transformation

$$F(z_1, \dots, z_n) = (y_1, \dots, y_{n-1}, c)$$

where

$$y_k = \exp(z_k/\lambda)\, c^{-1}, \qquad c = \sum_{i=1}^{n} \exp(z_i/\lambda)$$

then F^{−1}(y_1, …, y_{n−1}, c) = (λ(log y_1 + log c), …, λ(log y_{n−1} + log c), λ(log y_n + log c))

where y_n = 1 − Σ_{i=1}^{n−1} y_i. This has Jacobian
$$\begin{pmatrix} \lambda y_1^{-1} & 0 & \cdots & 0 & \lambda c^{-1} \\ 0 & \lambda y_2^{-1} & \cdots & 0 & \lambda c^{-1} \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & \lambda y_{n-1}^{-1} & \lambda c^{-1} \\ -\lambda y_n^{-1} & -\lambda y_n^{-1} & \cdots & -\lambda y_n^{-1} & \lambda c^{-1} \end{pmatrix}$$

by adding y_i/y_n times each of the top n − 1 rows to the bottom row we see that this Jacobian has the same determinant as

$$\begin{pmatrix} \lambda y_1^{-1} & 0 & \cdots & 0 & \lambda c^{-1} \\ 0 & \lambda y_2^{-1} & \cdots & 0 & \lambda c^{-1} \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & \lambda y_{n-1}^{-1} & \lambda c^{-1} \\ 0 & 0 & \cdots & 0 & \lambda (c\, y_n)^{-1} \end{pmatrix}$$

and thus the determinant is equal to

$$\lambda^{n}\, c^{-1} \prod_{i=1}^{n} y_i^{-1}$$
all together we have the density
$$\lambda^{n} c^{-1} \prod_{i=1}^{n} y_i^{-1} \prod_{k=1}^{n} \alpha_k \exp(-\lambda \log y_k - \lambda \log c) \exp(-\alpha_k \exp(-\lambda \log y_k - \lambda \log c))$$

With the change of variables r = log c we have density

$$\lambda^{n} \Big(\prod_{k=1}^{n} \alpha_k\, y_k^{-\lambda-1}\Big) \exp(-n\lambda r) \exp\Big(-e^{-\lambda r} \sum_{i=1}^{n} \alpha_i\, y_i^{-\lambda}\Big)$$

letting γ = log(Σ_{i=1}^{n} α_i y_i^{−λ})

$$= \lambda^{n} \Big(\prod_{k=1}^{n} \alpha_k\, y_k^{-\lambda-1}\Big) \exp(-n\lambda r) \exp(-\exp(-\lambda r + \gamma))$$

integrating out r

$$p(y_1, \dots, y_{n-1}) = \lambda^{n} \Big(\prod_{k=1}^{n} \alpha_k\, y_k^{-\lambda-1}\Big) \exp(-n\gamma)\, \Gamma(n)\, \lambda^{-1} = (n-1)!\, \lambda^{n-1} \prod_{k=1}^{n} \frac{\alpha_k\, y_k^{-\lambda-1}}{\sum_{i=1}^{n} \alpha_i\, y_i^{-\lambda}}$$

Thus Y =_d X.
2. Follows directly from (a) and the Gumbel-Max trick (Maddison, 2016).

3. Follows directly from (a) and the Gumbel-Max trick (Maddison, 2016).

4. Let λ ≤ (n − 1)^{−1}. The density of X can be rewritten as

$$p_{\alpha,\lambda}(x) \propto \prod_{k=1}^{n} x_k^{\lambda(n-1)-1} \Big(\sum_{k=1}^{n} \alpha_k \prod_{j \neq k} x_j^{\lambda}\Big)^{-n}$$

Thus, the log density is, up to an additive constant C,

$$\log p_{\alpha,\lambda}(x) = C + \sum_{k=1}^{n} (\lambda(n-1) - 1) \log x_k - n \log\Big(\sum_{k=1}^{n} \alpha_k \prod_{j \neq k} x_j^{\lambda}\Big)$$

If λ ≤ (n − 1)^{−1}, then λ(n − 1) − 1 ≤ 0, so each term (λ(n − 1) − 1) log x_k is convex. For the last term, each ∏_{j≠k} x_j^λ is concave (its exponents are nonnegative and sum to λ(n − 1) ≤ 1), a nonnegative sum of concave functions is concave, and − log of a concave function is convex. Thus their composition is convex. The sum of convex terms is convex, finishing the proof.
# B THE BINARY SPECIAL CASE
Bernoulli random variables are an important special case of discrete distributions taking states in {0, 1}. Here we consider the binary special case of the Gumbel-Max trick from Figure 1a along with the corresponding Concrete relaxation.

Let D ∼ Discrete(α) for α ∈ (0, ∞)² be a two state discrete random variable on {0, 1}² such that D_1 + D_2 = 1, parameterized as in Figure 1a by α_1, α_2 > 0:

$$\mathbb{P}(D_1 = 1) = \frac{\alpha_1}{\alpha_1 + \alpha_2} \tag{14}$$
The distribution is degenerate, because D_1 = 1 − D_2. Therefore we consider just D_1. Under the Gumbel-Max reparameterization, the event that D_1 = 1 is the event {G_1 + log α_1 > G_2 + log α_2} where G_1, G_2 ∼ Gumbel i.i.d. The difference of two Gumbels is a Logistic distribution: G_1 − G_2 ∼ Logistic, which can be sampled in the following way, G_1 − G_2 =_d log U − log(1 − U) where U ∼ Uniform(0, 1). So, if α = α_1/α_2, then we have

$$\mathbb{P}(D_1 = 1) = \mathbb{P}(G_1 + \log \alpha_1 > G_2 + \log \alpha_2) = \mathbb{P}(\log U - \log(1 - U) + \log \alpha > 0) \tag{15}$$

Thus, D_1 =_d H(log α + log U − log(1 − U)), where H is the unit step function.

Correspondingly, we can consider the Binary Concrete relaxation that results from this process. As in the n-ary case, we consider the sampling routine for a Binary Concrete random variable X ∈ (0, 1): sample L ∼ Logistic and set

$$X = \frac{1}{1 + \exp(-(\log \alpha + L)/\lambda)} \tag{16}$$
# ⬠(0,1) temperature A, if
# X
pα,λ(x) = λαxâλâ1(1 (αxâλ + (1 x)âλâ1 x)âλ)2 . (17)
â â
We state without proof the special case of Proposition 1 for Binary Concrete distributions Proposition 2 (Some Properties of Binary Concrete Random Variables). Let X BinConcrete(α, λ) with location parameter α
~
â¼
# â Logistic, then X d=
â
â
1 1+exp(â(log α+L)/λ) ,
(a) (Reparameterization) If L
â¼
â
(b) (Rounding) P (X > 0.5) = α/(1 + α),
(c) (Zero temperature) P (limλâ0 X = 1) = α/(1 + α),
(d) (Convex eventually) If λ 1, then pα,λ(x) is log-convex in x.
â¤
We can generalize the binary circuit beyond Logistic random variables. Consider an arbitrary random variable X with infinite support on R. If Φ : R → [0, 1] is its CDF, then

$$\mathbb{P}(H(X) = 1) = 1 - \Phi(0)$$

If we want this to have a Bernoulli distribution with probability α/(1 + α), then we should solve the equation

$$1 - \Phi(0) = \frac{\alpha}{1 + \alpha}.$$

This gives Φ(0) = 1/(1 + α), which can be accomplished by relocating the random variable Y with CDF Φ to be X = Y − Φ^{−1}(1/(1 + α)).
# C USING CONCRETE RELAXATIONS
In this section we include some tips for implementing and using the Concrete distribution as a relaxation. We use the following notation
# nm
Ï(x) = 1 1 + exp( x) n LΣE k=1 { xk} = log k=1 exp(xk)
â
Both sigmoid and log-sum-exp are common operations in libraries like TensorFlow or theano.
14
Published as a conference paper at ICLR 2017
# C.1 THE BASIC PROBLEM
For the sake of exposition, we consider a simple variational autoencoder with a single discrete random variable and objective L1(θ, a, α) given by Eq. 8 for a single data point x. This scenario will allow us to discuss all of the decisions one might make when using Concrete relaxations.
In particular, )n, let pθ(x Discrete(a) with a network), which is a continuous function of d and parameters θ, let D ⼠hot discrete random variable in (0, 1)n whose unnormalized probabilities α(x) function (possible a neural net with its own parameters) of x. Let Qα(d | D. Then, we care about optimizing
L1(θ, a, α) = E Dâ¼Qα(d|x) log pθ(x D)Pa(D) x) | | Qα(D (18)
with respect to θ, a, and any parameters in α from samples of the SCG required to simulate an estimator of
L1(θ, a, α).
# C.2 WHAT YOU MIGHT RELAX AND WHY
The ï¬rst consideration when relaxing an estimator of Eq. 18 is how to relax the stochastic computa- tion. The only sampling required to simulate Discrete(α(x)). The correspond- L1(θ, a, α) is D Concrete(α(x), λ1) with temperature λ1 and location ing Concrete relaxation is to sample Z â¼ parameters are the the unnormalized probabilities α(x) of D. Let density qα,λ1(z x) be the density | of Z. We get a relaxed objective of the form:
E Dâ¼Qα(d|x) [ · ] â E Zâ¼qα,λ1 (z|x) [ · ] (19)
This choice allows us to take derivatives through the stochastic computaitons of the graph.
The second consideration is which objective to put in place of [ ] in Eq. 19. We will consider the ideal scenario irrespective of numerical issues. In Subsection C.3 we address those numerical x) (which is issues. The central question is how to treat the expectation of the ratio Pa(D)/Qα(D | the KL component of the loss) when Z replaces D.
There are at least three options for how to modify the objective. They are, (20) replace the discrete mass with Concrete densities, (21) relax the computation of the discrete log mass, (22) replace it with the analytic discrete KL.
Pa,ro(Z) E log po (a|Z) + log ââ=â 20 soak ayy [lot volelZ) + log PAK) 20)
n i P,(d) E log pe (|Z) + Z; log ââ_._â 21 zogann, (ln) | 8 Po(a|Z) > 8 O, (dO]x) (21)
# n
E Zâ¼qα,λ1 (z|x) [log pθ(x Z)] + | i=1 Qα(d(i) x) log | Pa(d(i)) Qα(d(i) x) (22)
|
where d(i) is a one-hot binary vector with d(i) i = 1 and pa,λ2 (z) is the density of some Concrete random variable with temperature λ2 with location parameters a. Although (22) or (21) is tempting, we emphasize that these are NOT necessarily lower bounds on log p(x) in the relaxed model. (20) is the only objective guaranteed to be a lower bound:
; - Pa,d2(Z) ; . soaE oy [oePolel2) + toe 2 oy] <toe | polale)Paas(2) dr. 23)
For this reason we consider objectives of the form (20). Choosing (22) or (21) is possible, but the value of these objectives is not interpretable and one should early stop otherwise it will overï¬t to the spurious âKLâ component of the loss. We now consider practical issues with (20) and how to address them. All together we can interpret qα,λ1(z x) as the Concrete relaxation of the variational | posterior and pa,λ2 (z) the relaxation of the prior.
15
Published as a conference paper at ICLR 2017
C.3 WHICH RANDOM VARIABLE TO TREAT AS THE STOCHASTIC NODE
When implementing a SCG like the variational autoencoder example, we need to compute log- probabilities of Concrete random variables. This computation can suffer from underï¬ow, so where possible itâs better to take a different node on the relaxed graph as the stochastic node on which log- likelihood terms are computed. For example, itâs tempting in the case of Concrete random variables to treat the Gumbels as the stochastic node on which the log-likelihood terms are evaluated and the softmax as downstream computation. This will be a looser bound in the context of variational inference than the corresponding bound when treating the Concrete relaxed states as the node.
The solution we found to work well was to work with Concrete random variables in log-space. Consider the following vector in Rn for location parameters α ) and Gk â¼
# loga; + Gi x
log αk + Gk λ n LΣE i=1 Yk = â
therefore we call Y an Y ⼠ExpConcrete(α, λ). The advantage of this reparameterization is that the KL terms of a varia- tional loss are invariant under invertible transformation. exp is invertible, so the KL between two ExpConcrete random variables is the same as the KL between two Concrete random variables. The log-density log κα,λ(y) of an ExpConcrete(α, λ) is also simple to compute:
n n log Ka,,(y) = log((n â 1)!) + (n â 1) log 4 (Spree - an) â nLXE {log ax â Ayn} k=1
Rn such that LΣEn for y tribution is still interpretable in the zero temperature limit. In the limit of λ â random variables become discrete random variables over the one-hot vectors of d where LΣEn 0, 1 } { = 0. Note that the sample space of the ExpConcrete dis- 0 ExpConcrete n } yk} k=1{ â â {ââ n. = 0. exp(Y ) in this case results in the one-hot vectors in dk} , 0
# k=1{ C.3.1 n-ARY CONCRETE
Returning to our initial task of relaxing £1(0,a, a), let Y ~ ExpConcrete(a(x), 1) Ke,, (y|x) be the ExpConcrete latent variable corresponding to the Concrete relaxation of the variational posterior Q. (d|x). Let pa,y, (y) be the density of an ExpConcrete random corresponding to the Concrete relaxation pa,,,(z) of P,(d). All together we can see that Pa,d2(Z)_]
# with density qu,x, (z|x) variable
Pa,d2(Z)_] log po(a|Z) + log 2 | = E ow pote exp(Y)) + log Zar (2|@) da,d,(Z|t) | ¥~rme,; (ule) Ke, (Y |x) (24) Pa,d2(¥)
Therefore, we used ExpConcrete random variables as the stochastic nodes and treated exp as a downstream computation. The relaxation is then,
relax L£1(0,a,a) Y og po(z| exp(Y)) + log oa | ; (25) Y~Ra,d, (ylx) Kadi (Y|x)
and the objective on the RHS is fully reparameterizable and what we chose to optimize.
# C.3.2 BINARY CONCRETE
In the binary case, the logistic function is invertible, so it makes most sense to treat the logit plus noise as the stochastic node. In particular, the binary random node was sample from:
Y = log α + log U â λ log(1 â U ) (26)
Uniform(0, 1) and always followed by Ï as downstream computation. log U where U â U ) is a Logistic random variable, details in the cheat sheet, and so the log-density log gα,λ(y) of this node (before applying Ï) is
log gα,λ(y) = log λ λy + log α 2 log(1 + exp( λy + log α))
â
â
â
16
|
Published as a conference paper at ICLR 2017
All together the relaxation in the binary special case would be
£:(6,a,a)" EB [logpo(x|a(¥)) + 10g 242) ; 27 ¥~ga,a, (y|®) Ja, (¥|2) e
where fa,λ2(y) is the density of a Logistic random variable sampled via Eq. 26 with location a and temperature λ2.
This section had a dense array of densities, so we summarize the relevant ones, along with how to sample from them, in Appendix F.
C.4 CHOOSING THE TEMPERATURE
The success of Concrete relaxations will depend heavily on the choice of temperature during train- ing. It is important that the relaxed nodes are not able to represent a precise real valued mode in the interior of the simplex as in Figure For example, choosing additive Gaussian noise e ~ Normal(0, 1) with the logistic function o(x) to get relaxed Bernoullis of the form o(⬠+ 1) will result in a large mode in the centre of the interval. This is because the tails of the Gaussian distribution drop off much faster than the rate at which o squashes. Even including a temperature parameter does not completely solve this problem; the density of o((⬠+ 4)/A) at any temperature still goes to 0 as its approaches the boundaries 0 and 1 of the unit interval. Therefore |(D]of Proposi- tion|I]is a conservative guideline for generic n-ary Concrete relaxations; at temperatures lower than (n â1)~! we are guaranteed not to have any modes in the interior for any a ⬠(0, 00)â. In the case of the Binary Concrete distribution, the tails of the Logistic additive noise are balanced with the logistic squashing function and for temperatures \ < 1 the density of the Binary Concrete distribu- tion is log-convex for all parameters a, see Figure[3b] Still, practice will often disagree with theory here. The peakiness of the Concrete distribution increases with n, so much higher temperatures are tolerated (usually necessary).
For n = 1 temperatures A < (n â 1)~1 is a good guideline. For n > 1 taking A < (n â 1)~1 is not necessarily a good guideline, although it will depend on n and the specific application. As n â> oo the Concrete distribution becomes peakier, because the random normalizing constant ee exp((log ax + Gx)/A) grows. This means that practically speaking the optimization can tolerate much higher temperatures than (n â 1)~!. We found in the cases n = 4 that \ = 1 was the best temperature and in n = 8, A = 2/3 was the best. Yet A = 2/3 was the best single perform- ing temperature across the n ⬠{2,4,8} cases that we considered. We recommend starting in that ball-park and exploring for any specific application.
When the loss depends on a KL divergence between two Concrete nodes, itâs possible to give the nodes distinct temperatures. We found this to improve results quite dramatically. In the context of our original problem and itâs relaxation:
Y) L£1(0,a, a) = E log po(2| exp(Y)) + lo Por) > 1(0,, «) vn e te) ¢ pe(z| exp(Y)) 8 aa, Ve) |? (28)
Both λ1 for the posterior temperature and λ2 for the prior temperature are tunable hyperparameters.
# D EXPERIMENTAL DETAILS
The basic model architectures we considered are exactly analogous to those in Burda et al. (2016) with Concrete/discrete random variables replacing Gaussians.
# D.1 â VS
â¼
The conditioning functions we used were either linear or non-linear. Non-linear consisted of two tanh layers of the same size as the preceding stochastic layer in the computation graph.
# D.2 n-ARY LAYERS
All our models are neural networks with layers of n-ary discrete stochastic nodes with log2(n)- log2(n). For a generic n-ary node dimensional states on the corners of the hypercube
1, 1 }
{â
17
Published as a conference paper at ICLR 2017
Discrete(α) for sampling proceeds as follows. Sample a n-ary discrete random variable D log2(n) α } {â as columns, then we took Y = CD as downstream computation on D. The corresponding Con- crete relaxation is to take X ) and set (0, ËY = CX. For the binary case, this amounts to simply sampling U Uniform(0, 1) and taking â¼ 1. The corresponding Binary Concrete relaxation is Y = 2H(log U U ) + log α) â ËY = 2Ï((log U 1. U ) + log α)/λ)
â â
â â
â
# D.3 BIAS INITIALIZATION
All biases were initialized to 0 with the exception of the biases in the prior decoder distribution over the 784 or 392 observed units. These were initialized to the logit of the base rate averaged over the respective dataset (MNIST or Omniglot).
# D.4 CENTERING
We also found it beneï¬cial to center the layers of the inference network during training. The activity 1, 1)d of each stochastic layer was centered during training by maintaining a exponentially in ( decaying average with rate 0.9 over minibatches. This running average was subtracted from the activity of the layer before it was updated. Gradients did not ï¬ow throw this computation, so it simply amounted to a dynamic offset. The averages were not updated during the evaluation.
D.5 HYPERPARAMETER SELECTION
All models were initialized with the heuristic of Glorot & Bengio (2010) and optimized using Adam (Kingma & Ba, 2014) with parameters β1 = 0.9, β2 = 0.999 for 107 steps on minibatches of size 64. Hyperparameters were selected on the MNIST dataset by grid search taking the values that performed best on the validation set. Learning rates were chosen from and weight decay from . Two sets of hyperparameters were selected, one for linear models and one for non-linear models. The linear modelsâ hyperparameters were selected with L5(θ, Ï) objective. The non-linear modelsâ hyperpa- the 200Hâ200Hâ784V density model on the rameters were selected with the 200H L5(θ, Ï) objective. For 784V density model on the 200H â¼ density estimation, the Concrete relaxation hyperparameters were (weight decay = 0, learning rate 10â4) for linear and (weight decay = 0, learning rate = 10â4) for non-linear. For structured = 3 prediction Concrete relaxations used (weight decay = 10â3, learning rate = 3
In addition to tuning learning rate and weight decay, we tuned temperatures for the Concrete relax- ations on the density estimation task. We found it valuable to have different values for the prior and posterior distributions, see Eq. 28. In particular, for binary we found that (prior λ2 = 1/2, posterior λ1 = 2/3) was best, for 4-ary we found (prior λ2 = 2/3, posterior λ1 = 1) was best, and (prior λ2 = 2/5, posterior λ1 = 2/3) for 8-ary. No temperature annealing was used. For structured prediction we used just the corresponding posterior λ1 as the temperature for the whole graph, as there was no variational posterior.
We performed early stopping when training with the score function estimators (VIMCO/NVIL) as they were much more prone to overï¬tting.
18
Published as a conference paper at ICLR 2017
# E EXTRA RESULTS
binary (240H â¼784V) 4-ary (240H â¼784V) 8-ary (240H â¼784V) binary (240Hâ¼240H â¼784V) 4-ary (240Hâ¼240H â¼784V) 8-ary (240Hâ¼240H â¼784V) m Test 91.9 1 89.0 5 88.4 50 1 5 50 91.4 89.4 89.7 1 5 50 92.5 90.5 90.5 1 5 50 87.9 86.6 86.0 1 5 50 87.4 86.7 86.7 1 5 50 88.2 87.4 87.2 Train 90.7 87.1 85.7 89.7 87.0 86.5 89.9 87.0 86.7 86.0 83.7 82.7 85.0 83.3 83.0 85.9 84.6 84.0 Test 108.0 107.7 109.0 110.7 110.5 113.0 119.61 120.7 121.7 106.6 106.9 108.7 106.6 108.3 109.4 111.3 110.5 111.1 Train 102.2 100.0 99.1 1002.7 100.2 100.0 105.3 102.7 101.0 99.0 97.1 95.9 97.8 97.3 96.8 102.5 100.5 99.5
Table 2: Density estimation using Concrete relaxations with distinct arity of layers.
19
Published as a conference paper at ICLR 2017
# F CHEAT SHEET
1 = 1+ exp(â2) LEE {xx} = log (> a) k=1 log Anâ! = {© ⬠R" | xz ⬠(âc, 0), LEE{ex} = = of
Distribution and Domains Reparameterization/How To Sample
# Mass/Density
G G Gumbel R
â¼ â
# G d=
â10g(~log(U))
# log(
# log(U ))
â
â
# exp(
exp(âg â exp(â9))
# g
# exp(
g))
â
â
â
# L L
# Logistic R
~
â¼ â
# LeR
# L d= log(U )
â
â
# log(1
â
â
U )
# exp( â (1 + exp(
# l)
# expl-)?
l))2
â
# X µ λ
Logistic(µ, λ) R (0,
~
â¼ â â
# neR
) â
# X d=
# L + µ λ
# λ exp( (1 + exp(
λx + µ) λx + µ))2
â â
# exp(âAzx
# X X α
# Bernoulli(α) 0, 1
# ~ che
â¼ â { (0, â
} ) â
# X d=
1 {i
# if L + log α otherwise
â¥
0
α 1 + α
if x = 1
# X X α λ
BinConcrete(α, λ) (0, 1) ) (0, â ) (0, â
~
â¼ â â â
# X d= Ï((L + log α)/λ)
λαxâλâ1(1 (αxâλ + (1
â
â â
x)âλâ1 x)âλ)2
X X â¬
Discrete(α) â¼ n 0, 1 } â { k=1 Xk = 1
# d=
# Xk
# Xp=
# fl 0
if log αk + Gk > log αi + Gi for i otherwise
# = k
# αk i=1 αi
if xk = 1
# α
â¬
â
(0,
# )n
00)â
â
# X X α λ
X ~ Concrete(a, \) n -y- XeAr-l x, £ _2xp((log ax + Ge)/) (nâ1)! Il ag ⬠(0, 00)â x SUL, exp((log ax, + Gi)/A) A~(mI) hey Diet air; * ⬠(0, 00)
# X X α λ
~ ExpConcrete(a, \) â¬log A"! d logan + Gr n loga; + Gi (nâ1)! Qn exp( = ⬠(0, 00)â Xn = r ~ TEE r A~(mI) rl Foie Gi eXP(âAzi) ⬠(0, 00)
Table 3: Cheat sheet for the random variables we use in this work. Note that some of these are atypical parameterizations, particularly the Bernoulli and Logistic random variables. The table only Uniform(0, 1). From there on it may assumes that you can sample uniform random numbers U Logistic is deï¬ned in the deï¬ne random variables and reuse them later on. For example, L second row, and after that point L represents a Logistic random variable that can be replaced by U ). Whenever random variables are indexed, e.g. Gk, they represent separate log U independent calls to a random number generator.
20
# λxi) | {
"id": "1610.05683"
} |
1611.00625 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | 6 1 0 2
v o N 3 ] G L . s c [
2 v 5 2 6 0 0 . 1 1 6 1 : v i X r a
# TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier gab@fb.com, nantas@robots.ox.ac.uk
March 2, 2022
# Abstract
We present TorchCraft, a library that enables deep learning research on Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it easier to control these games from a machine learning framework, here Torch [9]. This white paper argues for using RTS games as a benchmark for AI research, and describes the design and components of TorchCraft.
# Introduction
Deep Learning techniques [13] have recently enabled researchers to successfully tackle low-level perception problems in a supervised learning fashion. In the ï¬eld of Reinforcement Learning this has transferred into the ability to develop agents able to learn to act in high-dimensional input spaces. In particular, deep neural networks have been used to help reinforcement learning scale to environments with visual inputs, allowing them to learn policies in testbeds that previously were completely intractable. For instance, algorithms such as Deep Q-Network (DQN) [14] have been shown to reach human-level performances on most of the classic ATARI 2600 games by learning a controller directly from raw pixels, and without any additional supervision beside the score. Most of the work spawned in this new area has however tackled environments where the state is fully observable, the reward function has no or low delay, and the action set is relatively small. To solve the great majority of real life problems agents must instead be able to handle partial observability, structured and complex dynamics, and noisy and high-dimensional control interfaces.
To provide the community with useful research environments, work was done towards building platforms based on videogames such as Torcs [27], Mario AI [20], Unrealâs BotPrize [10], the Atari Learning Environment [3], VizDoom [12], and Minecraft [11], all of which have allowed researchers to train deep learning models with imitation learning, reinforcement learning and various decision making algorithms on increasingly diï¬cult problems. Recently there have also been eï¬orts to unite those and many other such environments in one platform to provide a standard interface for interacting with them [4]. We propose a bridge between StarCraft: Brood War, an RTS game with an active AI research community and annual AI competitions [16, 6, 1], and Lua, with examples in Torch [9] (a machine learning library).
1
# 2 Real-Time Strategy for Games AI
Real-time strategy (RTS) games have historically been a domain of interest of the planning and decision making research communities [5, 2, 6, 16, 17]. This type of games aims to simulate the control of multiple units in a military setting at diï¬erent scales and level of complexity, usually in a ï¬xed-size 2D map, in duel or in small teams. The goal of the player is to collect resources which can be used to expand their control on the map, create buildings and units to ï¬ght oï¬ enemy deployments, and ultimately destroy the opponents. These games exhibit durative moves (with complex game dynamics) with simultaneous actions (all players can give commands to any of their units at any time), and very often partial observability (a âfog of warâ: opponent units not in the vicinity of a playerâs units are not shown).
RTS gameplay: Components RTS game play are economy and battles (âmacroâ and âmicroâ respectively): players need to gather resources to build military units and defeat their opponents. To that end, they often have worker units (or extraction structures) that can gather resources needed to build workers, buildings, military units and research upgrades. Workers are often also builders (as in StarCraft), and are weak in ï¬ghts compared to military units. Resources may be of varying degrees of abundance and importance. For instance, in StarCraft minerals are used for everything, whereas gas is only required for advanced buildings or military units, and technology upgrades. Buildings and research deï¬ne technology trees (directed acyclic graphs) and each state of a âtech treeâ allow for the production of diï¬erent unit types and the training of new unit abilities. Each unit and building has a range of sight that provides the player with a view of the map. Parts of the map not in the sight range of the playerâs units are under fog of war and the player cannot observe what happens there. A considerable part of the strategy and the tactics lies in which armies to deploy and where.
Military units in RTS games have multiple properties which diï¬er between unit types, such as: attack range (including melee), damage types, armor, speed, area of eï¬ects, invisibility, ï¬ight, and special abilities. Units can have attacks and defenses that counter each others in a rock-paper-scissors fashion, making planning armies a extremely challenging and strategically rich process. An âopeningâ denotes the same thing as in Chess: an early game plan for which the player has to make choices. That is the case in Chess because one can move only one piece at a time (each turn), and in RTS games because, during the development phase, one is economically limited and has to choose which tech paths to pursue. Available resources constrain the technology advancement and the number of units one can produce. As producing buildings and units also take time, the arbitrage between investing in the economy, in technological advancement, and in units production is the crux of the strategy during the whole game.
Related work: Classical AI approaches normally involving planning and search [2, 15, 24, 7] are extremely challenged by the combinatorial action space and the complex dynamics of RTS games, making simulation (and thus Monte Carlo tree search) diï¬cult [8, 22]. Other characteristics such as partial observability, the non-obvious quantiï¬cation of the value of the state, and the problem of featurizing a dynamic and structured state contribute to making them an interesting problem, which altogether ultimately also make them an excellent benchmark for AI. As the scope of this paper is not to give a review of RTS AI research, we refer the reader to these surveys about existing research on RTS and StarCraft AI [16, 17].
It is currently tedious to do machine learning research in this domain. Most previous reinforcement learning research involve simple models or limited experimental settings [26, 23]. Other models are trained on oï¬ine datasets of highly skilled players [25, 18, 19, 21]. Contrary to most Atari games [3], RTS games have much higher action spaces and much more structured states. Thus, we advocate here to have not only the pixels as input and keyboard/mouse for commands, as in [3, 4, 12], but also a structured representation of the game state, as in
2
-- main game engine loop: while true do game.receive_player_actions() game.compute_dynamics() -- our injected code: torchcraft.send_state() torchcraft.receive_actions()
featurize, model = init() tc = require âtorchcraftâ tc:connect(port) while not tc.state.game_ended do tc:receive() features = featurize(tc.state) actions = model:forward(features) tc:send(tc:tocommand(actions))
# end
# end
Figure 1: Simpliï¬ed client/server code that runs in the game engine (server, on the left) and the library for the machine learning library or framework (client, on the right).
[11]. This makes it easier to try a broad variety of models, and may be useful in shaping loss functions for pixel-based models.
Finally, StarCraft: Brood War is a highly popular game (more than 9.5 million copies sold) with professional players, which provides interesting datasets, human feedback, and a good benchmark of what is possible to achieve within the game. There also exists an active academic community that organizes AI competitions.
# 3 Design
The simplistic design of TorchCraft is applicable to any video game and any machine learning library or framework. Our current implementation connects Torch to a low level interface [1] to StarCraft: Brood War. TorchCraftâs approach is to dynamically inject a piece of code in the game engine that will be a server. This server sends the state of the game to a client (our machine learning code), and receives commands to send to the game. This is illustrated in Figure 1. The two modules are entirely synchronous, but the we provide two modalities of execution based on how we interact with the game:
Game-controlled - we inject a DLL that provides the game interface to the bots, and one that includes all the instructions to communicate with the machine learning client, interpreted by the game as a player (or bot AI). In this mode, the server starts at the beginning of the match and shuts down when that ends. In-between matches it is therefore necessary to re-establish the connection with the client, however this allows for the setting of multiple learning instances extremely easily.
Game-attached - we inject a DLL that provides the game interface to the bots, and we interact with it by attaching to the game process and communicating via pipes. In this mode there is no need to re-establish the connection with the game every time, and the control of the game is completely automatized out of the box, however itâs currently impossible to create multiple learning instances on the same guest OS.
Whatever mode one chooses to use, TorchCraft is seen by the AI programmer as a library that provides: connect(), receive() (to get the state), send(commands), and some helper functions about speciï¬cs of StarCraftâs rules and state representation. TorchCraft also provides an eï¬cient way to store game frames data from past (played or observed) games so that existing state (âreplaysâ, âtracesâ) can be re-examined.
3
# 4 Conclusion
We presented several work that established RTS games as a source of interesting and relevant problems for the AI research community to work on. We believe that an eï¬cient bridge between low level existing APIs and machine learning frameworks/libraries would enable and foster research on such games. We presented TorchCraft: a library that enables state-of-the-art machine learning research on real game data by interfacing Torch with StarCraft: BroodWar. TorchCraft has already been used in reinforcement learning experiments on StarCraft, which led to the results in [23] (soon to be open-sourced too and included within TorchCraft).
# 5 Acknowledgements
We would like to thank Yann LeCun, Léon Bottou, Pushmeet Kohli, Subramanian Ramamoorthy, and Phil Torr for the continuous feedback and help with various aspects of this work. Many thanks to David Churchill for proofreading early versions of this paper.
# References
[1] BWAPI: Brood war api, an api for interacting with starcraft: Broodwar (1.16.1). https://bwapi. github.io/, 2009â2015.
[2] Aha, D. W., Molineaux, M., and Ponsen, M. Learning to win: Case-based plan selection in a real-time strategy game. In International Conference on Case-Based Reasoning (2005), Springer, pp. 5â20.
[3] Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: An evaluation platform for general agents. Journal of Artiï¬cial Intelligence Research (2012).
[4] Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J.,
and Zaremba, W. Openai gym. arXiv preprint arXiv:1606.01540 (2016).
[5] Buro, M., and Furtak, T. Rts games and real-time ai research. In Proceedings of the Behavior
Representation in Modeling and Simulation Conference (BRIMS) (2004), vol. 6370.
# [6] Churchill, D.
[6] Churchill, D. Starcraft ai competition. http://www.cs.mun.ca/~dchurchill/ starcraftaicomp/, 2011â2016.
[7] Churchill, D. Heuristic Search Techniques for Real-Time Strategy Games. PhD thesis, University
of Alberta, 2016.
[8] Churchill, D., Saffidine, A., and Buro, M. Fast heuristic search for rts game combat
scenarios. In AIIDE (2012).
[9] Collobert, R., Kavukcuoglu, K., and Farabet, C. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop (2011), no. EPFL-CONF-192376.
[10] Hingston, P. A turing test for computer game bots. IEEE Transactions on Computational
Intelligence and AI in Games 1, 3 (2009), 169â186.
[11] Johnson, M., Hofmann, K., Hutton, T., and Bignell, D. The malmo platform for artiï¬cial intelligence experimentation. In International joint conference on artiï¬cial intelligence (IJCAI) (2016).
[12] Kempka, M., Wydmuch, M., Runc, G., Toczek, J., and JaÅkowski, W. Vizdoom: A doom- based ai research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097 (2016).
[13] LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature 521, 7553 (2015), 436â444. [14] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529â533.
4
[15] Ontañón, S., Mishra, K., Sugandh, N., and Ram, A. Case-based planning and execution for real-time strategy games. In International Conference on Case-Based Reasoning (2007), Springer Berlin Heidelberg, pp. 164â178.
[16] Ontanón, S., Synnaeve, G., Uriarte, A., Richoux, F., Churchill, D., and Preuss, M. A survey of real-time strategy game ai research and competition in starcraft. Computational Intelligence and AI in Games, IEEE Transactions on 5, 4 (2013), 293â311.
[17] Robertson, G., and Watson, I. A review of real-time strategy game ai. AI Magazine 35, 4
(2014), 75â104.
[18] Synnaeve, G. Bayesian programming and learning for multi-player video games: application to RTS AI. PhD thesis, PhD thesis, Institut National Polytechnique de GrenobleâINPG, 2012. [19] Synnaeve, G., and Bessiere, P. A dataset for starcraft ai & an example of armies clustering.
arXiv preprint arXiv:1211.4552 (2012).
[20] Togelius, J., Karakovskiy, S., and Baumgarten, R. The 2009 mario ai competition. In
IEEE Congress on Evolutionary Computation (2010), IEEE, pp. 1â8.
[21] Uriarte, A. Starcraft brood war data mining. http://nova.wolfwork.com/dataMining.html,
2015.
[22] Uriarte, A., and Ontañón, S. Game-tree search over high-level game states in rts games. In
Tenth Artiï¬cial Intelligence and Interactive Digital Entertainment Conference (2014).
[23] Usunier, N., Synnaeve, G., Lin, Z., and Chintala, S. Episodic exploration for deep deterministic policies: An application to starcraft micromanagement tasks. arXiv preprint arXiv:1609.02993 (2016).
[24] Weber, B. Reactive planning for micromanagement in rts games. Department of Computer
Science, University of California, Santa Cruz (2014).
[25] Weber, B. G., and Mateas, M. A data mining approach to strategy prediction. In 2009 IEEE
Symposium on Computational Intelligence and Games (2009), IEEE, pp. 140â147.
[26] Wender, S., and Watson, I. Applying reinforcement learning to small scale combat in the real-time strategy game starcraft: broodwar. In Computational Intelligence and Games (CIG), 2012 IEEE Conference on (2012), IEEE, pp. 402â408.
[27] Wymann, B., Espié, E., Guionneau, C., Dimitrakakis, C., Coulom, R., and Sumner, A. Torcs, the open racing car simulator. Software available at http://torcs. sourceforge. net (2000).
5
# A Frame data
In addition to the visual data, the TorchCraft server extracts certain information for the game state and sends it over to the connected clients in a structured âframeâ. The frame is formatted in a table in roughly the following structure:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 R e c e i v e d u p d a t e : { // Number o f // NB : a â game â can be composed o f : frame_from_bwapi f r a m e s i n t h e c u r r e n t game s e v e r a l b a t t l e s i n t u n i t s _ m y s e l f : { // U n i t i n t : { // U n i t t a r g e t t a r g e t p o s ID ID : i n t : { // A b s o l u t e x 1 : // A b s o l u t e y 2 : } i n t i n t // Type o f a i r weapon a w t y p e : i n t // Type o f g r o u n d weapon g wt yp e : i n t // Number o f awcd : // Number o f h i t p o i n t s hp : // Number o f e n e r g y / mana p o i n t s , e n e r g y : // U n i t i n t t y p e : p o s i t i o n : f r a m e s b e f o r e n e x t a i r weapon p o s s i b l e a t t a c k i n t i n t i f any i n t t y p e { // A b s o l u t e x 1 : // A b s o l u t e y 2 : } i n t i n t // Number o f ar mor p o i n t s ar mor : // Number o f gwcd : // Ground weapon a t t a c k damage g w a t t a c k : // P r o t o s s s h i e l d : // A i r weapon a t t a c k damage a w a t t a c k : // S i z e o f s i z e i n t : // Whether u n i t enemy : b o o l // Whether u n i t i d l e : b o o l // Ground weapon max r a n g e g w r a n g e : i n t // A i r weapon max r a n g e i n t a w r a n g e : i n t f r a m e s b e f o r e n e x t g r o u n d weapon p o s s i b l e a t t a c k i n t i n t s h i e l d p o i n t s ( l i k e HP , b u t w i t h s p e c i a l p r o p e r t i e s ) i n t i n t t h e u n i t ( a i r weapon a t t a c k damage ) i s an enemy o r n o t i s i d l e , i . e . n o t f o l l o w i n g any o r d e r s c u r r e n t l y } } // Same f o r m a t a s " u n i t s _ m y s e l f " . . . u n i t s _ e n e m y : }
6 | {
"id": "1606.01540"
} |
1610.07629 | A Learned Representation For Artistic Style | The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style. | http://arxiv.org/pdf/1610.07629 | Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur | cs.CV, cs.LG | 9 pages. 15 pages of Appendix, International Conference on Learning
Representations (ICLR) 2017 | null | cs.CV | 20161024 | 20170209 | 7 1 0 2
b e F 9 ] V C . s c [
5 v 9 2 6 7 0 . 0 1 6 1 : v i X r a
Published as a conference paper at ICLR 2017
# A LEARNED REPRESENTATION FOR ARTISTIC STYLE
Vincent Dumoulin & Jonathon Shlens & Manjunath Kudlur Google Brain, Mountain View, CA vi.dumoulin@gmail.com, shlens@google.com, keveman@google.com
# ABSTRACT
The diversity of painting styles represents a rich visual vocabulary for the con- struction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level fea- tures of paintings, if not images in general. In this work we investigate the con- struction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network gen- eralizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new paint- ing styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style.
# INTRODUCTION
A pastiche is an artistic work that imitates the style of another one. Computer vision and more recently machine learning have a history of trying to automate pastiche, that is, render an image in the style of another one. This task is called style transfer, and is closely related to the texture synthesis task. While the latter tries to capture the statistical relationship between the pixels of a source image which is assumed to have a stationary distribution at some scale, the former does so while also attempting to preserve some notion of content.
On the computer vision side, Efros & Leung (1999) and Wei & Levoy (2000) attempt to âgrowâ textures one pixel at a time using non-parametric sampling of pixels in an examplar image. Efros & Freeman (2001) and Liang et al. (2001) extend this idea to âgrowingâ textures one patch at a time, and Efros & Freeman (2001) uses the approach to implement âtexture transferâ, i.e. transfering the texture of an object onto another one. Kwatra et al. (2005) approaches the texture synthesis problem from an energy minimization perspective, progressively reï¬ning the texture using an EM- like algorithm. Hertzmann et al. (2001) introduces the concept of âimage analogiesâ: given a pair of âunï¬lteredâ and âï¬lteredâ versions of an examplar image, a target image is processed to create an analogous âï¬lteredâ result. More recently, Frigo et al. (2016) treats style transfer as a local texture transfer (using an adaptive patch partition) followed by a global color transfer, and Elad & Milanfar (2016) extends Kwatraâs energy-based method into a style transfer algorithm by taking content similarity into account.
On the machine learning side, it has been shown that a trained classiï¬er can be used as a feature extractor to drive texture synthesis and style transfer. Gatys et al. (2015a) uses the VGG-19 network (Simonyan & Zisserman, 2014) to extract features from a texture image and a synthesized texture. The two sets of features are compared and the synthesized texture is modiï¬ed by gradient descent so that the two sets of features are as close as possible. Gatys et al. (2015b) extends this idea to style transfer by adding the constraint that the synthesized image also be close to a content image with respect to another set of features extracted by the trained VGG-19 classiï¬er.
While very ï¬exible, this algorithm is expensive to run due to the optimization loop being carried. Ulyanov et al. (2016a), Li & Wand (2016) and Johnson et al. (2016) tackle this problem by intro- ducing a feedforward style transfer network, which is trained to go from content to pastiche image in one pass. However, in doing so some of the ï¬exibility of the original algorithm is lost: the style transfer network is tied to a single style, which means that separate networks have to be trained
1
Published as a conference paper at ICLR 2017
(a) With conditional instance normalization, a single style transfer network can capture 32 styles at the same time, ï¬ve of which are shown here. All 32 styles in this single model are in the Appendix. Golden Gate Bridge photograph by Rich Niewiroski Jr.
(b) The style representation learned via conditional instance normalization permits the arbitrary combination of artistic styles. Each pastiche in the sequence corresponds to a different step in interpolating between the γ and β values associated with two styles the model was trained on.
Figure 1: Pastiches produced by a style transfer network trained on 32 styles chosen for their variety.
for every style being modeled. Subsequent work has brought some performance improvements to style transfer networks, e.g. with respect to color preservation (Gatys et al., 2016a) or style transfer quality (Ulyanov et al., 2016b), but to our knowledge the problem of the single-purpose nature of style transfer networks remains untackled.
We think this is an important problem that, if solved, would have both scientiï¬c and practical im- portance. First, style transfer has already found use in mobile applications, for which on-device processing is contingent upon the models having a reasonable memory footprint. More broadly, building a separate network for each style ignores the fact that individual paintings share many com- mon visual elements and a true model that captures artistic style would be able to exploit and learn from such regularities. Furthermore, the degree to which an artistic styling model might general- ize across painting styles would directly measure our ability to build systems that parsimoniously capture the higher level features and statistics of photographs and images (Simoncelli & Olshausen, 2001).
In this work, we show that a simple modiï¬cation of the style transfer network, namely the in- troduction of conditional instance normalization, allows it to learn multiple styles (Figure 1a).We demonstrate that this approach is ï¬exible yet comparable to single-purpose style transfer networks, both qualitatively and in terms of convergence properties. This model reduces each style image into a point in an embedding space. Furthermore, this model provides a generic representation for artistic styles that seems ï¬exible enough to capture new artistic styles much faster than a single-purpose net-
2
Published as a conference paper at ICLR 2017
VGG-16
Figure 2: Style transfer network training diagram (Johnson et al., 2016; Ulyanov et al., 2016a). A pastiche image is produced by feeding a content image through the style transfer network. The two images, along with a style image, are passed through a trained classiï¬er, and the resulting interme- diate representations are used to compute the content loss Lc and style loss Ls. The parameters of the classiï¬er are kept ï¬xed throughout training.
work. Finally, we show that the embeddding space representation permits one to arbitrarily combine artistic styles in novel ways not previously observed (Figure 1b).
# 2 STYLE TRANSFER WITH DEEP NETWORKS
Style transfer can be deï¬ned as ï¬nding a pastiche image p whose content is similar to that of a content image c but whose style is similar to that of a style image s. This objective is by nature vaguely deï¬ned, because similarity in content and style are themselves vaguely deï¬ned.
The neural algorithm of artistic style proposes the following deï¬nitions:
⢠Two images are similar in content if their high-level features as extracted by a trained classiï¬er are close in Euclidian distance.
⢠Two images are similar in style if their low-level features as extracted by a trained classiï¬er share the same statistics or, more concretely, if the difference between the featuresâ Gram matrices has a small Frobenius norm.
The ï¬rst point is motivated by the empirical observation that high-level features in classiï¬ers tend to correspond to higher levels of abstractions (see Zeiler & Fergus (2014) for visualizations; see Johnson et al. (2016) for style transfer features). The second point is motivated by the observation that the artistic style of a painting may be interpreted as a visual texture (Gatys et al., 2015a). A visual texture is conjectured to be spatially homogenous and consist of repeated structural motifs whose minimal sufï¬cient statistics are captured by lower order statistical measurements (Julesz, 1962; Portilla & Simoncelli, 1999).
In its original formulation, the neural algorithm of artistic style proceeds as follows: starting from some initialization of p (e.g. c, or some random initialization), the algorithm adapts p to minimize the loss function
L(s, c, p) = λsLs(p) + λcLc(p), (1) where Ls(p) is the style loss, Lc(p) is the content loss and λs, λc are scaling hyperparameters. Given a set of âstyle layersâ S and a set of âcontent layersâ C, the style and content losses are themselves deï¬ned as
£(0) =o FN G(Os(0)) = G(64(6)) I: Q) ieS ~'
Le(v) = > = | i(P) ~ 65(0) |B @) jec 4
where Ïl(x) are the classiï¬er activations at layer l, Ul is the total number of units at layer l and G(Ïl(x)) is the Gram matrix associated with the layer l activations. In practice, we set λc = 1.0 and and leave λs as a free hyper-parameter.
3
Published as a conference paper at ICLR 2017
In order to speed up the procedure outlined above, a feed-forward convolutional network, termed a style transfer network T , is introduced to learn the transformation (Johnson et al., 2016; Li & Wand, 2016; Ulyanov et al., 2016a). It takes as input a content image c and outputs the pastiche image p directly (Figure 2). The network is trained on many content images (Deng et al., 2009) using the same loss function as above, i.e.
L(s, c) = λsLs(T (c)) + λcLc(T (c)). (4)
While feedforward style transfer networks solve the problem of speed at test-time, they also suffer from the fact that the network T is tied to one speciï¬c painting style. This means that a separate network T has to be trained for every style to be imitated. The real-world impact of this limitation is that it becomes prohibitive to implement a style transfer application on a memory-limited device, such as a smartphone.
# 2.1 N-STYLES FEEDFORWARD STYLE TRANSFER NETWORKS
Our work stems from the intuition that many styles probably share some degree of computation, and that this sharing is thrown away by training N networks from scratch when building an N - styles style transfer system. For instance, many impressionist paintings share similar paint strokes but differ in the color palette being used. In that case, it seems very wasteful to treat a set of N impressionist paintings as completely separate styles.
To take this into account, we propose to train a single conditional style transfer network T (c, s) for N styles. The conditional network is given both a content image and the identity of the style to apply and produces a pastiche corresponding to that style. While the idea is straightforward on paper, there remains the open question of how conditioning should be done. In exploring this question, we found a very surprising fact about the role of normalization in style transfer networks: to model a style, it is sufï¬cient to specialize scaling and shifting parameters after normalization to each speciï¬c style. In other words, all convolutional weights of a style transfer network can be shared across many styles, and it is sufï¬cient to tune parameters for an afï¬ne transformation after normalization for each style.
We call this approach conditional instance normalization. The goal of the procedure is transform a layerâs activations x into a normalized activation z speciï¬c to painting style s. Building off the instance normalization technique proposed in Ulyanov et al. (2016b), we augment the γ and β parameters so that theyâre N à C matrices, where N is the number of styles being modeled and C is the number of output feature maps. Conditioning on a style is achieved as follows:
e=1. (4) +4, (5)
where µ and Ï are xâs mean and standard deviation taken across spatial axes and γs and βs are obtained by selecting the row corresponding to s in the γ and β matrices (Figure 3). One added beneï¬t of this approach is that one can stylize a single image into N painting styles with a single feed forward pass of the network with a batch size of N . In constrast, a single-style network requires N feed forward passes to perform N style transfers (Johnson et al., 2016; Li & Wand, 2016; Ulyanov et al., 2016a).
Because conditional instance normalization only acts on the scaling and shifting parameters, training a style transfer network on N styles requires fewer parameters than the naive approach of training N separate networks. In a typical network setup, the model consists of roughly 1.6M parameters, only around 3K (or 0.2%) of which specify individual artistic styles. In fact, because the size of γ and β grows linearly with respect to the number of feature maps in the network, this approach requires O(N à L) parameters, where L is the total number of feature maps in the network.
In addition, as is discussed in subsection 3.4, conditional instance normalization presents the advan- tage that integrating an N + 1th style to the network is cheap because of the very small number of parameters to train.
4
Published as a conference paper at ICLR 2017
<â_ io
Figure 3: Conditional instance normalization. The input activation x is normalized across both spatial dimensions and subsequently scaled and shifted using style-dependent parameter vectors γs, βs where s indexes the style label.
# 3 EXPERIMENTAL RESULTS
3.1 METHODOLOGY
Unless noted otherwise, all style transfer networks were trained using the hyperparameters outlined in the Appendixâs Table 1.
We used the same network architecture as in Johnson et al. (2016), except for two key details: zero-padding is replaced with mirror-padding, and transposed convolutions (also sometimes called deconvolutions) are replaced with nearest-neighbor upsampling followed by a convolution. The use of mirror-padding avoids border patterns sometimes caused by zero-padding in SAME-padded convolutions, while the replacement for transposed convolutions avoids checkerboard patterning, as discussed in in Odena et al. (2016). We ï¬nd that with these two improvements training the network no longer requires a total variation loss that was previously employed to remove high frequency noise as proposed in Johnson et al. (2016).
Our training procedure follows Johnson et al. (2016). Brieï¬y, we employ the ImageNet dataset (Deng et al., 2009) as a corpus of training content images. We train the N -style network with stochastic gradient descent using the Adam optimizer (Kingma & Ba, 2014). Details of the model architecture are in the Appendix. A complete implementation of the model in TensorFlow (Abadi et al., 2016) as well as a pretrained model are available for download 1. The evaluation images used for this work were resized such that their smaller side has size 512. Their stylized versions were then center-cropped to 512x512 pixels for display.
3.2 TRAINING A SINGLE NETWORK ON N STYLES PRODUCES STYLIZATIONS COMPARABLE TO INDEPENDENTLY-TRAINED MODELS
As a ï¬rst test, we trained a 10-styles model on stylistically similar images, namely 10 impressionist paintings from Claude Monet. Figure 4 shows the result of applying the trained network on evalu- ation images for a subset of the styles, with the full results being displayed in the Appendix. The model captures different color palettes and textures. We emphasize that 99.8% of the parameters are shared across all styles in contrast to 0.2% of the parameters which are unique to each painting style.
To get a sense of what is being traded off by folding 10 styles into a single network, we trained a separate, single-style network on each style and compared them to the 10-styles network in terms of style transfer quality and training speed (Figure 5).
The left column compares the learning curves for style and content losses between the single-style networks and the 10-styles network. The losses were averaged over 32 random batches of content images. By visual inspection, we observe that the 10-styles network converges as quickly as the single-style networks in terms of style loss, but lags slightly behind in terms of content loss.
In order to quantify this observation, we compare the ï¬nal losses for 10-styles and single-style models (center column). The 10-styles networkâs content loss is around 8.7 ± 3.9% higher than its
# 1https://github.com/tensorflow/magenta
5
Published as a conference paper at ICLR 2017
Figure 4: A single style transfer network was trained to capture the style of 10 Monet paintings, ï¬ve of which are shown here. All 10 styles in this single model are in the Appendix. Golden Gate Bridge photograph by Rich Niewiroski Jr.
single-style counterparts, while the difference in style losses (8.9 ± 16.5% lower) is insigniï¬cant. While the N -styles network suffers from a slight decrease in content loss convergence speed, this may not be a fair comparison, given that it takes N times more parameter updates to train N single- style networks separately than to train them with an N -styles network.
The right column shows a comparison between the pastiches produced by the 10-styles network and the ones produced by the single-style networks. We see that both results are qualitatively similar.
3.3 THE N-STYLES MODEL IS FLEXIBLE ENOUGH TO CAPTURE VERY DIFFERENT STYLES
We evaluated the ï¬exibility of the N -styles model by training a style transfer network on 32 works of art chosen for their diversity. Figure 1a shows the result of applying the trained network on eval- uation images for a subset of the styles. Once again, the full results are displayed in the Appendix. The model appears to be capable of modeling all 32 styles in spite of the tremendous variation in color palette and the spatial scale of the painting styles.
3.4 THE TRAINED NETWORK GENERALIZES ACROSS PAINTING STYLES
Since all weights in the transformer network are shared between styles, one way to incorporate a new style to a trained network is to keep the trained weights ï¬xed and learn a new set of γ and β parameters. To test the efï¬ciency of this approach, we used it to incrementally incorporate Monetâs Plum Trees in Blossom painting to the network trained on 32 varied styles. Figure 6 shows that doing so is much faster than training a new network from scratch (left) while yielding comparable pastiches: even after eight times fewer parameter updates than its single-style counterpart, the ï¬ne- tuned model produces comparable pastiches (right).
3.5 THE TRAINED NETWORK CAN ARBITRARILY COMBINE PAINTING STYLES
The conditional instance normalization approach raises some interesting questions about style rep- resentation. In learning a different set of γ and β parameters for every style, we are in some sense learning an embedding of styles.
6
Published as a conference paper at ICLR 2017
45000 s0000 Total content loss Final content loss (N styles) 30000 0 âsw0 âJadoo 15000 20000 â2sv00 0000 35000 20000 â000035000 âa0000 45000 Parameter updates Final content los (1 style) 25000 20000 s000 Total style oss inal style loss (N styles) E 10000 . 5000 oâswo âadoo â15t00 â2ad00 â2st00 sooo 35000 âa0000 âsoot 0000 1S000 20000 25000 Parameter updates Final style loss (1 style) N styles 1 style
N styles 1 style
Figure 5: The N -styles model exhibits learning dynamics comparable to individual models. (Left column) The N-styles model converges slightly slower in terms of content loss (top) and as fast in terms of style loss (bottom) than individual models. Training on a single Monet painting is repre- sented by two curves with the same color. The dashed curve represents the N -styles model, and the full curves represent individual models. Emphasis has been added on the styles for Vetheuil (1902) (teal) and Water Lilies (purple) for visualization purposes; remaining colors correspond to other Monet paintings (see Appendix). (Center column) The N-styles model reaches a slightly higher ï¬nal content loss than (top, 8.7 ± 3.9% increase) and a ï¬nal style loss comparable to (bot- tom, 8.9 ± 16.5% decrease) individual models. (Right column) Pastiches produced by the N -styles network are qualitatively comparable to those produced by individual networks.
10° â _ From scratch 2 â Finetuned 5,000 steps 40,000 steps 2 E 8 s f= fs 2 3 ri 3 oO . g 10° s 2 a oO 8 g 2 8 > & 10° ⬠3 i 3 = 0 5000 10000 15000 20000 25000 30000 35000 40000 Parameter updates
Figure 6: The trained network is efï¬cient at learning new styles. (Left column) Learning γ and β from a trained style transfer network converges much faster than training a model from scratch. (Right) Learning γ and β for 5,000 steps from a trained style transfer network produces pastiches comparable to that of a single network trained from scratch for 40,000 steps. Conversely, 5,000 step of training from scratch produces leads to a poor pastiche.
Previous work suggested that cleverly balancing optimization strategies offers an opportunity to blend painting styles 2. To probe the utility of this embedding, we tried convex combinations of the
2For instance, https://github.com/jcjohnson/neural-style
Figure 7: The N-styles network can arbitrarily combine artistic styles. (Left) Combining four styles, shown in the corners. Each pastiche corresponds to a different convex combination of the four styles' γ and β values. (Right) As we transition from one style to another (Bicentennial Print and Head of a Clown in this case), the style losses vary monotonically.
γ and β values to blend very distinct painting styles (Figure 1b; Figure 7, left column). Suppose (γ1, β1) and (γ2, β2) are the parameters corresponding to two different styles. We use γ = α × γ1 + (1 − α) × γ2 and β = α × β1 + (1 − α) × β2 to stylize an image; employing a single convex combination produces a smooth transition from one style to the other, and the construction extends to an arbitrary number of styles 3. Figure 7 (right column) shows the style loss from the transformer network for a given source image, with respect to the Bicentennial Print and Head of a Clown paintings, as we vary α from 0 to 1. As α increases, the style loss with respect to Bicentennial Print increases, which explains the smooth fading out of that style's artifacts in the transformed image.
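As an illustrative sketch (PyTorch; the class and variable names are ours, not the paper's implementation), conditional instance normalization with a convex combination of styles' γ and β can be written as follows; sweeping α from 0 to 1 reproduces the smooth style transition described above.

```python
import torch
import torch.nn.functional as F

class ConditionalInstanceNorm2d(torch.nn.Module):
    def __init__(self, num_styles, num_channels):
        super().__init__()
        # One (gamma, beta) row per style: the learned style embedding.
        self.gamma = torch.nn.Parameter(torch.ones(num_styles, num_channels))
        self.beta = torch.nn.Parameter(torch.zeros(num_styles, num_channels))

    def forward(self, x, weights):
        # weights: (num_styles,) convex combination over styles.
        normalized = F.instance_norm(x)       # per-sample, per-channel whitening
        gamma = weights @ self.gamma          # blended scale, shape (num_channels,)
        beta = weights @ self.beta            # blended shift
        return normalized * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1)

# Blending styles 0 and 1 with interpolation weight alpha:
norm = ConditionalInstanceNorm2d(num_styles=32, num_channels=128)
x = torch.randn(1, 128, 64, 64)
alpha = 0.3
weights = torch.zeros(32)
weights[0], weights[1] = alpha, 1.0 - alpha   # alpha*style0 + (1-alpha)*style1
y = norm(x, weights)
```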
# 4 DISCUSSION
It seems surprising that such a small proportion of the network's parameters can have such an impact on the overall process of style transfer. A similar intuition has been observed in auto-regressive models of images (van den Oord et al., 2016b) and audio (van den Oord et al., 2016a), where the conditioning process is mediated by adjusting the biases for subsequent samples from the model. That said, in the case of art stylization when posed as a feedforward network, it could be that the specific network architecture is unable to take full advantage of its capacity. We see evidence for this behavior in that pruning the architecture leads to qualitatively similar results. Another interpretation could be that the convolutional weights of the style transfer network encode transformations that represent "elements of style". The scaling and shifting factors would then provide a way for each style to inhibit or enhance the expression of various elements of style to form a global identity of style. While this work does not attempt to verify this hypothesis, we think that this would constitute a very promising direction of research in understanding the computation behind style transfer networks as well as the representation of images in general.
Concurrent to this work, Gatys et al. (2016b) demonstrated exciting new methods for revising the loss to selectively adjust the spatial scale, color information and spatial localization of the artistic style information. These methods are complementary to the results in this paper and present an interesting direction for exploring how spatial and color information uniquely factor into artistic style representation.
The question of how predictive each style image is of its corresponding style representation is also of great interest. If it is the case that the style representation can easily be predicted from a style image,
3Please see the code repository for real-time, interactive demonstration. A screen capture is available at https://www.youtube.com/watch?v=6ZHiARZmiUI.
one could imagine building a transformer network which skips learning an individual conditional embedding and instead learns to produce a pastiche directly from a style and a content image, much like in the original neural algorithm of artistic style, but without any optimization loop at test time.
Finally, the learned style representation opens the door to generative models of style: by modeling enough paintings of a given artistic movement (e.g. impressionism), one could build a collection of style embeddings upon which a generative model could be trained. At test time, a style representation would be sampled from the generative model and used in conjunction with the style transfer network to produce a random pastiche of that artistic movement.
In summary, we demonstrated that conditional instance normalization constitutes a simple, efficient and scalable modification of style transfer networks that allows them to model multiple styles at the same time. A practical consequence of this approach is that a new painting style may be transmitted to and stored on a mobile device with a small number of parameters. We showed that despite its simplicity, the method is flexible enough to capture very different styles while having very little impact on training time and final performance of the trained network. Finally, we showed that the learned representation of style is useful in arbitrarily combining artistic styles. This work suggests the existence of a learned representation for artistic styles whose vocabulary is flexible enough to capture a diversity of the painted world.
# ACKNOWLEDGMENTS
We would like to thank Fred Bertsch, Douglas Eck, Cinjon Resnick and the rest of the Google Magenta team for their feedback; Peyman Milanfar, Michael Elad, Feng Yang, Jon Barron, Bhavik Singh, Jennifer Daniel as well as the Google Brain team for their crucial suggestions and advice; and an anonymous reviewer for helpful suggestions about applying this model in a mobile domain. Finally, we would like to thank the Google Cultural Institute, whose curated collection of art photographs was very helpful in finding exciting style images to train on.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248â255. IEEE, 2009.
Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 341–346. ACM, 2001.
Alexei A Efros and Thomas K Leung. Texture synthesis by non-parametric sampling. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pp. 1033â1038. IEEE, 1999.
Michael Elad and Peyman Milanfar. Style-transfer via texture-synthesis. arXiv preprint arXiv:1609.03057, 2016.
Oriel Frigo, Neus Sabater, Julie Delon, and Pierre Hellier. Split and match: Example-based adaptive patch sampling for unsupervised style transfer. In Computer Vision and Pattern Recognition (CVPR), 2016.
Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 262â270, 2015a.
Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015b.
Leon A Gatys, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman. Preserving color in neural artistic style transfer. arXiv preprint arXiv:1606.05897, 2016a.
Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman. Controlling perceptual factors in neural style transfer. CoRR, abs/1611.07865, 2016b. URL http://arxiv.org/abs/1611.07865.
Aaron Hertzmann, Charles E Jacobs, Nuria Oliver, Brian Curless, and David H Salesin. Image analogies. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 327–340. ACM, 2001.
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.
Bela Julesz. Visual pattern discrimination. IRE Trans. Info Theory, 8:84â92, 1962.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra. Texture optimization for example-based synthesis. ACM Transactions on Graphics (ToG), 24(3):795–802, 2005.

Chuan Li and Michael Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. ECCV, 2016. URL http://arxiv.org/abs/1604.04382.

Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Real-time texture synthesis by patch-based sampling. ACM Transactions on Graphics (ToG), 20(3):127–150, 2001.
Augustus Odena, Christopher Olah, and Vincent Dumoulin. Avoiding checkerboard artifacts in neural networks. Distill, 2016.
Javier Portilla and Eero Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40:49–71, 1999.
Eero Simoncelli and Bruno Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193â1216, 2001.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feedforward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417, 2016a.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016b.
Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. CoRR, abs/1609.03499, 2016a. URL http://arxiv.org/abs/1609.03499.

Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. CoRR, abs/1606.05328, 2016b. URL http://arxiv.org/abs/1606.05328.
Li-Yi Wei and Marc Levoy. Fast texture synthesis using tree-structured vector quantization. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 479–488. ACM Press/Addison-Wesley Publishing Co., 2000.
Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833. Springer, 2014.
# APPENDIX
HYPERPARAMETERS
| Operation | Kernel size | Stride | Feature maps | Padding | Nonlinearity |
|---|---|---|---|---|---|
| Network (256 × 256 × 3 input) | | | | | |
| Convolution | 9 | 1 | 32 | SAME | ReLU |
| Convolution | 3 | 2 | 64 | SAME | ReLU |
| Convolution | 3 | 2 | 128 | SAME | ReLU |
| Residual block | | | 128 | | |
| Residual block | | | 128 | | |
| Residual block | | | 128 | | |
| Residual block | | | 128 | | |
| Residual block | | | 128 | | |
| Upsampling | | | 64 | | |
| Upsampling | | | 32 | | |
| Convolution | 9 | 1 | 3 | SAME | Sigmoid |
| Residual block (C feature maps) | | | | | |
| Convolution | 3 | 1 | C | SAME | ReLU |
| Convolution | 3 | 1 | C | SAME | Linear |
| Add the input and the output | | | | | |
| Upsampling (C feature maps) | | | | | |
| Nearest-neighbor interpolation, factor 2 | | | | | |
| Convolution | 3 | 1 | C | SAME | ReLU |

Padding mode: REFLECT. Normalization: conditional instance normalization after every convolution. Optimizer: Adam (Kingma & Ba, 2014) (α = 0.001, β1 = 0.9, β2 = 0.999). Parameter updates: 40,000. Batch size: 16. Weight initialization: isotropic Gaussian (µ = 0, σ = 0.01).

Table 1: Style transfer network hyperparameters.
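For illustration, a minimal sketch (PyTorch; module names are ours, and conditional instance normalization after each convolution is omitted for brevity) of the residual and upsampling blocks specified in Table 1 might look like the following.

```python
import torch
import torch.nn as nn

def conv(in_ch, out_ch, kernel, stride):
    # SAME-style spatial behavior via reflection padding, per Table 1.
    return nn.Sequential(
        nn.ReflectionPad2d(kernel // 2),
        nn.Conv2d(in_ch, out_ch, kernel, stride),
    )

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.c1 = conv(ch, ch, 3, 1)
        self.c2 = conv(ch, ch, 3, 1)
    def forward(self, x):
        # Conv-ReLU, Conv-Linear, then add the input and the output.
        return x + self.c2(torch.relu(self.c1(x)))

class Upsampling(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.c = conv(in_ch, out_ch, 3, 1)
    def forward(self, x):
        # Nearest-neighbor interpolation (factor 2) followed by Conv-ReLU.
        x = nn.functional.interpolate(x, scale_factor=2, mode="nearest")
        return torch.relu(self.c(x))
```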
MONET PASTICHES
Claude Monet, Grainstacks at Giverny; the Evening Sun (1888/1889).
Claude Monet, Plum Trees in Blossom (1879).
Claude Monet, Poppy Field (1873).
Claude Monet, Rouen Cathedral, West Façade (1894).
Claude Monet, Sunrise (Marine) (1873).
Claude Monet, The Road to Vétheuil (1879).
Claude Monet, Three Fishing Boats (1886).
Claude Monet, Vétheuil (1879).

Claude Monet, Vétheuil (1902).
Claude Monet, Water Lilies (ca. 1914-1917).
VARIED PASTICHES
Roy Lichtenstein, Bicentennial Print (1975).
Ernst Ludwig Kirchner, Boy with Sweets (1918).
Paul Signac, Cassis, Cap Lombard, Opus 196 (1889).
Paul Klee, Colors from a Distance (1932).
Frederic Edwin Church, Cotopaxi (1855).
Jamini Roy, Crucifixion.
Henri de Toulouse-Lautrec, Divan Japonais (1893).
Egon Schiele, Edith with Striped Dress, Sitting (1915).
Georges Rouault, Head of a Clown (ca. 1907-1908).
William Hoare, Henry Hoare, "The Magnificent", of Stourhead (about 1750-1760).
Giorgio de Chirico, Horses on the seashore (1927/1928).
Vincent van Gogh, Landscape at Saint-Rémy (Enclosed Field with Peasant) (1889).
Nicolas Poussin, Landscape with a Calm (1650-1651).
Bernardino Fungai, Madonna and Child with Two Hermit Saints (early 1480s).
Max Hermann Maxy, Portrait of a Friend (1926).
Juan Gris, Portrait of Pablo Picasso (1912).
Severini Gino, Ritmo plastico del 14 luglio (1913).
Richard Diebenkorn, Seawall (1957).
Alice Bailly, Self-Portrait (1917).
Grayson Perry, The Annunciation of the Virgin Deal (2012).
William Glackens, The Green Boathouse (ca. 1922).
Edvard Munch, The Scream (1910).
Vincent van Gogh, The Starry Night (1889).
Pieter Bruegel the Elder, The Tower of Babel (1563).
Wolfgang Lettl, The Trial (1981).
Douglas Coupland, Thomson No. 5 (Yellow Sunset) (2011).
Claude Monet, Three Fishing Boats (1886).
John Ruskin, Trees in a Lane (1847).
Giuseppe Cades, Tullia about to Ride over the Body of Her Father in Her Chariot (about 1770-1775).
Berthe Morisot, Under the Orange Tree (1889).
Giulio Romano (Giulio Pippi), Victory, Janus, Chronos and Gaea (about 1532-1534).
Wassily Kandinsky, White Zig Zags (1922).
1610.07272 | Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains a challenging problem how to
integrate NMT with a bilingual dictionary which mainly contains words rarely or
never seen in the bilingual training data. In this paper, we propose two
methods to bridge NMT and the bilingual dictionaries. The core idea behind is
to design novel models that transform the bilingual dictionaries into adequate
sentence pairs, so that NMT can distil latent bilingual mappings from the ample
and repetitive phenomena. One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary. | http://arxiv.org/pdf/1610.07272 | Jiajun Zhang, Chengqing Zong | cs.CL | 10 pages, 2 figures | null | cs.CL | 20161024 | 20161024
# Bridging Neural Machine Translation and Bilingual Dictionaries
Jiajun Zhang† and Chengqing Zong†‡
†University of Chinese Academy of Sciences, Beijing, China
National Laboratory of Pattern Recognition, CASIA, Beijing, China
‡CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
{jjzhang,cqzong}@nlpr.ia.ac.cn
# Abstract
Neural Machine Translation (NMT) has become the new state-of-the-art in several language pairs. However, it remains a challenging problem how to integrate NMT with a bilingual dictionary which mainly contains words rarely or never seen in the bilingual training data. In this paper, we propose two methods to bridge NMT and the bilingual dictionaries. The core idea behind them is to design novel models that transform the bilingual dictionaries into adequate sentence pairs, so that NMT can distil latent bilingual mappings from the ample and repetitive phenomena. One method leverages a mixed word/character model and the other attempts at synthesizing parallel sentences guaranteeing massive occurrence of the translation lexicon. Extensive experiments demonstrate that the proposed methods can remarkably improve the translation quality, and most of the rare words in the test sentences can obtain correct translations if they are covered by the dictionary.
Typically, NMT adopts the encoder-decoder architecture which consists of two recurrent neural networks. The encoder network models the semantics of the source sentence and transforms the source sentence into a context vector representation, from which the decoder network generates the target translation word by word.
One important feature of NMT is that each word in the vocabulary is mapped into a low-dimensional continuous vector (word embedding). The use of continuous representations enables NMT to learn latent bilingual mappings for accurate translation and to explore the statistical similarity between words (e.g. desk and table) as well. As a disadvantage of statistical models, NMT can learn good word embeddings and accurate bilingual mappings only when the words occur frequently in the parallel sentence pairs. However, low-frequency words are ubiquitous, especially when the training data is not enough (e.g. low-resource language pairs). Fortunately, in many language pairs and domains, we have handmade bilingual dictionaries which mainly contain words rarely or never seen in the training corpus. Therefore, it remains a big challenge how to bridge NMT and the bilingual dictionaries.
# 1 Introduction
Due to its superior ability in modelling the end-to-end translation process, neural machine translation (NMT), recently proposed by (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014), has become the novel paradigm and achieved the new state-of-the-art translation performance for several language pairs, such as English-to-French, English-to-German and Chinese-to-English (Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015b; Sennrich et al., 2015b; Wu et al., 2016).
Recently, Arthur et al. (2016) attempt at incorporating discrete translation lexicons into NMT. The main idea of their method is leveraging the discrete translation lexicons to positively influence the probability distribution of the output words in the NMT softmax layer. However, their approach only addresses the translation lexicons which are in the restricted vocabulary1 of NMT. The out-of-vocabulary (OOV) words are out of their consideration.
1NMT usually keeps only the words whose occurrence is more than a threshold (e.g. 10), since very rare words can not yield good embeddings and large vocabulary leads to high computational complexity.
[Figure 1 shows a worked example: a Chinese source sentence (with pinyin) contains the rare word lǐhuā, which the bilingual dictionary maps to "fireworks" (English reference: "was setting off fireworks for its creativity"). Method 1, the mixed word/character model, re-labels the rare word as a tagged character sequence; method 2, the pseudo sentence pair synthesis model, generates parallel sentences containing the lexicon. The NMT system trained on the resulting mixed corpus translates the test sentence correctly.]

Figure 1: The framework of our proposed methods.
In this paper, we aim at making full use of all the bilingual dictionaries, especially the ones covering the rare or OOV words. Our basic idea is to transform the low-frequency word pair in bilingual dictionaries into adequate sequence pairs which guarantee the frequent occurrence of the word pair, so that NMT can learn translation mappings between the source word and the target word.
To achieve this goal, we propose two methods, as shown in Fig. 1. In the test sentence, the Chinese word lǐhuā appears only once in our training data and the baseline NMT cannot correctly translate this word. Fortunately, our bilingual dictionary contains this translation lexicon. Our first method extends the mixed word/character model proposed by Wu et al. (2016) to re-label the rare words in both the dictionary and the training data with character sequences, in which the characters are now frequent and the character translation mappings can be learnt by NMT. Instead of backing off words into characters, our second method is well designed to synthesize adequate pseudo sentence pairs containing the translation lexicon, allowing NMT to learn the word translation mappings.
We make the following contributions in this paper:
• We propose a low-frequency to high-frequency framework to bridge NMT and the bilingual dictionaries.
• We propose and investigate two methods to utilize the bilingual dictionaries. One extends the mixed word/character model and the other designs a pseudo sentence pair synthesis model.
• The extensive experiments on Chinese-to-English translation show that our proposed methods significantly outperform the strong attention-based NMT. We further find that most of the rare words can be correctly translated, as long as they are covered by the bilingual dictionary.
# 2 Neural Machine Translation
Our framework bridging NMT and the discrete bilingual dictionaries can be applied in any neural machine translation model. Without loss of generality, we use the attention-based NMT proposed by (Luong et al., 2015b), which utilizes stacked Long Short-Term Memory (LSTM, (Hochreiter and Schmidhuber, 1997)) layers for both encoder and decoder as illustrated in Fig. 2.
The encoder-decoder NMT first encodes the source sentence X = (x_1, x_2, ..., x_{T_x}) into a sequence of context vectors C = (h_1, h_2, ..., h_{T_x}) whose size varies with respect to the source sentence length. Then, the encoder-decoder NMT decodes from the context vectors C and generates the target translation Y = (y_1, y_2, ..., y_{T_y}) one word at a time by maximizing the probability p(y_i | y_{<i}, C). Note that x_j (y_i) is the word embedding corresponding to the j-th (i-th) word in the source (target) sentence. Next, we briefly review the encoder, introducing how to obtain C, and the decoder, addressing how to calculate p(y_i | y_{<i}, C).

Figure 2: The architecture of the attention-based NMT which has m stacked LSTM layers for the encoder and l stacked LSTM layers for the decoder.
Encoder: The context vectors C = (h^m_1, h^m_2, ..., h^m_{T_x}) are generated by the encoder using m stacked LSTM layers. h^k_j is calculated as follows:

h^k_j = \mathrm{LSTM}(h^k_{j-1}, h^{k-1}_j)    (1)

where h^{k-1}_j is the hidden state of the (k−1)-th layer at position j.

Decoder: The conditional probability p(y_i | y_{<i}, C) is computed in different ways according to the choice of the context C at time i. In (Cho et al., 2014), the authors choose C = h^m_{T_x}, while Bahdanau et al. (2014) use a different context c_i at each time step, and the conditional probability becomes:

p(y_i | y_{<i}, C) = p(y_i | y_{<i}, c_i) = \mathrm{softmax}(W \tilde{z}_i)    (2)

where \tilde{z}_i is the attention output:

\tilde{z}_i = \tanh(W_c [z^l_i; c_i])    (3)

The attention model calculates c_i as the weighted sum of the source-side context vectors, just as illustrated in the middle part of Fig. 2:

c_i = \sum_{j=1}^{T_x} \alpha_{ij} h^m_j    (4)

where \alpha_{ij} is a normalized item calculated as follows:

\alpha_{ij} = \frac{\exp(h^m_j \cdot z^l_i)}{\sum_{j'=1}^{T_x} \exp(h^m_{j'} \cdot z^l_i)}    (5)

z^k_i is computed using the following formula:

z^k_i = \mathrm{LSTM}(z^k_{i-1}, z^{k-1}_i)    (6)

If k = 1, z^1_i will be calculated by combining \tilde{z}_{i-1} as feed input (Luong et al., 2015b):

z^1_i = \mathrm{LSTM}(z^1_{i-1}, y_{i-1}, \tilde{z}_{i-1})    (7)

Given the sentence-aligned bilingual training data D_b = \{(X^{(n)}_b, Y^{(n)}_b)\}_{n=1}^{N}, all the parameters of the encoder-decoder NMT are optimized to maximize the following conditional log-likelihood:

L(\theta) = \frac{1}{N} \sum_{n=1}^{N} \sum_{i=1}^{T_y} \log p(y^{(n)}_i | y^{(n)}_{<i}, X^{(n)}_b)    (8)
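As a concrete numeric illustration of Eqs. (3)-(5), the following NumPy sketch computes the attention weights, the context vector, and the attention output for one decoding step. Shapes are illustrative and the weight matrix is a random stand-in, not a trained parameter from the paper.

```python
import numpy as np

Tx, d = 5, 8                                    # source length, hidden size
H = np.random.randn(Tx, d)                      # top-layer encoder states h^m_j
z = np.random.randn(d)                          # top-layer decoder state z^l_i
Wc = np.random.randn(d, 2 * d)                  # attention output projection W_c

scores = H @ z                                  # one score per source position
alpha = np.exp(scores) / np.exp(scores).sum()   # Eq. (5): normalized weights
c = alpha @ H                                   # Eq. (4): context vector c_i
z_tilde = np.tanh(Wc @ np.concatenate([z, c]))  # Eq. (3): attention output
```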
# 3 Incorporating Bilingual Dictionaries
The word translation pairs in bilingual dictionaries are difficult to use in neural machine translation, mainly because they are rarely or never seen in the parallel training corpus. We attempt to build a bridge between NMT and bilingual dictionaries. We believe the bridge is data transformation, which can transform rarely seen or unseen word translation pairs into frequent ones and provide NMT adequate information to learn latent translation mappings. In this work, we propose two methods to perform data transformation from the character level and the word level respectively.
# 3.1 Mixed Word/Character Model
Given a bilingual dictionary Dic = \{(Dic_x^{(i)}, Dic_y^{(i)})\}_{i=1}^{I}, we focus on the translation lexicons (Dic_x, Dic_y) where Dic_x is a rare or unknown word in the bilingual corpus D_b.
We first introduce data transformation using the character-based method. We all know that words are composed of characters, and most of the characters are frequent even when the word itself is never seen. This idea is popularly used to deploy open-vocabulary NMT (Ling et al., 2015; Costa-Jussà and Fonollosa, 2016; Chung et al., 2016).
Character translation mappings are much easier to learn for NMT than word translation mappings. However, given a character sequence of a source language word, NMT cannot guarantee that the generated character sequence would lead to a valid target language word. Therefore, we prefer the framework mixing words and characters, which is employed by Wu et al. (2016) to handle OOV words. If a word is frequent, we keep it unchanged. Otherwise, we fall back to the character sequence.
We perform data transformation on both the parallel training corpus and the bilingual dictionaries. Here, English sentences and words are adopted as examples. Suppose we keep the English vocabulary V in which the frequency of each word exceeds a threshold K. For each English word w (e.g. oak) in a parallel sentence pair (X_b, Y_b) or in a translation lexicon (Dic_x, Dic_y), if w ∈ V, w will be left as it is. Otherwise, w is re-labelled as a character sequence. For example, oak will be:
oak → ⟨B⟩o ⟨M⟩a ⟨E⟩k    (9)
where ⟨B⟩, ⟨M⟩ and ⟨E⟩ denote respectively the begin, middle and end of a word.
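A minimal sketch of this re-labelling rule in Python (the vocabulary set, the marker strings, and the single-character convention are illustrative):

```python
def relabel(word, vocab):
    """Keep frequent words; fall back to tagged characters for rare/OOV words."""
    if word in vocab:
        return [word]
    chars = list(word)
    if len(chars) == 1:
        return ["<B>" + chars[0]]  # hypothetical convention for 1-char words
    return (["<B>" + chars[0]]
            + ["<M>" + c for c in chars[1:-1]]
            + ["<E>" + chars[-1]])

print(relabel("oak", {"the", "fireworks"}))   # ['<B>o', '<M>a', '<E>k']
```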
# 3.2 Pseudo Sentence Pair Synthesis Model
Since NMT is a data-driven approach, it can learn latent translation mappings for a word pair (Dic_x, Dic_y) if there exist many parallel sentences containing (Dic_x, Dic_y). Along this line, we propose the pseudo sentence pair synthesis model. In this model, we aim at synthesizing, for a rare or unknown translation lexicon (Dic_x, Dic_y), adequate pseudo parallel sentences \{(X_p^j, Y_p^j)\}_{j=1}^{J}, each of which contains (Dic_x, Dic_y).
Although there are not enough bilingual sentence pairs in many languages (and many domains), a huge amount of monolingual data is available on the web. In this paper, we plan to make use of the source-side monolingual data D_sm = \{X_sm^{(m)}\}_{m=1}^{M} (M > N) to synthesize the pseudo bilingual sentence pairs D_bp = \{(X_p^j, Y_p^j)\}_{j=1}^{J}.
For constructing Dbp, we resort to statistical machine translation (SMT) and apply a self-learning
Algorithm 1 Pseudo Sentence Pair Synthesis.
Input: bilingual training data Db; bilingual dictionary Dic; source language monolingual data Dsm; pseudo sentence pair number K for each (Dic_x, Dic_y);
Output: pseudo sentence pairs D_bp = \{(X_p^j, Y_p^j)\}_{j=1}^{J}:
1: Build an SMT system PBMT on {Db, Dic};
2: Dbp = {};
3: for each (Dic_x, Dic_y) in Dic do
4:   Retrieve K monolingual sentences \{X_p^k\}_{k=1}^{K} containing Dic_x from Dsm;
5:   Translate \{X_p^k\}_{k=1}^{K} into \{Y_p^k\}_{k=1}^{K} using PBMT;
6:   Add \{(X_p^k, Y_p^k)\}_{k=1}^{K} into Dbp;
7: end for
8: return Dbp
method as illustrated in Algorithm 1. In contrast to NMT, statistical machine translation (SMT, e.g. phrase-based SMT (Koehn et al., 2007; Xiong et al., 2006)) can easily integrate bilingual dictionaries (Wu et al., 2008), as long as we consider the translation lexicons of bilingual dictionaries as phrasal translation rules. Following (Wu et al., 2008), we first merge the bilingual sentence corpus Db with the bilingual dictionaries Dic, and employ phrase-based SMT to train an SMT system called PBMT (line 1 in Algorithm 1).
For each rare or unknown word translation pair (Dic_x, Dic_y), we can easily retrieve adequate source language monolingual sentences \{X_p^k\}_{k=1}^{K} containing Dic_x from the web or other data collections. PBMT is then applied to translate \{X_p^k\}_{k=1}^{K} to generate target language translations \{Y_p^k\}_{k=1}^{K}. As PBMT employs the bilingual dictionaries Dic as additional translation rules, each target translation sentence Y_p ∈ \{Y_p^k\}_{k=1}^{K} will contain Dic_y. Then, the sentence pair (X_p^k, Y_p^k) will include the word translation pair (Dic_x, Dic_y). Finally, we can pair \{X_p^k\}_{k=1}^{K} and \{Y_p^k\}_{k=1}^{K} to yield pseudo sentence pairs \{(X_p^k, Y_p^k)\}_{k=1}^{K}, which will be added into Dbp (lines 2-6 in Algorithm 1).
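The following Python sketch mirrors Algorithm 1; `pbmt.translate` stands in for the phrase-based SMT system and is hypothetical, as is the simple substring-based retrieval of monolingual sentences.

```python
def retrieve(monolingual, word, k):
    """First k monolingual sentences containing the rare source word."""
    hits = (s for s in monolingual if word in s.split())
    return [s for s, _ in zip(hits, range(k))]

def synthesize_pairs(dictionary, monolingual, pbmt, k):
    pseudo_pairs = []
    for src_word, tgt_word in dictionary:
        sources = retrieve(monolingual, src_word, k)      # step 4
        # Step 5: the dictionary entry acts as a phrase rule inside PBMT,
        # so each translation is expected to contain tgt_word.
        targets = [pbmt.translate(s) for s in sources]
        pseudo_pairs.extend(zip(sources, targets))        # step 6
    return pseudo_pairs
```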
The original bilingual corpus Db and the pseudo bilingual sentence pairs Dbp are combined together to train a new NMT model. Some may worry that the target parts of Dbp are SMT results rather than well-formed sentences, which would harm NMT training. Fortunately, Sennrich et al. (2015b), Cheng et al. (2016b) and Zhang and Zong (2016) observe from large-scale experiments that bilingual data synthesized using the self-learning framework can substantially improve NMT performance. Since Dbp now contains the bilingual dictionaries, we expect that the NMT trained on {Db, Dbp} can not only significantly boost the translation quality, but also solve the problem of rare word translation if the words are covered by Dic.
Note that the pseudo sentence pair synthesis model can be further augmented by the mixed word/character model to solve other OOV translations.
# 4 Experimental Settings
In this section we describe the data sets, data preprocessing, the training and evaluation details, and all the translation methods we compare in the experiments.
# 4.1 Dataset
We perform the experiments on Chinese-to-English translation. Our bilingual training data Db includes 630K2 sentence pairs (each sentence length is limited up to 50 words) extracted from LDC corpora3. For validation, we choose the NIST 2003 (MT03) dataset. For testing, we use the NIST 2004 (MT04), NIST 2005 (MT05), NIST 2006 (MT06) and NIST 2008 (MT08) datasets. The test sentences are kept at their original length. As for the source-side monolingual data Dsm, we collect about 100M Chinese sentences, in which approximately 40% are provided by Sogou and the rest are collected by searching the words in the bilingual data from the web. We use two bilingual dictionaries: one is from LDC (LDC2002L27) and the other is manually collected by ourselves. The combined dictionary Dic contains 86,252 translation lexicons in total.
# 4.2 Data Preprocessing
If necessary, the Chinese sentences are word segmented using the Stanford Word Segmenter4. The English sentences are tokenized using the tokenizer script from the Moses decoder5. We limit the vocabulary in both Chinese and English using a frequency
2Without using very large-scale data, it is relatively easy to evaluate the effectiveness of the bilingual dictionaries.
3LDC2000T50, LDC2002E18, LDC2002T01, LDC2003E07, LDC2003E14, LDC2003T17, LDC2004T07. 4http://nlp.stanford.edu/software/segmenter.shtml 5http://www.statmt.org/moses/
threshold u. We choose uc = 10 for Chinese and ue = 8 for English, resulting in |Vc| = 38815 and |Ve| = 30514 for Chinese and English respectively in Db. As we focus on rare or unseen translation lexicons of the bilingual dictionary Dic in this work, we filter Dic and retain the entries (Dic_x, Dic_y) with Dic_x ∉ Vc, resulting in 8306 entries, of which 2831 appear in the validation and test data sets. All the OOV words are replaced with UNK in the word-based NMT and are re-labelled into character sequences in the mixed word/character model.
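A small sketch of the frequency-thresholded vocabulary construction described above (pure Python; the function name is ours):

```python
from collections import Counter

def build_vocab(sentences, threshold):
    """Keep only words whose corpus frequency exceeds the threshold."""
    counts = Counter(w for s in sentences for w in s.split())
    return {w for w, c in counts.items() if c > threshold}

# e.g. V_e = build_vocab(english_side, threshold=8)   # u_e = 8 for English
```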
# 4.3 Training and Evaluation Details
We build the described models using the Zoph RNN toolkit6, which is implemented in C++/CUDA and provides training across multiple GPUs. In the NMT architecture as illustrated in Fig. 2, the encoder includes two stacked LSTM layers, followed by a global attention layer, and the decoder also contains two stacked LSTM layers followed by the softmax layer. The word embedding dimension and the size of the hidden layers are all set to 1000.
Each NMT model is trained on a K80 GPU using the stochastic gradient descent algorithm AdaGrad (Duchi et al., 2011). We use a mini-batch size of B = 128 and we run a total of 20 iterations over all the data sets. The training time for each model ranges from 2 days to 4 days. At test time, we employ beam search with beam size b = 10. We use the case-insensitive 4-gram BLEU score as the automatic metric (Papineni et al., 2002) for translation quality evaluation.
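For illustration, a minimal beam-search sketch in pure Python; `step` is a hypothetical callable returning (token, log-probability) candidates for a given prefix, standing in for the NMT softmax, with beam size b = 10 as in the paper.

```python
def beam_search(step, eos, b=10, max_len=100):
    beams = [([], 0.0)]                       # (prefix, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == eos:  # finished hypotheses carry over
                candidates.append((prefix, score))
                continue
            for tok, logp in step(prefix):    # expand with candidate tokens
                candidates.append((prefix + [tok], score + logp))
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:b]
        if all(p and p[-1] == eos for p, _ in beams):
            break
    return beams[0][0]                        # best-scoring hypothesis
```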
# 4.4 Translation Methods
In the experiments, we compare our method with the conventional SMT model and the baseline attention-based NMT model. We list all the translation methods as follows:
• Moses: It is the state-of-the-art phrase-based SMT system (Koehn et al., 2007). We use its default configuration and train a 4-gram language model on the target portion of the bilingual training data.
• Zoph RNN: It is the baseline attention-based NMT system (Luong et al., 2015a; Zoph et al., 2016) using two stacked LSTM layers for both the encoder and the decoder.
6https://github.com/isi-nlp/Zoph_RNN
| Method | Vc | Ve | MT03 | MT04 | MT05 | MT06 | MT08 | Ave |
|---|---|---|---|---|---|---|---|---|
| Moses | - | - | 30.30 | 31.04 | 28.19 | 30.04 | 23.20 | 28.55 |
| Zoph RNN | 38815 | 30514 | 34.77 | 37.40 | 32.94 | 33.85 | 25.93 | 32.98 |
| Zoph RNN-mixed | 42769 | 30630 | 35.57 | 38.07 | 34.44 | 36.07 | 26.81 | 34.19 |
| Zoph RNN-mixed-dic | 42892 | 30630 | 36.29 | 38.75 | 34.86 | 36.57 | 27.04 | 34.70 |
| Zoph RNN-pseudo (K = 10) | 42133 | 32300 | 35.66 | 38.02 | 34.66 | 36.51 | 27.65 | 34.50 |
| Zoph RNN-pseudo-dic (K = 10) | 42133 | 31734 | 36.48 | 38.59 | 35.81 | 38.14 | 28.65 | 35.53 |
| Zoph RNN-pseudo (K = 20) | 43080 | 32813 | 35.00 | 36.99 | 34.22 | 36.09 | 26.80 | 33.82 |
| Zoph RNN-pseudo-dic (K = 20) | 43080 | 32255 | 36.92 | 38.63 | 36.09 | 38.13 | 29.53 | 35.86 |
| Zoph RNN-pseudo (K = 30) | 44162 | 33357 | 36.07 | 37.74 | 34.63 | 36.66 | 27.58 | 34.54 |
| Zoph RNN-pseudo-dic (K = 30) | 44162 | 32797 | 37.26 | 39.01 | 36.64 | 38.50 | 30.17 | 36.32 |
| Zoph RNN-pseudo (K = 40) | 45195 | 33961 | 35.44 | 37.96 | 34.89 | 36.92 | 27.80 | 34.60 |
| Zoph RNN-pseudo-dic (K = 40) | 45195 | 33399 | 36.93 | 39.15 | 36.85 | 38.77 | 30.25 | 36.39 |
| Zoph RNN-pseudo-mixed (K = 40) | 45436 | 32659 | 38.17 | 39.55 | 36.86 | 38.53 | 28.46 | 36.31 |
| Zoph RNN-pseudo-mixed-dic (K = 40) | 45436 | 32421 | 38.66 | 40.78 | 38.36 | 39.56 | 30.64 | 37.60 |
Table 1: Translation results (BLEU score) for different translation methods. K = 10 denotes that we synthesize 10 pseudo sentence pairs for each word translation pair (Dicx, Dicy). The column |Vc| (|Ve|) reports the vocabulary size limited by frequency threshold uc = 10 (ue = 8). Note that all the NMT systems use the single model rather than the ensemble model.
• Zoph RNN-mixed-dic: It is our NMT system which integrates the bilingual dictionaries by re-labelling the rare or unknown words with character sequences in both the bilingual training data and the bilingual dictionaries. Zoph RNN-mixed indicates that the mixed word/character model is performed only on the bilingual training data and the bilingual dictionary is not used.
• Zoph RNN-pseudo-dic: It is our NMT system that integrates the bilingual dictionaries by synthesizing adequate pseudo sentence pairs that contain the focused rare or unseen translation lexicons. Zoph RNN-pseudo means that the target language parts of the pseudo sentence pairs are obtained by the SMT system PBMT without using the bilingual dictionary Dic.
• Zoph RNN-pseudo-mixed-dic: It is a NMT system combining the two methods Zoph RNN-pseudo and Zoph RNN-mixed. Zoph RNN-pseudo-mixed is the corresponding combination without the bilingual dictionary, i.e. the mixed word/character model applied to Zoph RNN-pseudo.

# 5 Translation Results and Analysis

For translation quality evaluation, we attempt to figure out the following three questions: 1) Could the employed attention-based NMT outperform SMT even on less than 1 million sentence pairs? 2) Which model is more effective for integrating the bilingual dictionaries: the mixed word/character model or the pseudo sentence pair synthesis model? 3) Can the two proposed methods, when combined, further boost the translation performance?

# 5.1 NMT vs. SMT

Table 1 reports the detailed translation quality for different methods. Comparing the first two lines in Table 1, it is very obvious that the attention-based NMT system Zoph RNN substantially outperforms the phrase-based SMT system Moses on just 630K bilingual Chinese-English sentence pairs. The gap can be as large as 6.36 absolute BLEU points on MT04. The average improvement is up to 4.43 BLEU points (32.98 vs 28.55). It is in line with the findings reported in (Wu et al., 2016; Junczys-Dowmunt et al., 2016), which conducted experiments on tens of millions or even more parallel sentence pairs. Our experiments further show that NMT can still be much better even when we have less than 1 million sentence pairs.
# 5.2 The Effect of The Mixed W/C Model
The two lines (3-4 in Table 1) present the BLEU scores when applying the mixed word/character model. This model markedly improves the translation quality over the baseline attention-based NMT, although the idea behind it is very simple.
The system Zoph RNN-mixed, trained only on the bitext Db, achieves an average improvement of more than 1.0 BLEU point (34.19 vs 32.98) over the baseline Zoph RNN. It indicates that the mixed word/character model can alleviate the OOV translation problem to some extent.
| Chinese Word (pinyin) | Translation | Correct |
|---|---|---|
| zhùliú | remain | remain |
| dōngjiā | owner | owner |
| lièyàn | blaze | blaze |
| ānwèijì | placebo | placebo |
| hǎixiào | tsunami | tsunami |
| jìngmài | intravenous | intravenous |
| fǎnyìnglú | anti-subsidization | reactor |
| huángpǔjiāng | lingchiang river | huangpu river |
| chāochēdào | take-owned lane | overtaking lane |

Table 2: The effect of the Zoph RNN-mixed-dic model in using bilingual dictionaries. The Chinese word is written in Pinyin. The first two parts are positive word translation examples, while the third part shows some bad cases.
For example, the number 31.3 is an OOV word in Chinese. The mixed model transforms this word into ⟨B⟩3 ⟨M⟩1 ⟨M⟩. ⟨E⟩3 and it is correctly copied into the target side, yielding the correct translation 31.3. Moreover, some named entities (e.g. the person name hecker) can be well translated. When adding the bilingual dictionary Dic as training data, the system Zoph RNN-mixed-dic further gets a moderate improvement of 0.51 BLEU points (34.70 vs 34.19) on average. We find that the mixed model could make use of some rare or unseen translation lexicons in NMT, as illustrated in the first two parts of Table 2. In the first part of Table 2, the English side of the translation lexicon is a frequent word (e.g. remain). The frequent Chinese character (e.g. zhù) shares most of the meaning of the whole word (zhùliú) and thus it could be correctly translated into remain. We are a little surprised by the examples in the second part of Table 2, since the correct English parts are all OOV words, which requires each English character to be correctly generated. It demonstrates that the mixed model has some ability to predict the correct character sequence. However, this mixed model fails in many scenarios. The third part in Table 2 gives some bad cases. If the first predicted character is wrong, the final word translation will be incorrect (e.g. take-owned lane vs. overtaking lane). This is the main reason why the mixed model could not obtain large improvements.
# 5.3 The Effect of Data Synthesis Model
The eight lines (5-12) in Table 1 show the translation performance of the pseudo sentence pair synthesis model. We can analyze the results from three perspectives: 1) the effect of the self-
| Model | Hit rate |
|---|---|
| mixed-dic | 0.36 |
| pseudo-dic (K = 10) | 0.71 |
| pseudo-dic (K = 20) | 0.76 |
| pseudo-dic (K = 30) | 0.78 |
| pseudo-dic (K = 40) | 0.79 |

Table 3: The hit rate of the bilingual dictionary for different models.
learning method for using the source-side monolingual data; 2) the effect of the bilingual dictionary; and 3) the effect of the pseudo sentence pair number.
The results (lines with Zoph RNN-pseudo) demonstrate that the parallel sentence pairs synthesized using source-side monolingual data can significantly improve the baseline NMT Zoph RNN, and the average improvement can be up to 1.62 BLEU points (34.60 vs. 32.98). This finding is also reported by Cheng et al. (2016b) and Zhang and Zong (2016).
When augmenting Zoph RNN-pseudo with bilingual dictionaries, we can further obtain considerable gains. The largest average improvement can be 3.41 BLEU points when compared to the baseline NMT Zoph RNN and 2.04 BLEU points when compared to Zoph RNN-pseudo (35.86 vs. 33.82).
When investigating the effect of the pseudo sentence pair number (from K = 10 to K = 40), we find that the performance keeps improving as we synthesize more pseudo sentence pairs for each rare or unseen word translation pair (Dic_x, Dic_y). We can also notice that the improvement gets smaller and smaller as K grows.
# 5.4 Mixed W/C Model vs. Data Synthesis Model
Comparing the results between the mixed model and the data synthesis model (Zoph RNN-mixed-dic vs. Zoph RNN-pseudo-dic) in Table 1, we can easily see that the data synthesis model is much better at integrating bilingual dictionaries into NMT. Zoph RNN-pseudo-dic substantially outperforms Zoph RNN-mixed-dic by an average improvement of up to 1.69 BLEU points (36.39 vs. 34.70).
Through a deep analysis, we find that most of the rare or unseen words in the test sets can be well translated by Zoph RNN-pseudo-dic if they are covered by the bilingual dictionary. Table 3 reports the hit rate of the bilingual dictionaries. A hit rate of 0.71 indicates that 2010 (2831 × 0.71) words among the 2831 covered rare or unseen words in the test set can
be correctly translated. This table explains why Zoph RNN-pseudo-dic performs much better than Zoph RNN-mixed-dic.
The last two lines in Table 1 demonstrate that the combined method can further boost the translation quality. The biggest average improvement over the baseline NMT Zoph RNN can be as large as 4.62 BLEU points, which is very promising. We believe that this method fully exploits the capacity of the data synthesis model and the mixed model. Zoph RNN-pseudo-dic can well incorporate the bilingual dictionary and Zoph RNN-mixed can well handle the OOV word translation. Thus, the combined method is the best.
One may argue that the proposed methods use a bigger vocabulary and that the performance gains may be attributed to the increased vocabulary size. We further conduct an experiment for the baseline NMT Zoph RNN by enlarging the vocabulary to |Vc| = 46000 and |Ve| = 34000. We find that this setting decreases the translation quality by an average of 0.88 BLEU points (32.10 vs. 32.98). This further verifies the superiority of our proposed methods.
# 6 Related Work
The recently proposed neural machine translation has drawn more and more attention. Most of the existing methods mainly focus on designing better attention models (Luong et al., 2015b; Cheng et al., 2016a; Cohn et al., 2016; Feng et al., 2016; Liu et al., 2016; Meng et al., 2016; Mi et al., 2016a; Mi et al., 2016b; Tu et al., 2016), better objective functions for BLEU evaluation (Shen et al., 2016), better strategies for handling the open vocabulary (Ling et al., 2015; Luong et al., 2015c; Jean et al., 2015; Sennrich et al., 2015b; Costa-Jussà and Fonollosa, 2016; Lee et al., 2016; Li et al., 2016; Mi et al., 2016c; Wu et al., 2016) and exploiting large-scale monolingual data (Gulcehre et al., 2015; Sennrich et al., 2015a; Cheng et al., 2016b; Zhang and Zong, 2016).
Our focus in this work is to fully integrate the discrete bilingual dictionaries into NMT. The most related works lie in three aspects: 1) applying the character-based method to deal with the open vocabulary; 2) making use of synthesized data in NMT; and 3) incorporating translation lexicons in NMT.
Ling et al. (2015), Costa-Jussà and Fonollosa (2016) and Sennrich et al. (2015b) propose purely character-based or subword-based neural machine translation to circumvent the open word vocabulary problem. Luong et al. (2015c) and Wu et al. (2016) present the mixed word/character model which utilizes character sequences to replace the OOV words. We introduce the mixed model to integrate the bilingual dictionaries and find that it is useful but not the best method.
Sennrich et al. (2015a) propose an approach to use target-side monolingual data to synthesize the bitexts. They generate the synthetic bilingual data by translating the target monolingual sentences into source language sentences and retrain NMT with the mixture of the original bilingual data and the synthetic parallel data. Cheng et al. (2016b) and Zhang and Zong (2016) also investigate the effect of the synthesized parallel sentences. They report that the pseudo sentence pairs synthesized using the source-side monolingual data can significantly improve the translation quality. These studies inspire us to leverage the synthesized data to incorporate the bilingual dictionaries in NMT.
Very recently, Arthur et al. (2016) try to use discrete translation lexicons in NMT. Their approach attempts to employ the discrete translation lexicons to positively influence the probability distribution of the output words in the NMT softmax layer. However, their approach only focuses on the words that belong to the vocabulary, and the out-of-vocabulary (OOV) words are not considered. In contrast, we concentrate on the word translation lexicons which are rarely or never seen in the bilingual training data. It is a much tougher problem. The extensive experiments demonstrate that our proposed models, especially the data synthesis model, can solve this problem very well.
# 7 Conclusions and Future Work
In this paper, we have presented two models to bridge neural machine translation and the bilingual dictionaries in which translation lexicons are rarely or never seen in the bilingual training data. Our proposed methods focus on a data transformation mechanism which guarantees the massive and repetitive occurrence of the translation lexicon.
The mixed word/character model tackles this problem by re-labelling the OOV words with character sequences, while our data synthesis model constructs adequate pseudo sentence pairs for each translation lexicon. The extensive experiments show that the data synthesis model substantially outperforms the mixed word/character model, and the combined method performs best. All of the proposed methods obtain promising improvements over the baseline NMT. We further find that more than 70% of the rare or unseen words in the test sets can get correct translations as long as they are covered by the bilingual dictionary.
Currently, the data synthesis model does not distinguish the original bilingual training data from the synthesized parallel sentences, in which the target sides are SMT translation results. In future work, we plan to modify the neural network structure to avoid the negative effect of the SMT translation noise.
# References
[Arthur et al.2016] Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. arXiv preprint arXiv:1606.02006.
[Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
[Cheng et al.2016a] Yong Cheng, Shiqi Shen, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016a. Agreement-based joint training for bidirectional attention-based neural machine translation. In Proceedings of AAAI 2016.
[Cheng et al.2016b] Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016b. Semi-supervised learning for neural machine translation. In Proceedings of ACL 2016.
[Cho et al.2014] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014.

[Chung et al.2016] Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147.

[Cohn et al.2016] Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of NAACL 2016.

[Costa-Jussà and Fonollosa2016] Marta R Costa-Jussà and José AR Fonollosa. 2016. Character-based neural machine translation. arXiv preprint arXiv:1603.00810.
[Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159.

[Feng et al.2016] Shi Feng, Shujie Liu, Mu Li, and Ming Zhou. 2016. Implicit distortion and fertility models for attention-based encoder-decoder NMT model. arXiv preprint arXiv:1601.03317.

[Gulcehre et al.2015] Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535.

[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.

[Jean et al.2015] Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL 2015.
[Junczys-Dowmunt et al.2016] Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation directions. arXiv preprint arXiv:1610.01108.

[Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP 2013.

[Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL 2007, pages 177–180.

[Lee et al.2016] Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2016. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017.

[Li et al.2016] Xiaoqing Li, Jiajun Zhang, and Chengqing Zong. 2016. Towards zero unknown word in neural machine translation. In Proceedings of IJCAI 2016.

[Ling et al.2015] Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neural machine translation. arXiv preprint arXiv:1511.04586.
[Liu et al.2016] Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. arXiv preprint arXiv:1609.04186.
[Luong et al.2015a] Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114.
[Luong et al.2015b] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015b. Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP 2015.

[Luong et al.2015c] Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015c. Addressing the rare word problem in neural machine translation. In Proceedings of ACL 2015.

[Meng et al.2016] Fandong Meng, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Interactive attention for neural machine translation. arXiv preprint arXiv:1610.05011.
[Mi et al.2016a] Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016a. A coverage embedding model for neural machine translation. In Proceedings of EMNLP 2016.

[Mi et al.2016b] Haitao Mi, Zhiguo Wang, Niyu Ge, and Abe Ittycheriah. 2016b. Supervised attentions for neural machine translation. In Proceedings of EMNLP 2016.

[Mi et al.2016c] Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016c. Vocabulary manipulation for large vocabulary neural machine translation. In Proceedings of ACL 2016.
[Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311–318.
[Sennrich et al.2015a] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.

[Sennrich et al.2015b] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
[Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of ACL 2016.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS 2014.
[Tu et al.2016] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Coverage-based neural machine translation. In Proceedings of ACL 2016.
[Wu et al.2008] Hua Wu, Haifeng Wang, and Chengqing Zong. 2008. Domain adaptation for statistical machine translation with domain dictionary and monolingual corpora. In Proceedings of COLING 2008, pages 993–1000.
[Wu et al.2016] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
[Xiong et al.2006] Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of ACL-COLING, pages 521–528. Association for Computational Linguistics.
[Zhang and Zong2016] Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of EMNLP.
[Zoph et al.2016] Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of NAACL 2016.
1610.04286 | Sim-to-Real Robot Learning from Pixels with Progressive Nets | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards. | http://arxiv.org/pdf/1610.04286 | Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell | cs.RO, cs.LG | null | null | cs.RO | 20161013 | 20180522
# Sim-to-Real Robot Learning from Pixels with Progressive Nets
Andrei A. Rusu DeepMind London, UK andreirusu@google.com
Mel Večerík DeepMind London, UK matejvecerik@google.com
Thomas Rothörl DeepMind London, UK tcr@google.com
Nicolas Heess DeepMind London, UK heess@google.com
Razvan Pascanu DeepMind London, UK razp@google.com
Raia Hadsell DeepMind London, UK raia@google.com
Abstract: Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
Keywords: Robot learning, transfer, progressive networks, sim-to-real, CoRL.
# 1 Introduction
Deep Reinforcement Learning offers new promise for achieving human-level control in robotics domains, especially for pixel-to-action scenarios where state estimation is from high dimensional sensors and environment interaction and feedback are critical. With deep RL, a new set of algorithms has emerged that can attain sophisticated, precise control on challenging tasks, but these accomplishments have been demonstrated primarily in simulation, rather than on actual robot platforms.

While recent advances in simulation-driven deep RL are impressive [1, 2, 3, 4, 5, 6, 7], demonstrating learning capabilities on real robots remains the bar by which we must measure the practical applicability of these methods. However, this poses a significant challenge, given the "data-hungry" training regime required for current pixel-based deep RL methods, and the relative frailty of research robots and their human handlers. One solution is to use transfer learning methods to bridge the reality gap that separates simulation from real world domains. In this paper, we use progressive networks, a deep learning architecture that has recently been proposed for transfer learning, to demonstrate such an approach, thus providing a proof-of-concept pathway by which deep RL can be used to effect fast policy learning on a real robot.
Progressive nets have been shown to produce positive transfer between disparate tasks such as Atari games by utilizing lateral connections to previously learnt models [8]. The addition of new capacity for each new task allows specialized input features to be learned, an important advantage for deep RL algorithms which are improved by sharply-tuned perceptual features. An advantage of progressive
1st Conference on Robot Learning (CoRL 2017), Mountain View, United States.
nets compared with other methods for transfer learning or domain adaptation is that multiple tasks may be learned sequentially, without needing to specify source and target tasks.
This paper presents an approach for transfer from simulation to the real robot that is proven using real-world, sparse-reward tasks. The tasks are learned using end-to-end deep RL, with RGB inputs and joint velocity output actions. First, an actor-critic network is trained in simulation using multiple asynchronous workers [6]. The network has a convolutional encoder followed by an LSTM. From the LSTM state, using a linear layer, we compute a set of discrete action outputs that control the different degrees of freedom of the simulated robot as well as the value function. After training, a new network is initialized with lateral, nonlinear connections to each convolutional and recurrent layer of the simulation-trained network. The new network is trained on a similar task on the real robot. Our initial findings show that the inductive bias imparted by the features and encoded policy of the simulation net is enough to give a dramatic learning speed-up on the real robot.
# 2 Transfer Learning from Simulation to Real
Our approach relies on the progressive nets architecture, which enables transfer learning through lateral connections which connect each layer of previously learnt network columns to each new column, thus supporting rich compositionality of features. We first summarize progressive nets, and then we discuss their application for transfer in robot domains.
# 2.1 Progressive Networks
Progressive networks are ideal for simulation-to-real transfer of policies in robot control domains, for multiple reasons. First, features learnt for one task may be transferred to many new tasks without destruction from fine-tuning. Second, the columns may be heterogeneous, which may be important for solving different tasks, including different input modalities, or simply to improve learning speed when transferring to the real robot. Third, progressive nets add new capacity, including new input connections, when transferring to new tasks. This is advantageous for bridging the reality gap, to accommodate dissimilar inputs between simulation and real sensors.
A progressive network starts with a single column: a deep neural network having L layers with hidden activations $h_i^{(1)} \in \mathbb{R}^{n_i}$, with $n_i$ the number of units at layer $i \le L$, and parameters $\Theta^{(1)}$ trained to convergence. When switching to a second task, the parameters $\Theta^{(1)}$ are "frozen" and a new column with parameters $\Theta^{(2)}$ is instantiated (with random initialization), where layer $h_i^{(2)}$ receives input from both $h_{i-1}^{(2)}$ and $h_{i-1}^{(1)}$ via lateral connections. Progressive networks can be generalized in a straightforward manner to have arbitrary network width per column/layer, to accommodate varying degrees of task difficulty, or to compile lateral connections from multiple, independent networks in an ensemble setting.

$$h_i^{(k)} = f\Big( W_i^{(k)} h_{i-1}^{(k)} + \sum_{j<k} U_i^{(k:j)} h_{i-1}^{(j)} \Big), \qquad (1)$$

where $W_i^{(k)} \in \mathbb{R}^{n_i \times n_{i-1}}$ are the weights of layer $i$ of column $k$, $U_i^{(k:j)} \in \mathbb{R}^{n_i \times n_j}$ are the lateral connections from layer $i-1$ of column $j$ to layer $i$ of column $k$, and $h_0$ is the network input. $f$ is an element-wise non-linearity: we use $f(x) = \max(0, x)$ for all intermediate layers.
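To make Eq. (1) concrete, the following is a minimal NumPy sketch of a single progressive-net layer for column k; all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def progressive_layer(h_prev_own, h_prev_laterals, W, U_list):
    """Compute h_i^(k) = f(W_i^(k) h_{i-1}^(k) + sum_{j<k} U_i^(k:j) h_{i-1}^(j)).

    h_prev_own      -- activations h_{i-1}^(k) of the current column, shape (n_{i-1},)
    h_prev_laterals -- list of h_{i-1}^(j) from frozen columns j < k
    W               -- weight matrix of the current column, shape (n_i, n_{i-1})
    U_list          -- lateral matrices U_i^(k:j), one per previous column
    """
    pre = W @ h_prev_own
    for U, h_j in zip(U_list, h_prev_laterals):
        pre = pre + U @ h_j          # lateral contribution from a frozen column
    return relu(pre)

# Tiny example: column 2 of width 4 with one frozen column of width 8.
rng = np.random.default_rng(0)
h1 = rng.standard_normal(8)          # frozen column activations
h2 = rng.standard_normal(4)          # current column activations
W = rng.standard_normal((4, 4)) * 0.1
U = rng.standard_normal((4, 8)) * 0.1
print(progressive_layer(h2, [h1], W, [U]).shape)   # (4,)
```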
In the standard pretrain-and-finetune paradigm, there is often an implicit assumption of "overlap" between the tasks. Finetuning is efficient in this setting, as parameters need only be adjusted slightly to the target domain, and often only the top layer is retrained. In contrast, we make no assumptions about the relationship between tasks, which may in practice be orthogonal or even adversarial. Progressive networks side-step this issue by allocating a new column, potentially with different structure or inputs, for each new task. Columns in progressive networks are free to reuse, modify or ignore previously learned features via the lateral connections.
Application to Reinforcement Learning. Although progressive networks are widely applicable, this paper focuses on their application to deep reinforcement learning. In this case, each column is trained to solve a particular Markov Decision Process (MDP): the k-th column thus defines a policy
$\pi^{(k)}(a \mid s)$ taking as input a state $s$ given by the environment, and generating probabilities over actions $\pi^{(k)}(a \mid s) := h_L^{(k)}(s)$. At each time-step, an action is sampled from this distribution and taken in the environment, yielding the subsequent state. This policy implicitly defines a stationary distribution $\rho^{\pi^{(k)}}(s, a)$ over states and actions.
# 2.2 Approach
The proposed approach for transfer from simulated to real robot domains is based on a progressive network with some specific changes. First, the columns of a progressive net do not need to have identical capacity or structure, and this can be an advantage in sim-to-real situations. Thus, the simulation-trained column is designed to have sufficient capacity and depth to learn the task from scratch, but the robot-trained columns have minimal capacity, to encourage fast learning and limit total parameter growth. Secondly, the layer-wise adapters proposed for progressive nets are unnecessary for the output layers of complementary sequences of tasks, so they are not used. Third, the output layer of the robot-trained column is initialised from the simulation-trained column in order to improve exploration. These architectural features are shown in Fig. 1.
Figure 1: Depiction of a progressive network, left, and a modified progressive architecture used for robot transfer learning, right. The first column is trained on Task 1, in simulation, the second column is trained on Task 1 on the robot, and the third column is trained on Task 2 on the robot. Columns may differ in capacity, and the adapter functions (marked "a") are not used for the output layers of this non-adversarial sequence of tasks.
The greatest risk in this approach to transfer learning is that rewards will be so sparse, or non-existent, in the real domain that the reinforcement learning will not improve a vastly suboptimal initial policy within a practical time frame. Thus, in order to maximise the likelihood of reward during exploration in the real domain, the new column is initialised such that the initial policy of the agent will be identical to the previous column. This is accomplished by initialising the weights coming from the last layer of the previous column to the output layer of the new column with the output weights of the previous column, while the connections incoming from the last hidden layer of the current column are initialised with zero-valued weights. Thus, using the example network in Fig. 1 (right), when parameters $\Theta^{(2)}$ are instantiated, layer $\text{output}^{(2)}$ takes input from both $h_2^{(1)}$ and $h_2^{(2)}$. However, unlike the other parameters in $\Theta^{(2)}$, which will be randomly initialised, the weights $W^{(2)}_{\text{out}}$ will be zeros and the weights $U^{(1:2)}_{\text{out}}$ will be copied from $W^{(1)}_{\text{out}}$. Note that this only affects the initial policy of the agent and does not prevent the new column from training.
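The zero/copy split described above can be sketched in a few lines; `prev_W_out` and the helper name are hypothetical, and this is only an illustration of the initialisation scheme, not the authors' code.

```python
import numpy as np

def init_output_layer(prev_W_out, n_hidden_new):
    """Initialise the new column's output layer so the initial policy
    matches the previous, frozen column.

    prev_W_out   -- output weights of the frozen column, shape (n_out, n_prev)
    n_hidden_new -- width of the new column's last hidden layer
    """
    n_out = prev_W_out.shape[0]
    W_out_new = np.zeros((n_out, n_hidden_new))   # current-column weights: zeros
    U_out_new = prev_W_out.copy()                 # lateral weights: copied over
    return W_out_new, U_out_new

# The initial pre-activation is U_out_new @ h_prev + W_out_new @ h_new,
# which equals the frozen column's output regardless of h_new.
```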
# 3 Related Literature
There exist many different paradigms for domain transfer and many approaches designed specifically for deep neural models, but substantially fewer approaches for transfer from simulation to reality for robot domains. Even more rare are methods that can be used for transfer in interactive, rich sensor domains using end-to-end (pixel-to-action) learning.
A growing body of work has been investigating the ability of deep networks to transfer between domains. Some research [9, 10] considers simply augmenting the target domain data with data from the source domain where an alignment exists. Building on this work, [11] starts from the observation that as one looks at higher layers in the model, the transferability of the features decreases quickly. To correct this effect, a soft constraint is added that enforces the distribution of the features to be
more similar. In [11], a "confusion" loss is proposed which forces the model to ignore variations in the data that separate the two domains [12, 13].
Based on [12], [14] attempts to address the simulation to reality gap by using aligned data. The work is focused on pose estimation of the robotic arm, where training happens on a triple loss that looks at aligned simulation to real data, including the domain confusion loss. The paper does not show the efficiency of the method on learning novel complex policies.
Several recent works from the supervised learning literature, e.g. [15, 16, 17], demonstrate how ideas from the adversarial training of neural networks can be used to reduce the sensitivity of a trained network to inter-domain variations, without requiring aligned training data. Intuitively these approaches train a representation that makes it hard to distinguish between data points drawn from the different domains. These ideas have, however, not yet been tested in the context of control. Demonstrating the difficulty of the problem, [10] provides evidence that a simple application of a model trained on synthetic data on the real robot fails. The paper also shows that the main failure point is the discrepancy in visual cues between simulation and reality.
Partial success on transferring from simulation to a real robot has been reported [18, 19, 20]. They focus primarily on the problem of transfer from a more restricted, simpler version of a task to the full, more difficult version. While transfer from simulation to reality remains difficult, progress has been made with directly learning neural network control policies on a real robot, both from low-dimensional representations of the state and from visual input (e.g. [21], [22]). While the results are impressive, to achieve sufficient data efficiency these works currently rely on relatively restrictive task setups, specialized visual architectures, and carefully designed training regimes. Alternative approaches embrace big data ideas for robotics ([23, 24]).
# 4 Experiments
For training in simulation, we use the Asynchronous Advantage Actor-Critic (A3C) framework introduced in [6]. Compared to DQN [25], the model simultaneously learns a policy and a value function for predicting expected future rewards, and can be trained with CPUs, using multiple threads. A3C has been shown to converge faster than DQN, which makes it advantageous for research experimentation.
For the manipulation domain of the Jaco arm, the agent policy controls nine degrees of freedom using velocity commands. This includes six joints on the arm plus three actuated fingers. The full policy $\pi(A \mid s, \theta)$ comprises nine joint policies learnt by the agent, each one a softmax connected to the inputs from the previous layer and any lateral connections. Each joint policy $i$ has three actions (a fixed positive velocity, a fixed negative velocity, and a zero velocity): $\pi_i(a_i \mid s; \theta_i)$. This discrete action set, while potentially lacking the precision of a continuous control policy, has worked well in practice. There is also a single value function that is linearly connected to the previous layer and lateral layers: $V(s, \theta_v)$.
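A minimal PyTorch sketch of such a factored head is given below; layer names and the feature dimensionality are assumptions, and the real agent also includes the convolutional encoder and lateral connections omitted here.

```python
import torch
import torch.nn as nn

class FactoredJointPolicy(nn.Module):
    """Nine independent 3-way softmax heads plus a scalar value head,
    matching the 28 outputs described above (9 joint policies + value)."""

    def __init__(self, feat_dim, n_joints=9, n_actions=3):
        super().__init__()
        self.policy = nn.Linear(feat_dim, n_joints * n_actions)
        self.value = nn.Linear(feat_dim, 1)
        self.n_joints, self.n_actions = n_joints, n_actions

    def forward(self, feats):
        logits = self.policy(feats).view(-1, self.n_joints, self.n_actions)
        probs = torch.softmax(logits, dim=-1)        # one distribution per joint
        return probs, self.value(feats).squeeze(-1)

# Sampling one velocity command (-v, 0, +v) per joint:
head = FactoredJointPolicy(feat_dim=128)
probs, v = head(torch.randn(1, 128))
actions = torch.distributions.Categorical(probs=probs).sample()  # shape (1, 9)
```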
We evaluate both feedforward and recurrent neural networks. Both have convolutional input layers followed by either a fully connected layer or an LSTM. A standard-sized network is used for the simulation-trained column and a reduced-capacity network is used for the robot-trained columns, chosen because we found empirically that more capacity does not accelerate learning (see Section 4.2), presumably because of the features reused from the previous column. Details of the architecture are given in Figure 2 and Table 1. In all variants, the input is 3x64x64 pixels and the output is 28 (9 discrete joint policies plus one value function).
The MuJoCo physics simulator [26] is used to train the first column for our experiments, with a rendered camera view to provide observations. In the real domain, a similarly positioned RGB camera provides the input. While the modeled Jaco and its dynamics are quite accurate, the visual discrepancies are obvious, as shown in Figure 3.
The experiments are all focused around the task of reaching to a visual target, with only pure rewards provided as feedback (no shaped rewards). Though simple, this task requires that the state of the arm and the position of the target are correctly inferred from visual observations, and that the agent learns robust control over a high-dimensional state space. The arm is set to a random start position at the beginning of every episode, and the target is placed randomly within a 40cm by 30cm area. The agent receives a reward of +1 if its palm is within 10cm of the target, and episodes last for at most
              feedforward        recurrent
              wide    narrow     wide    narrow
fc (output)   28      28         28      28
LSTM          -       -          128     16
fc            512     32         128     16
conv 2        32      8          32      8
conv 1        16      8          16      8
params        621K    39K        299K    37K
Figure 2: Detailed schematic of progressive recurrent network architecture. The activations of the LSTM are connected as inputs to the progressive column. The factored policy and single value function are shown.
Table 1: Network sizes for wide columns (simulation-trained) and narrow columns (robot-trained). For all networks, the first convolutional layer uses 8x8, stride 4 kernels and the second uses 5x5, stride 2 kernels. The total parameters include the lateral connections.
Figure 3: Sample images from the real camera input image and the MuJoCo-rendered image. Though a more realistic model appearance could have been used, the blocky Jaco model was used to accelerate MuJoCo rendering, which was done on CPUs. The images show the diversity of Jaco start positions and target positions.
50 steps. Though there is some variance due to randomized starting states, a well-performing agent can achieve an average score of over 30 points by quickly reaching to the target and remaining in safe positions at all times. The episode is terminated if the agent causes a safety violation through self-intersection, by touching the table top, or by exceeding set joint limits.
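As a rough sketch of the sparse-reward setup just described (function and variable names are our assumptions, as are the exact safety checks):

```python
import numpy as np

def step_reward(palm_xyz, target_xyz, safety_violation):
    """Sparse reward for the reaching task: +1 when the palm is within
    10 cm of the target; the episode terminates on a safety violation
    (self-intersection, table contact, or exceeding joint limits)."""
    if safety_violation:
        return 0.0, True                          # terminate the episode
    reached = np.linalg.norm(palm_xyz - target_xyz) < 0.10
    return (1.0 if reached else 0.0), False
```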
# 4.1 Training in simulation
The first column is trained in simulation using A3C, as previously mentioned, using a wide feedforward or recurrent network. Intuitively, it makes sense to use a larger capacity network for training in simulation, to reach maximum performance. We verified this intuition by comparing wide and narrow
Figure 4: Learning curves are shown for wide and narrow versions of the feedforward (left) and recurrent (right) models, which are trained with the MuJoCo simulator. The plots show mean and variance over 5 training runs with different seeds and hyperparameters. Stable performance is reached after approximately 50 million steps, which is more than one million episodes. While both the feedforward and the recurrent models learn the task, the recurrent network reaches a higher final mean score.
[Figure 5 plot: real-robot-trained progressive nets vs. baselines — reward over 0–60000 training steps for progressive, finetuned, and from-scratch columns.]
Figure 5: Real robot training: We compare progressive, finetuning, and "from scratch" learning curves. All experiments use a recurrent architecture, trained on the robot, from RGB inputs. We compare wide and narrow columns for both the progressive experiments and the randomly initialized baseline. For all results, a median-filtered solid curve is shown overlaid on the raw rewards (dotted line). The "from scratch" baseline was a randomly initialized narrow or wide column, both of which fail to get any reward during training.
network architectures, and found that the narrow network had slower learning and worse performance (see Figure 4). We also see that the LSTM model out-performs the feedforward model by an average of 3 points per episode. Even on this relatively simple task, full performance is only achieved after substantial interaction with the environment, on the order of 50 million steps - a number which is infeasible with a real robot.
The simulation training, compared with the real robot, is accelerated because of fast rendering, multithreaded learning algorithms, and the ability to continuously train without human involvement. We calculate that learning this task, which trains to convergence in 24 hours using a CPU compute cluster, would take 53 days on the real robot even with continuous training for 24 hours a day. Moreover, multiple experiments in parallel were used to explore hyperparameters in simulation; this sort of search would multiply the hypothetical real robot training time.
In simulation, we explore learning rates and entropy costs, which are sampled uniformly at random on a log scale. Learning rates are sampled between 5e-5 and 5e-3 and entropy costs between 1e-5 and 1e-2. The configuration with the best final performance from a grid of 30 is chosen as the first column. For real Jaco experiments, both learning rates and entropy costs were optimized separately using a simulated transfer experiment with a single-threaded agent (A2C).
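The log-scale sampling can be sketched as follows; the function name is ours and the snippet only illustrates the sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_uniform(low, high, size=None):
    """Sample uniformly at random on a log scale."""
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

learning_rates = log_uniform(5e-5, 5e-3, size=30)   # in [5e-5, 5e-3]
entropy_costs = log_uniform(1e-5, 1e-2, size=30)    # in [1e-5, 1e-2]
```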
# 4.2 Transfer to the robot
To train on the real Jaco, a flat target is manually repositioned within a 40cm by 30cm area on every third episode. Rewards are given automatically by tracking the colored target and giving reward based on the position of the Jaco gripper with respect to it. We train a baseline from scratch, a finetuned first column, and a progressive second column. Each experiment is run for approximately 60000 steps (about four hours). The baseline is trained by randomly initializing a narrow network and then training. We also try a randomly initialized wide network. As seen in Figure 5 (green curve), the randomly initialized column fails to learn and the agent gets zero reward throughout training. The progressive second column gets to 34 points, while the experiment with finetuning, which starts with the simulation-trained column and continues training on the robot, does not reach the same score as the progressive network.
Finetuning vs. progressive approaches. The progressive approach is clearly well-suited for continual learning scenarios, where it is important to mitigate forgetting of previous tasks while supporting transfer to new tasks, but the advantage is less intuitive for curricula of tasks where the focus is on
[Figure 6 panels: final rewards for finetuned vs. progressive networks under subtle/significant perspective changes and subtle/significant color changes, with trials sorted by decreasing final reward.]
Figure 6: To analyse the relative stability and performance of finetuning vs. progressive approaches, we add color or perspective changes to the environment in simulation and then train 300 networks with different random seeds, learning rates, and entropy costs. The progressive networks have significantly higher performance and less sensitivity to hyperparameter selection for all four experiments.
maximising transfer learning. To assess this empirically, we start with a simulator-trained first column, as described above, and then either finetune that column or add a narrow progressive column and retrain for the reacher task under a variety of conditions, including small or large color changes and small or large perspective changes. For each of these environment perturbations, we train 300 times with different seeds, learning rates, and entropy costs, which are the most sensitive hyperparameters. As shown in Figure 6, we find that progressive networks are more stable and reach higher final performance than finetuning.
# 4.3 Transfer to a dynamic robot task with proprioception
Unlike the finetuning paradigm, which is unable to accommodate changing network morphology or new input modalities, progressive nets offer a flexibility that is advantageous for transferring to new data sources while still leveraging previous knowledge. To demonstrate this, we train a second column on the reacher task but add proprioceptive features as an additional input, alongside the RGB images. The proprioceptive features are joint angles and velocities for each of the 9 joints of the arm and fingers, 18 in total, input to an MLP (a single linear layer plus ReLU) and joined with the outputs of the convolutional stack. Then, a third progressive column is added that only learns from the proprioceptive features, while the visual input is forwarded through the previous columns and the features are used via the lateral connections. A diagram of this architecture is shown in Figure 7 (left).
To evaluate this architecture, we train on a dynamic target task. By employing a small motorized pulley, the red target is smoothly translated across the table with random reversals in the motion, creating a tracking task that requires a different control policy while maintaining a similar visual presentation. Other aspects of the task, including rewards and episode lengths, were kept the same. If the second column is trained on this conveyor task, the learning is relatively slow, and full performance is reached after 50000 steps (about 4 hours). If the second column is instead trained on the static reacher task, and the third column is then trained on the conveyor task, we observe immediate transfer, and full performance is reached almost immediately (Figure 7, right). This demonstrates both the utility of progressive nets for curriculum tasks, as well as the capability of the architecture to immediately reuse previously learnt features.
# 5 Discussion
Transfer learning, the ability to accumulate and transfer knowledge to new domains, is a core characteristic of intelligent beings. Progressive neural networks offer a framework that can be used for continual learning of many tasks and which facilitates transfer learning, even across the divide which separates simulation and robot. We took full advantage of the flexibility and computational scaling afforded by simulation and compared many hyperparameters and architectures for a random start, random target control task with visual input, then successfully transferred the skill to an agent training on the real robot.
In order to fulfill the potential of deep reinforcement learning applied in real-world robotic domains, learning needs to become many times more efficient. One route to achieving this is via transfer learning from simulation-trained agents. We have described an initial set of experiments that prove that progressive nets can be used to achieve reliable, fast transfer for pixel-to-action RL policies.
[Figure 7 plot: real-robot-trained progressive nets on the conveyor task — rewards over 0–60000 steps for a progressive 2nd column (static task, then dynamic task) vs. a progressive 3rd column.]
Figure 7: Real robot training results are shown for the dynamic "conveyor" task. A three-column architecture is depicted (left), in which column one is trained from vision, column two from vision plus proprioception, and column three from proprioception only. Encoder 1 is a convolutional net, encoder 2 is a convolutional net with proprioceptive features added before the LSTM, and encoder 3 is an MLP. The learning curves (right) show the results of training on a conveyor (dynamic target) task. If the conveyor task is learned as the third column, rather than the second, then the learning is significantly faster.
# References
[1] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1071–1079. Curran Associates, 2014. URL http://papers.nips.cc/paper/5444-learning-neural-network-policies-with-guided-policy-search-under-unknown-dynamics.pdf.
[2] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
[3] N. Heess, G. Wayne, D. Silver, T. P. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2944–2952, 2015. URL http://papers.nips.cc/paper/5796-learning-continuous-control-policies-by-stochastic-value-gradients.
[4] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. Proceedings of the International Conference on Learning Representations (ICLR), 2016. URL http://arxiv.org/abs/1509.02971.
[5] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
[6] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Intâl Conf. on Machine Learning (ICML), 2016.
[7] S. Gu, T. P. Lillicrap, I. Sutskever, and S. Levine. Continuous deep q-learning with model-based acceleration. In ICML 2016, 2016.
[8] A. Rusu, N. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
[9] X. Peng, B. Sun, K. Ali, and K. Saenko. Learning deep object detectors from 3d models. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 1278–1286, 2015.

[10] H. Su, C. R. Qi, Y. Li, and L. J. Guibas. Render for CNN: viewpoint estimation in images using cnns trained with rendered 3d model views. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2686–2694, 2015.
[11] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 97–105, 2015.

[12] E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 4068–4076, 2015.
[13] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. CoRR, abs/1412.3474, 2014. URL http://arxiv.org/abs/1412.3474.
[14] E. Tzeng, C. Devin, J. Hoffman, C. Finn, X. Peng, S. Levine, K. Saenko, and T. Darrell. Towards adapting deep visuomotor representations from simulated to real environments. CoRR, abs/1511.07111, 2015. URL http://arxiv.org/abs/1511.07111.
[15] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
[16] H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and M. Marchand. Domain-adversarial neural networks. CoRR, abs/1412.4446, 2014. URL http://arxiv.org/abs/1412.4446.
[17] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan. Domain separation networks. In Advances in Neural Information Processing Systems, pages 343–351, 2016.
[18] S. Barrett, M. E. Taylor, and P. Stone. Transfer learning for reinforcement learning on a physical robot. In Ninth International Conference on Autonomous Agents and Multiagent Systems - Adaptive Learning Agents Workshop (AAMAS - ALA), 2010.
[19] S. James and E. Johns. 3D Simulation for Robot Arm Control with Deep Q-Learning. ArXiv e-prints, 2016.
[20] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 3357–3364. IEEE, 2017.

[21] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.

[22] S. Levine, N. Wagener, and P. Abbeel. Learning contact-rich manipulation skills with guided policy search. In IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26-30 May, 2015, pages 156–163, 2015.
[23] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In ICRA 2016, 2016.
[24] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, page 0278364917710318, 2016.
[25] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[26] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems IROS, 2012.
| {
"id": "1606.04671"
} |
1610.03017 | Fully Character-Level Neural Machine Translation without Explicit Segmentation | Most existing machine translation systems operate at the level of words,
relying on explicit segmentation to extract tokens. We introduce a neural
machine translation (NMT) model that maps a source character sequence to a
target character sequence without any segmentation. We employ a character-level
convolutional network with max-pooling at the encoder to reduce the length of
source representation, allowing the model to be trained at a speed comparable
to subword-level models while capturing local regularities. Our
character-to-character model outperforms a recently proposed baseline with a
subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable
performance on FI-EN and RU-EN. We then demonstrate that it is possible to
share a single character-level encoder across multiple languages by training a
model on a many-to-one translation task. In this multilingual setting, the
character-level encoder significantly outperforms the subword-level encoder on
all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality
of the multilingual character-level translation even surpasses the models
specifically trained on that language pair alone, both in terms of BLEU score
and human judgment. | http://arxiv.org/pdf/1610.03017 | Jason Lee, Kyunghyun Cho, Thomas Hofmann | cs.CL, cs.LG | Transactions of the Association for Computational Linguistics (TACL),
2017 | null | cs.CL | 20161010 | 20170613
# Fully Character-Level Neural Machine Translation without Explicit Segmentation
# Jason Lee* ETH Zürich jasonlee@inf.ethz.ch
Kyunghyun Cho New York University kyunghyun.cho@nyu.edu
# Thomas Hofmann ETH Zürich thomas.hofmann@inf.ethz.ch
# Abstract
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
# Introduction
Nearly all previous work in machine translation has been at the level of words. Aside from our intuitive understanding of word as a basic unit of meaning (Jackendoff, 1992), one reason behind this is that sequences are significantly longer when represented in characters, compounding the problem of data sparsity and modeling long-range dependencies. This has driven NMT research to be almost exclusively word-level (Bahdanau et al., 2015; Sutskever et al., 2015).

*The majority of this work was completed while the author was visiting New York University.
Despite their remarkable success, word-level NMT models suffer from several major weaknesses. For one, they are unable to model rare, out-of-vocabulary words, making them limited in translating languages with rich morphology such as Czech, Finnish and Turkish. If one uses a large vocabulary to combat this (Jean et al., 2015), the complexity of training and decoding grows linearly with respect to the target vocabulary size, leading to a vicious cycle. To address this, we present a fully character-level NMT model that maps a character sequence in a source language to a character sequence in a target language. We show that our model outperforms a baseline with a subword-level encoder on DE-EN and CS-EN, and achieves a comparable result on FI-EN and RU-EN. A purely character-level NMT model with a basic encoder was proposed as a baseline by Luong and Manning (2016), but training it was prohibitively slow. We were able to train our model at a reasonable speed by drastically reducing the length of source sentence representation using a stack of convolutional, pooling and highway layers. One advantage of character-level models is that they are better suited for multilingual translation than their word-level counterparts which require a separate word vocabulary for each language. We
verify this by training a single model to translate four languages (German, Czech, Finnish and Russian) to English. Our multilingual character-level model outperforms the subword-level baseline by a considerable margin in all four language pairs, strongly indicating that a character-level model is more flexible in assigning its capacity to different language pairs. Furthermore, we observe that our multilingual character-level translation even exceeds the quality of bilingual translation in three out of four language pairs, both in BLEU score metric and human evaluation. This demonstrates excellent parameter efficiency of character-level translation in a multilingual setting. We also showcase our model's ability to handle intra-sentence code-switching while performing language identification on the fly.
The contributions of this work are twofold: we empirically show that (1) we can train a character-to-character NMT model without any explicit segmentation; and (2) we can share a single character-level encoder across multiple languages to build a multilingual translation system without increasing the model size.
# 2 Background: Attentional Neural Machine Translation
Neural machine translation (NMT) is a recently proposed approach to machine translation that builds a single neural network which takes as an input a source sentence $X = (x_1, \ldots, x_{T_x})$ and generates its translation $Y = (y_1, \ldots, y_{T_y})$, where $x_t$ and $y_{t'}$ are source and target symbols (Bahdanau et al., 2015; Sutskever et al., 2015; Luong et al., 2015; Cho et al., 2014a). Attentional NMT models have three components: an encoder, a decoder and an attention mechanism.
Encoder Given a source sentence $X$, the encoder constructs a continuous representation that summarizes its meaning with a recurrent neural network (RNN). A bidirectional RNN is often implemented as proposed in (Bahdanau et al., 2015). A forward encoder reads the input sentence from left to right: $\overrightarrow{h}_t = \overrightarrow{f}_{\mathrm{enc}}\big(E_x(x_t), \overrightarrow{h}_{t-1}\big)$. Similarly, a backward encoder reads it from right to left: $\overleftarrow{h}_t = \overleftarrow{f}_{\mathrm{enc}}\big(E_x(x_t), \overleftarrow{h}_{t+1}\big)$, where $E_x$ is the source embedding lookup table, and $\overrightarrow{f}_{\mathrm{enc}}$ and $\overleftarrow{f}_{\mathrm{enc}}$ are recurrent activation functions such as long short-term memory units (LSTMs, (Hochreiter and Schmidhuber, 1997)) or gated recurrent units (GRUs, (Cho et al., 2014b)). The encoder constructs a set of continuous source sentence representations $C$ by concatenating the forward and backward hidden states at each timestep: $C = \{h_1, \ldots, h_{T_x}\}$, where $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$.
Attention First introduced in (Bahdanau et al., 2015), the attention mechanism lets the decoder attend more to different source symbols for each target symbol. More concretely, it computes the context vector $c_{t'}$ at each decoding time step $t'$ as a weighted sum of the source hidden states: $c_{t'} = \sum_{t=1}^{T_x} \alpha_{t,t'} h_t$. Similarly to (Chung et al., 2016; Firat et al., 2016a), each attentional weight $\alpha_{t,t'}$ represents how relevant the $t$-th source token $x_t$ is to the $t'$-th target token $y_{t'}$, and is computed as:
$$\alpha_{t,t'} = \frac{1}{Z} \exp\Big( \mathrm{score}\big(E_y(y_{t'-1}), s_{t'-1}, h_t\big) \Big), \qquad (1)$$

where $Z = \sum_{k=1}^{T_x} \exp\big(\mathrm{score}(E_y(y_{t'-1}), s_{t'-1}, h_k)\big)$ is the normalization constant. $\mathrm{score}()$ is a feedforward neural network with a single hidden layer that scores how well the source symbol $x_t$ and the target symbol $y_{t'}$ match. $E_y$ is the target embedding lookup table and $s_{t'}$ is the target hidden state at time $t'$.
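A small NumPy sketch of Eq. (1) follows; the one-hidden-layer tanh scorer is an assumption about the form of score(), and all names and dimensions are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(y_prev_emb, s_prev, H, W, v):
    """Attention of Eq. (1): score each source annotation h_t against the
    previous target embedding and decoder state, normalise, and form the
    context vector c_{t'} as the weighted sum of annotations."""
    q = np.concatenate([y_prev_emb, s_prev])
    scores = np.array([v @ np.tanh(W @ np.concatenate([q, h])) for h in H])
    alpha = softmax(scores)            # alpha_{t,t'} over all source positions
    context = alpha @ H                # c_{t'} = sum_t alpha_{t,t'} h_t
    return alpha, context

rng = np.random.default_rng(1)
H = rng.standard_normal((7, 6))        # 7 source annotations of dimension 6
W = rng.standard_normal((16, 12))      # hidden size 16; query (6) + h (6) = 12
v = rng.standard_normal(16)
alpha, c = attend(rng.standard_normal(3), rng.standard_normal(3), H, W, v)
```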
Decoder Given a source context vector $c_{t'}$, the decoder computes its hidden state at time $t'$ as: $s_{t'} = f_{\mathrm{dec}}\big(E_y(y_{t'-1}), s_{t'-1}, c_{t'}\big)$. Then, a parametric function $\mathrm{out}_k()$ returns the conditional probability of the next target symbol being $k$:

$$p(y_{t'} = k \mid y_{<t'}, X) = \frac{1}{Z} \exp\Big( \mathrm{out}_k\big(E_y(y_{t'-1}), s_{t'}, c_{t'}\big) \Big), \qquad (2)$$

where $Z$ is again the normalization constant: $Z = \sum_j \exp\big(\mathrm{out}_j(E_y(y_{t'-1}), s_{t'}, c_{t'})\big)$.
Training The entire model can be trained end-to-end by minimizing the negative conditional log-likelihood, which is defined as:

$$\mathcal{L} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{t'=1}^{T_y^{(n)}} \log p\big(y_{t'} = y_{t'}^{(n)} \mid y_{<t'}^{(n)}, X^{(n)}\big),$$

where $N$ is the number of sentence pairs, and $X^{(n)}$ and $y_{t'}^{(n)}$ are the source sentence and the $t'$-th target symbol in the $n$-th pair, respectively.
# 3 Fully Character-Level Translation
# 3.1 Why Character-Level?
The benefits of character-level translation over word-level translation are well known. Chung et al. (2016) present three main arguments: character level models (1) do not suffer from out-of-vocabulary issues, (2) are able to model different, rare morphological variants of a word, and (3) do not require segmentation. Particularly, text segmentation is highly non-trivial for many languages and problematic even for English as word tokenizers are either manually designed or trained on a corpus using an objective function that is unrelated to the translation task at hand, which makes the overall system sub-optimal. Here we present two additional arguments for character-level translation. First, a character-level translation system can easily be applied to a multilingual translation setting. Between European languages where the majority of alphabets overlaps, for instance, a character-level model may easily identify morphemes that are shared across different languages. A word-level model, however, will need a separate word vocabulary for each language, allowing no cross-lingual parameter sharing.
Also, by not segmenting source sentences into words, we no longer inject our knowledge of words and word boundaries into the system; instead, we encourage the model to discover an internal structure of a sentence by itself and learn how a sequence of symbols can be mapped to a continuous meaning representation.
# 3.2 Related Work
To address these limitations associated with word-level translation, a recent line of research has investigated using sub-word information.
Costa-Jussà and Fonollosa (2016) replaced the word-lookup table with convolutional and highway layers on top of character embeddings, while still segmenting source sentences into words. Target sentences were also segmented into words, and prediction was made at word-level.

Similarly, Ling et al. (2015) employed a bidirectional LSTM to compose character embeddings into word embeddings. At the target side, another LSTM takes the hidden state of the decoder and generates the target word, character by character. While this system is completely open-vocabulary, it also requires offline segmentation. Also, character-to-word and word-to-character LSTMs significantly slow down training.

Most recently, Luong and Manning (2016) proposed a hybrid scheme that consults character-level information whenever the model encounters an out-of-vocabulary word. As a baseline, they also implemented a purely character-level NMT model with 4 layers of unidirectional LSTMs with 512 cells, with attention over each character. Despite being extremely slow (approximately 3 months to train), the character-level model gave comparable performance to the word-level baseline. This shows the possibility of fully character-level translation.

Having a word-level decoder restricts the model to only being able to generate previously seen words. Sennrich et al. (2015) introduced a subword-level NMT model that is capable of open-vocabulary translation using subword-level segmentation based on the byte pair encoding (BPE) algorithm. Starting from a character vocabulary, the algorithm identifies frequent character n-grams in the training data and iteratively adds them to the vocabulary, ultimately giving a subword vocabulary which consists of words, subwords and characters. Once the segmentation rules have been learned, their model performs subword-to-subword translation (bpe2bpe) in the same way as word-to-word translation.
Perhaps the work that is closest to our end goal is (Chung et al., 2016), which used a subword-level encoder from (Sennrich et al., 2015) and a fully character-level decoder (bpe2char). Their results show that character-level decoding performs better than subword-level decoding. Motivated by this work, we aim for fully character-level translation at both sides (char2char).
Outside NMT, our work is based on a few existing approaches that applied convolutional networks to text, most notably in text classification (Zhang et al., 2015; Xiao and Cho, 2016). Also, we drew inspiration for our multilingual models from previous work that showed the possibility of training a single recurrent model for multiple languages in domains other than translation (Tsvetkov et al., 2016; Gillick et al., 2015).
# 3.3 Challenges
Sentences are on average 6 (DE, CS and RU) to 8 (FI) times longer when represented in characters. This poses three major challenges to achieving fully character-level translation.
(1) Training/decoding latency For the decoder, although the sequence to be generated is much longer, each character-level softmax operation costs considerably less compared to a word- or subword-level softmax. Chung et al. (2016) report that character-level decoding is only 14% slower than subword-level decoding.

On the other hand, computational complexity of the attention mechanism grows quadratically with respect to the sentence length, as it needs to attend to every source token for every target token. This makes a naive character-level approach, such as in (Luong and Manning, 2016), computationally prohibitive. Consequently, reducing the length of the source sequence is key to ensuring reasonable speed in both training and decoding.

(2) Mapping character sequence to continuous representation The arbitrary relationship between the orthography of a word and its meaning is a well-known problem in linguistics (de Saussure, 1916). Building a character-level encoder is arguably a more difficult problem, as the encoder needs to learn a highly non-linear function from a long sequence of character symbols to a meaning representation.

(3) Long range dependencies in characters A character-level encoder needs to model dependencies over longer timespans than a word-level encoder does.
# 4 Fully Character-Level NMT
# 4.1 Encoder
We design an encoder that addresses all the challenges discussed above by using convolutional and pooling layers aggressively to both (1) drastically shorten the input sentence and (2) efficiently capture local regularities. Inspired by the character-level language model from (Kim et al., 2015), our encoder first reduces the source sentence length with a series of convolutional, pooling and highway layers. The shorter representation, instead of the full character sequence, is passed through a bidirectional GRU to (3) help it resolve long term dependencies. We illustrate the proposed encoder in Figure 1 and discuss each layer in detail below.
Embedding We map the sequence of source characters $(x_1, \ldots, x_{T_x})$ to a sequence of character embeddings of dimensionality $d_c$: $X = (C(x_1), \ldots, C(x_{T_x})) \in \mathbb{R}^{d_c \times T_x}$, where $T_x$ is the number of source characters and $C$ is the character embedding lookup table: $C \in \mathbb{R}^{d_c \times |C|}$.
Convolution One-dimensional convolution operation is then used along consecutive character embeddings. Assuming we have a single filter $f \in \mathbb{R}^{d_c \times w}$ of width $w$, we first apply padding to the beginning and the end of $X$, such that the padded sentence $X' \in \mathbb{R}^{d_c \times (T_x + w - 1)}$ is $w - 1$ symbols longer. We then apply narrow convolution between $X'$ and $f$ such that the $k$-th element of the output $Y_k$ is given as:

$$Y_k = (X' * f)_k = \sum_{i,j} \Big( X'_{[:,\, k-w+1:k]} \otimes f \Big)_{ij}, \qquad (3)$$

where $\otimes$ denotes elementwise matrix multiplication and $*$ is the convolution operation. $X'_{[:,\, k-w+1:k]}$ is the sliced subset of $X'$ that contains all the rows but only $w$ adjacent columns. The padding scheme employed above, commonly known as half convolution, ensures the length of the output is identical to the input's: $Y \in \mathbb{R}^{1 \times T_x}$.
We just illustrated how a single convolutional filter of fixed width might be applied to a sentence. In order to extract informative character patterns of different lengths, we employ a set of filters of varying widths. More concretely, we use a filter
[Figure 1 schematic, bottom to top: character embeddings → single-layer convolution + ReLU → max pooling with stride 5 → segment embeddings → four-layer highway network → single-layer bidirectional GRU.]
Figure 1: Encoder architecture schematics. Underscore denotes padding. A dotted vertical line delimits each segment. The stride of pooling s is 5 in the diagram.
bank $F = \{f_1, \ldots, f_m\}$ where $f_i \in \mathbb{R}^{d_c \times i \times n_i}$ is a collection of $n_i$ filters of width $i$. Our model uses $m = 8$, hence extracts character n-grams up to 8 characters long. Outputs from all the filters are stacked upon each other, giving a single representation $Y \in \mathbb{R}^{N \times T_x}$, where the dimensionality of each column is given by the total number of filters $N = \sum_{i=1}^{m} n_i$. Finally, rectified linear activation (ReLU) is applied elementwise to this representation.
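A possible PyTorch rendering of this filter bank is sketched below, assuming the filter counts of Table 1 and "half" padding; it is an illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class CharConvBank(nn.Module):
    """Multi-width character convolutions with 'half' padding so every
    filter output has length Tx; filter counts follow Table 1."""

    def __init__(self, d_c=128, n_filters=(200, 200, 250, 250, 300, 300, 300, 300)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(d_c, n, kernel_size=w, padding=w // 2)
            for w, n in enumerate(n_filters, start=1)   # widths 1..8
        )

    def forward(self, x):                      # x: (batch, d_c, Tx)
        Tx = x.size(-1)
        # Even-width filters overshoot by one position; trim back to Tx.
        outs = [torch.relu(c(x))[..., :Tx] for c in self.convs]
        return torch.cat(outs, dim=1)          # (batch, sum(n_filters), Tx)
```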
Max pooling with stride The output from the convolutional layer is first split into segments of width $s$, and max-pooling over time is applied to each segment with no overlap. This procedure selects the most salient features to give a segment embedding. Each segment embedding is a summary of meaningful character n-grams occurring in a particular (overlapping) subsequence in the source sentence. Note that the rightmost segment (above "on") in Figure 1 may capture "son" (the filter in green) although "s" occurs in the previous segment. In other words, our segments are overlapping as opposed to in word- or subword-level models with hard segmentation.

Segments act as our internal linguistic unit from this layer and above: the attention mechanism, for instance, attends to each source segment instead of source character. This shortens the source representation $s$-fold: $Y' \in \mathbb{R}^{N \times (T_x/s)}$. Empirically, we found using smaller $s$ leads to better performance at increased training time. We chose $s = 5$ in our experiments as it gives a reasonable balance between the two.

Highway network A sequence of segment embeddings from the max pooling layer is fed into a highway network (Srivastava et al., 2015). Highway networks are shown to significantly improve the quality of a character-level language model when used with convolutional layers (Kim et al., 2015). A highway network transforms input $x$ with a gating mechanism that adaptively regulates information flow:
$$y = g \odot \mathrm{ReLU}(W_1 x + b_1) + (1 - g) \odot x,$$

where $g = \sigma(W_2 x + b_2)$. We apply this to each segment embedding individually.
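A sketch of one such highway layer (PyTorch; layer names are ours):

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """y = g * ReLU(W1 x + b1) + (1 - g) * x, with g = sigmoid(W2 x + b2);
    applied to each segment embedding individually."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)     # W1, b1
        self.gate = nn.Linear(dim, dim)    # W2, b2

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))
        return g * torch.relu(self.lin(x)) + (1 - g) * x
```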
Recurrent layer Finally, the output from the highway layer is given to a bidirectional GRU from §2, using each segment embedding as input.
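Putting the pieces together, here is a sketch of the full encoder pipeline, reusing the CharConvBank and Highway sketches above and the sizes from Table 1; all names and sizes are assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Embedding -> conv bank -> stride-5 max-pool -> 4 highway layers ->
    bidirectional GRU, following the pipeline described above."""

    def __init__(self, vocab=300, d_c=128, stride=5, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_c)
        self.convs = CharConvBank(d_c)
        n_seg = 2100                                   # total filters N = sum(n_i)
        self.pool = nn.MaxPool1d(kernel_size=stride, stride=stride)
        self.highway = nn.Sequential(*[Highway(n_seg) for _ in range(4)])
        self.gru = nn.GRU(n_seg, hidden, batch_first=True, bidirectional=True)

    def forward(self, chars):                          # chars: (batch, Tx) int64
        x = self.embed(chars).transpose(1, 2)          # (batch, d_c, Tx)
        y = self.pool(self.convs(x))                   # (batch, N, Tx // 5)
        y = self.highway(y.transpose(1, 2))            # (batch, Tx // 5, N)
        states, _ = self.gru(y)                        # (batch, Tx // 5, 2 * hidden)
        return states
```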
Subword-level encoder Unlike a subword-level encoder, our model does not commit to a specific choice of segmentation; it is instead trained to consider every possible character pattern and extract only the most meaningful ones. Therefore, the definition of segmentation in our model is dynamic unlike subword-level encoders. During training, the model finds the most salient character patterns in a sentence via max-pooling, and the character
                bpe2char            char2char
Vocab size      24,440              300
Source emb.     512                 128
Target emb.     512                 512
Conv. filters   -                   200-200-250-250-300-300-300-300
Pool stride     -                   5
Highway         -                   4 layers
Encoder         1-layer 512 GRUs    1-layer 512 GRUs
Decoder         2-layer 1024 GRUs   2-layer 1024 GRUs
Table 1: Bilingual model architectures. The char2char model uses 200 filters of width 1, 200 filters of width 2, · · · and 300 filters of width 8.
sequences extracted by the model change over the course of training. This is in contrast to how BPE segmentation rules are learned: the segmentation is learned and fixed before training begins.
# 4.2 Attention and Decoder
Similarly to the attention model in (Chung et al., 2016; Firat et al., 2016a), a single-layer feedforward network computes the attention score of next target character to be generated with every source segment representation. A standard two-layer character-level decoder then takes the source context vector from the attention mechanism and predicts each target character. This decoder was described as base decoder by Chung et al. (2016).
# 5 Experiment Settings
# 5.1 Task and Models
We evaluate the proposed character-to-character (char2char) translation model against subword-level baselines (bpe2bpe and bpe2char) on the WMT'15 DE→EN, CS→EN, FI→EN and RU→EN translation tasks.1 We do not consider word-level models, as it has already been shown that subword-level models outperform them by mitigating issues inherent to closed-vocabulary translation (Sennrich et al., 2015; Sennrich et al., 2016). Indeed, subword-level NMT models have been the de-facto state-of-the-art and are now used in a very large-scale industry NMT system to serve millions of users per day (Wu et al., 2016).
1http://www.statmt.org/wmt15/translation-task.html
We experiment in two different scenarios: 1) a bilingual setting where we train a model on data from a single language pair; and 2) a multilingual setting where the task is many-to-one translation: we train a single model on data from all four language pairs. Hence, our baselines and models are:

(a) bilingual bpe2bpe: from (Firat et al., 2016a).
(b) bilingual bpe2char: from (Chung et al., 2016).
(c) bilingual char2char
(d) multilingual bpe2char
(e) multilingual char2char
We train all the models ourselves other than (a), for which we report the results from (Firat et al., 2016a). We detail the configuration of our models in Table 1 and Table 2.
# 5.2 Datasets and Preprocessing
We use all available parallel data on the four language pairs from WMT'15: DE-EN, CS-EN, FI-EN and RU-EN.
For the bpe2char baselines, we only use sentence pairs where the source is no longer than 50 subword symbols. For our char2char models, we only use pairs where the source sentence is no longer than 450 characters. For all the language pairs apart from FI-EN, we use newstest-2013 as a development set and newstest-2014 and newstest-2015 as test sets. For FI-EN, we use newsdev-2015 and newstest-2015 as development and test sets respectively. We tokenize2 each corpus using the script from Moses.3
When training bilingual bpe2char models, we extract 20,000 BPE operations from each of the source and target corpus using a script from (Sennrich et al., 2015). This gives a source BPE vocabulary of size 20k–24k for each language.
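For illustration, here is a compact sketch of the BPE merge-learning loop (not the reference implementation of Sennrich et al.): starting from characters, it repeatedly merges the most frequent adjacent symbol pair.

```python
from collections import Counter

def learn_bpe(words, n_merges):
    """Greedy BPE: repeatedly merge the most frequent adjacent symbol pair."""
    vocab = {tuple(w) + ('</w>',): c for w, c in Counter(words).items()}
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for sym, c in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += c
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for sym, c in vocab.items():          # apply the merge everywhere
            out, i = [], 0
            while i < len(sym):
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1]); i += 2
                else:
                    out.append(sym[i]); i += 1
            merged[tuple(out)] = c
        vocab = merged
    return merges

print(learn_bpe("low low low lower lowest newer newer".split(), 5))
```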
# 5.3 Training Details
Each model is trained using stochastic gradient descent and Adam (Kingma and Ba, 2014) with learning rate 0.0001 and minibatch size 64. Training continues until the BLEU score on the validation set
2This is unnecessary for char2char models, yet was carried out for comparison.
3https://github.com/moses-smt/mosesdecoder
                bpe2char            char2char
Vocab size      54,544              400
Source emb.     512                 128
Target emb.     512                 512
Conv. filters   -                   200-250-300-300-400-400-400-400
Pool stride     -                   5
Highway         -                   4 layers
Encoder         1-layer 512 GRUs    1-layer 512 GRUs
Decoder         2-layer 1024 GRUs   2-layer 1024 GRUs
Table 2: Multilingual model architectures.
stops improving. The norm of the gradient is clipped with a threshold of 1 (Pascanu et al., 2013). All weights are initialized from a uniform distribution $[-0.01, 0.01]$.
Each model is trained on a single pre-2016 GTX Titan X GPU with 12GB RAM.
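These training details can be sketched as follows (PyTorch, assuming a `model` object; the helper names are ours):

```python
import torch

def make_optimiser(model):
    """Adam with learning rate 1e-4 and uniform init in [-0.01, 0.01]."""
    for p in model.parameters():
        torch.nn.init.uniform_(p, -0.01, 0.01)
    return torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(model, loss, optimiser):
    """One update with gradient-norm clipping at threshold 1."""
    optimiser.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimiser.step()
```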
# 5.4 Decoding Details
As in (Chung et al., 2016), a two-layer unidirectional character-level decoder with 1024 GRU units is used for all our experiments. For decoding, we use beam search with length-normalization to penalize shorter hypotheses. The beam width is 20 for all models.
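A minimal sketch of length-normalised hypothesis scoring; the exponent `alpha` is our assumption (alpha = 1 is plain per-symbol averaging):

```python
def length_normalised_score(log_prob, length, alpha=1.0):
    """Rank beam hypotheses by log-probability divided by length**alpha,
    so shorter candidates are not unfairly favoured."""
    return log_prob / (length ** alpha)

# Example: a 12-character hypothesis with total log-prob -9.6 scores -0.8.
print(length_normalised_score(-9.6, 12))
```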
# 5.5 Training Multilingual Models
Task description We train a model on a many-to-one translation task to translate a sentence in any of the four languages (German, Czech, Finnish and Russian) to English. We do not provide a language identifier to the encoder, but merely the sentence itself, encouraging the model to perform language identification on the fly. In addition, by not providing the language identifier, we expect the model to handle intra-sentence code-switching seamlessly.
Model architecture The multilingual char2char model uses slightly more convolutional filters than the bilingual char2char model, namely (200-250-300-300-400-400-400-400). Otherwise, the architecture remains the same as shown in Table 1. By not changing the size of the encoder and the decoder, we fix the capacity of the core translation module, and only allow the multilingual model to detect more character patterns.

Similarly, the multilingual bpe2char model has the same encoder and decoder as the bilingual bpe2char model, but a larger vocabulary. We learn 50,000 multilingual BPE operations on the multilingual corpus, resulting in 54,544 subwords. See Table 2 for the exact configuration of our multilingual models.
Data scheduling For the multilingual models, an appropriate scheduling of data from different languages is crucial to avoid overfitting to one language too soon. Following (Firat et al., 2016a; Firat et al., 2016b), each minibatch is balanced, in that the proportion of each language pair in a single minibatch corresponds to that of the full corpus. With this minibatch scheme, roughly the same number of updates is required to make one full pass over the entire training corpus of each language pair. Minibatches from all language pairs are combined and presented to the model as a single minibatch. See Table 3 for the minibatch size for each language pair.
| | DE-EN | CS-EN | FI-EN | RU-EN |
|---|---|---|---|---|
| corpus size | 4.5m | 12.1m | 1.9m | 2.3m |
| minibatch size | 14 | 37 | 6 | 7 |

Table 3: The minibatch size of each language (second row) is proportionate to the number of sentence pairs in each corpus (first row).
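The quotas in Table 3 can be recovered by proportional rounding; the helper below (with the corpus sizes taken from Table 3, in sentence pairs) reproduces the listed minibatch sizes:

```python
def language_quotas(corpus_sizes, batch_size=64):
    """Split a minibatch across language pairs in proportion to corpus size.
    Rounding is naive; the exact scheme used in the paper may differ."""
    total = sum(corpus_sizes.values())
    return {pair: max(1, round(batch_size * size / total))
            for pair, size in corpus_sizes.items()}

print(language_quotas({'DE-EN': 4.5e6, 'CS-EN': 12.1e6,
                       'FI-EN': 1.9e6, 'RU-EN': 2.3e6}))
# -> {'DE-EN': 14, 'CS-EN': 37, 'FI-EN': 6, 'RU-EN': 7}, matching Table 3
```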
Treatment of Cyrillic To facilitate cross-lingual parameter sharing, we convert every Cyrillic character in the Russian source corpus to the Latin alphabet according to ISO-9. Table 4 shows an example of how this conversion may help the multilingual models identify lexemes that are shared across multiple languages.
| | school | schools |
|---|---|---|
| CS | škola | školy |
| RU | школа | школы |
| RU (ISO-9) | škola | školy |

Table 4: Czech and Russian words for school and schools, alongside the conversion of the Russian characters into the Latin alphabet.
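Such a conversion amounts to a character-by-character mapping. The sketch below uses only a small excerpt of the ISO 9 table (the full standard covers the entire Cyrillic alphabet) and reproduces the Table 4 example:

```python
# Abbreviated ISO 9 transliteration table (illustrative excerpt only).
ISO9 = {'ш': 'š', 'к': 'k', 'о': 'o', 'л': 'l',
        'а': 'a', 'ы': 'y', 'ч': 'č', 'ж': 'ž'}

def to_latin(text):
    """Map each Cyrillic character to its Latin counterpart, if known."""
    return ''.join(ISO9.get(ch, ch) for ch in text)

assert to_latin('школа') == 'škola'   # now identical to the Czech 'škola'
assert to_latin('школы') == 'školy'
```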
Multilingual BPE For the multilingual bpe2char model, multilingual BPE segmentation rules are extracted from a large dataset containing the training source corpora of all the language pairs. To ensure the BPE rules are not biased towards one language, larger datasets such as the Czech and German corpora are trimmed such that every corpus contains an approximately equal number of characters.
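A minimal sketch of this balancing step, assuming the corpora are trimmed by simply keeping a prefix within a shared character budget (the exact trimming procedure is not specified above):

```python
def trim_to_char_budget(sentences, budget):
    """Keep a prefix of the corpus whose total character count stays within
    `budget`, so every language contributes a comparable amount of text to
    multilingual BPE learning."""
    kept, used = [], 0
    for s in sentences:
        if used + len(s) > budget:
            break
        kept.append(s)
        used += len(s)
    return kept
```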
| Pair | Setting | Src | Trg | Dev | Test1 | Test2 |
|---|---|---|---|---|---|---|
| DE-EN | (a)† bi | bpe | bpe | 24.13 | – | 24.00 |
| DE-EN | (b) bi | bpe | char | 25.64 | 24.59 | 25.27 |
| DE-EN | (c) bi | char | char | **26.30** | **25.77** | **25.83** |
| DE-EN | (d) multi | bpe | char | 24.92 | 24.54 | 25.23 |
| DE-EN | (e) multi | char | char | 25.67 | 25.13 | 25.79 |
| CS-EN | (f)† bi | bpe | bpe | 21.24 | – | 20.32 |
| CS-EN | (g) bi | bpe | char | 22.95 | 23.78 | 22.40 |
| CS-EN | (h) bi | char | char | 23.38 | 24.08 | 22.46 |
| CS-EN | (i) multi | bpe | char | 23.27 | 24.27 | 22.42 |
| CS-EN | (j) multi | char | char | **24.09** | **25.01** | **23.24** |
| FI-EN | (k)† bi | bpe | bpe | 13.15 | – | 12.24 |
| FI-EN | (l) bi | bpe | char | 14.54 | – | 13.98 |
| FI-EN | (m) bi | char | char | 14.18 | – | 13.10 |
| FI-EN | (n) multi | bpe | char | 14.70 | – | 14.40 |
| FI-EN | (o) multi | char | char | **15.96** | – | **15.74** |
| RU-EN | (p)† bi | bpe | bpe | 21.04 | – | 22.44 |
| RU-EN | (q) bi | bpe | char | 21.68 | 26.21 | 22.83 |
| RU-EN | (r) bi | char | char | 21.75 | **26.80** | 22.73 |
| RU-EN | (s) multi | bpe | char | 21.75 | 26.31 | 22.81 |
| RU-EN | (t) multi | char | char | **22.20** | 26.33 | **23.33** |

Table 5: BLEU scores of five different models on four language pairs. For each test or development set, the best performing model is shown in bold. (†) Results are taken from (Firat et al., 2016a).
# 6 Quantitative Analysis
# 6.1 Evaluation with BLEU Score
In this section, we first establish our main hypotheses for introducing character-level and multilingual models, and investigate whether our observations support or disagree with them. From our empirical results, we want to verify: (1) if fully character-level translation outperforms subword-level translation, (2) in which setting and to what extent multilingual translation is beneficial, and (3) if multilingual, character-level translation achieves superior performance to the other models. We outline our results with respect to each hypothesis below.
(1) Character- vs. subword-level In a bilingual setting, the char2char model outperforms both subword-level baselines on DE-EN (Table 5 (a-c)) and CS-EN (Table 5 (f-h)). On the other two language pairs, it exceeds the bpe2bpe model and achieves similar performance to the bpe2char baseline (Table 5 (k-m) and (p-r)). We conclude that
the proposed character-level model is comparable to or better than both subword-level baselines.
Meanwhile, in a multilingual setting, the character-level encoder consistently surpasses the subword-level encoder in all the language pairs (Table 5 (d-e), (i-j), (n-o) and (s-t)). From this, we conclude that translating at the level of characters allows the model to discover shared constructs between languages more effectively. This also demonstrates that the character-level model is more flexible in assigning model capacity to different language pairs.
(2) Multilingual vs. bilingual At the level of characters, we note that multilingual translation is indeed strongly beneficial. On the test sets, the multilingual character-level model outperforms the single-pair character-level model by 2.64 BLEU in FI-EN (Table 5 (m, o)) and 0.78 BLEU in CS-EN (Table 5 (h, j)), while achieving comparable results on DE-EN and RU-EN.
At the level of subwords, on the other hand, we do not observe the same degree of performance benefit from multilingual translation. Also, the multilingual bpe2char model requires many more updates to reach the performance of the bilingual bpe2char model (see Figure 2).
| Pair | Setting | Src | Trg | Adequacy Raw (%) | Adequacy Stnd. (σ) | Fluency Raw (%) | Fluency Stnd. (σ) |
|---|---|---|---|---|---|---|---|
| DE-EN | (a) bi | bpe | char | 65.47 | −0.0536 | 68.64 | 0.0052 |
| DE-EN | (b) bi | char | char | **68.11** | **0.0509** | **68.80** | **0.0468** |
| DE-EN | (c) multi | char | char | **67.80** | **0.0281** | **68.92** | **0.0282** |
| CS-EN | (d) bi | bpe | char | 62.76 | 0.0361 | 61.62 | −0.0285 |
| CS-EN | (e) bi | char | char | 60.78 | −0.0154 | 63.37 | 0.0410 |
| CS-EN | (f) multi | char | char | **63.03** | **0.0415** | **65.08** | **0.1047** |
| FI-EN | (g) bi | bpe | char | 47.03 | −0.1326 | 59.33 | −0.0329 |
| FI-EN | (h) bi | char | char | 50.17 | −0.0650 | 59.97 | −0.0216 |
| FI-EN | (i) multi | char | char | **50.95** | **−0.0110** | **63.26** | **0.0969** |
| RU-EN | (j) bi | bpe | char | 61.26 | −0.1062 | 57.74 | −0.0592 |
| RU-EN | (k) bi | char | char | 64.06 | 0.0105 | 59.85 | 0.0168 |
| RU-EN | (l) multi | char | char | **64.77** | **0.0116** | **63.32** | **0.1748** |

Table 6: Human evaluation results for adequacy and fluency. We present both the averaged raw scores (Raw) and the averaged standardized scores (Stnd.). Standardized adequacy is used to rank the systems and standardized fluency is used to break ties. A positive standardized score should be interpreted as the number of standard deviations above this particular worker's mean score that this system scored on average. For each language pair, we boldface the best performing model with statistical significance. When there is a tie, we boldface both systems.
This suggests that learning useful subword segmentation across languages is difficult.
(3) Multilingual char2char vs. others The multilingual char2char model is the best performer in CS-EN, FI-EN and RU-EN (Table 5 (j, o, t)), and is the runner-up in DE-EN (Table 5 (e)). The fact that the multilingual char2char model outperforms the single-pair models goes to show the parameter efficiency of character-level translation: instead of training N separate models for N language pairs, it is possible to get better performance with a single multilingual character-level model.
# 6.2 Human Evaluation

It is well known that automatic evaluation metrics such as BLEU encourage reference-like translations and do not fully capture true translation quality (Callison-Burch, 2009; Graham et al., 2015). Therefore, we also carry out a recently proposed evaluation from (Graham et al., 2016) where we have human assessors rate both (1) adequacy and (2) fluency of each system translation on a scale from 0 to 100 via Amazon Mechanical Turk. Adequacy is the degree to which assessors agree that the system translation expresses the meaning of the reference translation. Fluency is evaluated using the system translation alone, without any reference translation.

Approximately 1k turkers assessed a single test set (3k sentences in newstest-2014) for each system and language pair. Each turker conducted a minimum of 100 assessments for quality control, and the set of scores generated by each turker was standardized to remove any bias in the individual's scoring strategy.

We consider three models (bilingual bpe2char, bilingual char2char and multilingual char2char) for the human evaluation. We leave out the multilingual bpe2char model to minimize the number of similar systems and to improve the interpretability of the evaluation overall.
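The per-worker standardization can be expressed compactly; the sketch below assumes ratings are grouped by worker and z-normalizes each worker's scores before they are averaged per system:

```python
import statistics

def standardize_by_worker(scores):
    """scores: {worker_id: [(system, raw_score), ...]} with raw 0-100 ratings.
    Returns (system, z_score) pairs relative to each worker's own mean and
    standard deviation, removing individual scoring bias before averaging."""
    standardized = []
    for worker, ratings in scores.items():
        raw = [r for _, r in ratings]
        mu = statistics.mean(raw)
        sigma = statistics.pstdev(raw) or 1.0   # guard against zero variance
        standardized.extend((system, (r - mu) / sigma) for system, r in ratings)
    return standardized
```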
For DE-EN, we observe that the multilingual char2char and bilingual char2char models are tied with respect to both adequacy and fluency (Table 6 (b-c)). For CS-EN, the multilingual char2char and bilingual bpe2char models are tied for adequacy. However, the multilingual char2char model yields significantly better fluency (Table 6 (d, f)). For FI-EN and RU-EN, the multilingual char2char model is tied with the bilingual char2char model with respect to adequacy, but significantly outperforms all other models in fluency (Table 6 (g-i, j-l)). Overall, the improvement in translation quality yielded by the multilingual character-level model mainly comes from fluency. We conjecture that because the English decoder of the multilingual model is tuned on all the training sentence pairs, it becomes a better language model than a bilingual model's decoder. We leave it for future work to confirm if this is indeed the case.
(a) Spelling mistakes

DE ori: Warum sollten wir nicht Freunde sein ?
DE src: Warum solltne wir nich Freunde sei ?
EN ref: Why should we not be friends ?
bpe2char: Why are we to be friends ?
char2char: Why should not we be friends ?

(b) Rare words

DE src: Siebentausendzweihundertvierundfünfzig .
EN ref: Seven thousand two hundred fifty four .
bpe2char: Fifty-five Decline of the Seventy .
char2char: Seven thousand hundred thousand fifties .

(c) Morphology

DE src: Die Zufahrtsstraßen wurden gesperrt , wodurch sich laut CNN lange Rückstaus bildeten .
EN ref: The access roads were blocked off , which , according to CNN , caused long tailbacks .
bpe2char: The access roads were locked , which , according to CNN , was long back .
char2char: The access roads were blocked , which looked long backwards , according to CNN .

(d) Nonce words

DE src: Der Test ist nun über , aber ich habe keine gute Note . Es ist wie eine Verschlimmbesserung .
EN ref: The test is now over , but i don't have any good grade . it is like a worsened improvement .
bpe2char: The test is now over , but i do not have a good note .
char2char: The test is now , but i have no good note , it is like a worsening improvement .

(e) Multilingual

src: Bei der Metropolitního výboru pro dopravu für das Gebiet der San Francisco Bay erklärten Beamte , der Kongress könne das Problem банкротство доверительного Фонда строительства шоссейных дорог einfach durch Erhöhung der Kraftstoffsteuer lösen .
EN ref: At the Metropolitan Transportation Commission in the San Francisco Bay Area , officials say Congress could very simply deal with the bankrupt Highway Trust Fund by raising gas taxes .
bpe2char: During the Metropolitan Committee on Transport for San Francisco Bay , officials declared that Congress could solve the problem of bankruptcy by increasing the fuel tax bankrupt .
char2char: At the Metropolitan Committee on Transport for the territory of San Francisco Bay , officials explained that the Congress could simply solve the problem of the bankruptcy of the Road Construction Fund by increasing the fuel tax .

Table 7: Sample translations. For each example, we show the source sentence as src, the human translation as ref, and the translations from the subword-level baseline and our character-level model as bpe2char and char2char, respectively. For (a), the original, uncorrupted source sentence is also shown (ori). The source sentence in (e) contains words in German, Czech and Russian. The translations in (a-d) are from the bilingual models, whereas those in (e) are from the multilingual models.
# 7 Qualitative Analysis

In Table 7, we demonstrate our character-level model's robustness in four translation scenarios that conventional NMT systems are known to suffer in. We also showcase our model's ability to seamlessly handle intra-sentence code-switching, i.e. mixed utterances from two or more languages. We compare sample translations from the character-level model with those from the subword-level model, which already sidesteps some of the issues associated with word-level translation.
With real-world text containing typos and spelling mistakes, the quality of word-based translation would severely drop, as every non-canonical form of a word cannot be represented. On the other hand, a character-level model has a much better chance of recovering the original word or sentence. Indeed, our char2char model is robust against a few spelling mistakes (Table 7 (a)).

Given a long, rare word such as "Siebentausendzweihundertvierundfünfzig" (seven thousand two hundred fifty four) in Table 7 (b), the subword-level model segments "Siebentausend" as (Sieb, ent, aus, end), which results in an inaccurate translation. The character-level model performs better on these long, concatenative words with ambiguous segmentation.
Also, we expect a character-level model to handle novel and unseen morphological inflections well. We observe that this is indeed the case, as our char2char model correctly understands "gesperrt", a past participle form of "sperren" (to block) (Table 7 (c)).
Nonce words are terms coined for a single use. They are not actual words but are constructed in a way that humans can intuitively guess what they mean, such as workoliday and friyay. We construct a few DE-EN sentence pairs that contain German nonce words (one example shown in Table 7 (d)), and observe that the character-level model can indeed detect salient character patterns and arrive at a correct translation.
Finally, we evaluate our multilingual models' capacity to perform intra-sentence code-switching, by giving them as input mixed sentences from multiple languages. The newstest-2013 development datasets for DE-EN, CS-EN and FI-EN contain intersecting examples with the same English sentences. We compile a list of these sentences in DE/CS/FI and their translation in EN, and choose a few samples uniformly at random from the English side. Words or clauses from different languages are manually intermixed to create multilingual sentences.
We discover that when given sentences with a high degree of language intermixing, as in Table 7 (e), the multilingual bpe2char model fails to seamlessly handle the alternation of languages. Overall, however, both multilingual models generate reasonable translations. This is possible because we did not provide a language identifier when training our multilingual models; as a result, they learned to understand a multilingual sentence and translate it into a coherent English sentence. We show supplementary sample translations in each scenario on a webpage.⁴
⁴https://sites.google.com/site/dl4mtc2c
Training and decoding speed On a single Titan X GPU, we observe that our char2char models are approximately 35% slower to train than our bpe2char baselines when the same batch size was used. Our bilingual character-level models can be trained in roughly two weeks.
We further note that the bilingual bpe2char model can translate 3,000 sentences in 66.63 minutes while the bilingual char2char model requires 71.71 minutes (online, not in batch). See Table 8 for the exact details.
| Model | Time to execute 1k updates (s) | Batch size | Time to decode 3k sentences (m) |
|---|---|---|---|
| bilingual bpe2char | 2461.72 | 128 | 66.63 |
| bilingual char2char | 2371.93 | 64 | 71.71 |
| multilingual bpe2char | 1646.37 | 64 | 68.99 |
| multilingual char2char | 2514.23 | 64 | 72.33 |

Table 8: Speed comparison. The second column shows the time taken to execute 1,000 training updates. The model makes each update after having seen one minibatch.
Further observations We also note that the multilingual models are less prone to overfitting than the bilingual models. This is particularly visible for low-resource language pairs such as FI-EN. Figure 2 shows the evolution of the FI-EN validation BLEU scores, where the bilingual models overfit rapidly but the multilingual models seem to regularize learning by training simultaneously on other language pairs.
[Figure 2: validation BLEU on FI-EN newstest-2013 over the number of training updates (k) for bi-bpe2char, bi-char2char, multi-bpe2char and multi-char2char.]
Figure 2: Multilingual models overï¬t less than bilingual models on low-resource language pairs.
# 8 Conclusion
We propose a fully character-level NMT model that accepts a sequence of characters in the source language and outputs a sequence of characters in the target language. What is remarkable about this model is the absence of explicitly hard-coded knowledge of words and their boundaries, and that the model learns these concepts from a translation task alone.
Our experiments show that the fully character-level model performs as well as, or better than, subword-level translation models. The performance gain is distinctly pronounced in the multilingual many-to-one translation task, where results show that the character-level model can assign model capacities to different languages more efficiently than the subword-level models. We observe a particularly large improvement in FI-EN translation when the model is trained to translate multiple languages, indicating positive cross-lingual transfer to a low-resource language pair.
We discover two main benefits of the multilingual character-level model: (1) it is much more parameter efficient than the bilingual models and (2) it can naturally handle intra-sentence code-switching as a result of the many-to-one translation task. Ultimately, we present a case for fully character-level translation: that translation at the level of characters is strongly beneficial and should be encouraged more.
The repository https://github.com/nyu-dl/dl4mt-c2c contains the source code and pre-trained models for reproducing the experimental results.
In the next stage of this research, we will investigate extending our multilingual many-to-one translation models to perform many-to-many translation, which will allow the decoder, similarly to the encoder, to learn from multiple target languages. Furthermore, a more thorough investigation into model architectures and hyperparameters is needed.
# Acknowledgements
KC thanks the support by eBay, Facebook, Google (Google Faculty Award 2016) and NVidia (NVIDIA AI Lab 2016-2019). This work was partly supported by Samsung Advanced Institute of Technology (Deep Learning). JL was supported by the Qualcomm Innovation Fellowship, and thanks David Yenicelik and Kevin Wallimann for their contribution in designing the qualitative analysis. The authors would like to thank Prof. Zheng Zhang (NYU Shanghai) for fruitful discussion and comments, as well as Yvette Graham for her help with the human evaluation.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR).
Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing.

Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of the 8th Workshop on Syntax, Semantics, and Structure in Statistical Translation, page 103.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empirical Methods in Natural Language Processing.
Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Marta R. Costa-Jussà and José A. R. Fonollosa. 2016. Character-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, page 357.
Ferdinand de Saussure. 1916. Course in General Lin- guistics.
Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics.
Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation.
Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics.
Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies, Denver, Colorado.
Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2016. Can machine translation systems be evaluated by the crowd alone? Natural Language Engineering, FirstView.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Ray S. Jackendoff. 1992. Semantic Structures, volume 18. MIT Press.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-aware neural language models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR).
Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015. Character-based neural machine translation. arXiv preprint arXiv:1511.04586.
Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML).
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for WMT 16.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS 2014), volume 27.
Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W. Black, Lori Levin, and Chris Dyer. 2016. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28.
"id": "1602.00367"
} |
1610.02850 | Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets | We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time
budgets during application. They allow for individual budgets given a priori
for each test example and for anytime prediction, i.e., a possible interruption
at multiple stages during inference while still providing output estimates. Our
approach can therefore tackle the computational costs and energy demands of
DNNs in an adaptive manner, a property essential for real-time applications.
Our Impatient DNNs are based on a new general framework of learning dynamic
budget predictors using risk minimization, which can be applied to current DNN
architectures by adding early prediction and additional loss layers. A key
aspect of our method is that all of the intermediate predictors are learned
jointly. In experiments, we evaluate our approach for different budget
distributions, architectures, and datasets. Our results show a significant gain
in expected accuracy compared to common baselines. | http://arxiv.org/pdf/1610.02850 | Manuel Amthor, Erik Rodner, Joachim Denzler | cs.CV | British Machine Vision Conference (BMVC) 2016 | null | cs.CV | 20161010 | 20161010 | MANUEL AMTHOR, ERIK RODNER, AND JOACHIM DENZLER: IMPATIENT DNNS
# Impatient DNNs â Deep Neural Networks with Dynamic Time Budgets
# t c O 0 1
# Manuel Amthor manuel.amthor@uni-jena.de
# Erik Rodner erik.rodner@uni-jena.de
Computer Vision Group Friedrich Schiller University Jena Germany www.inf-cv.uni-jena.de
]
# ren
# Joachim Denzler joachim.denzler@uni-jena.de
# V C . s c [
# Abstract
1 v 0 5 8 2 0 . 0 1 6 1 : v i X r 1 a
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e. a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications.
Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN archi- tectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a signiï¬cant gain in expected accuracy compared to common baselines.
# Introduction
Deep and especially convolutional neural networks are the current base for the majority of state-of-the-art approaches in vision. Their ability to learn very effective representations of visual data has led to several breakthroughs in important applications, such as scene un- derstanding for autonomous driving [1], object detection [6], and robotics [4]. The main obstacle for their application is still the computational cost during prediction for a new test image. Many previous works have focused on speeding up DNN inference in general achiev- ing constant speed-ups for a certain loss in prediction accuracy [10, 16].
In contrast, we focus on inference with dynamic time budgets. Our networks provide a series of predictions with increasing computational cost and accuracy. This allows for (1) dynamic interruption of the prediction in time-critical applications (anytime ability, Figure 1 left), or for (2) predictions with a dynamic time budget individually given for each test im- age a-priori (Figure 1 right). Dynamic budget approaches can for example deal with varying energy resources, a property especially useful for real-time visual inference in robotics [17]. Furthermore, early predictions allow for immediate action selection in reinforcement learn- ing scenarios [21].
[Figure 1: two panels over a time axis: an interruptable CNN (anytime ability, left) and a CNN with a dynamic budget given a-priori (right).]
Figure 1: Illustration of convolutional neural network prediction in dynamic budget scenarios: (left) prediction can be interrupted at any time or (right) the budget is given before each prediction.
The main idea of our approach is to formulate the learning of dynamic budget predictors as a generalized risk minimization that involves the distribution of budgets provided for the application. The distribution of possible budgets has been either previously neglected or assumed to be uniform [12]. However, we show that such an easily available prior information can significantly help to improve the expected accuracy.
Our formulation leads to a straight-forward modification of convolutional neural network (CNN) architectures and their training. In particular, we add several early prediction and loss layers along the standard processing pipeline of a DNN (Figure 1 and Figure 2). According to our risk minimization framework for dynamic budget predictors, all of these layers need to be learned jointly with a weighted combination derived from a time-budget distribution. Whereas this strategy is directly related to DNN learning strategies, such as deep supervision [24] and inception architectures [23], we demonstrate its usefulness for adapting to varying resources during testing.
The paper is structured as follows. After discussing related work, we define dynamic budget predictors and derive a new learning framework based on risk minimization with budget distributions (Sect. 2). Our framework can be directly applied to deep and especially convolutional neural networks as described in Sect. 3. Experiments in Sect. 4 show the advantages of our approach for different architectures, datasets, and budget distributions.
Related work on anytime prediction The work of Karayev et al. [12] presented an approach that iteratively and dynamically selects feature representations to maximize the area above an entropy vs. cost curve. Our approach however focuses on a static order of predictors and is able to incorporate budget distributions expected for the application. Fröhlich et al. [5] proposed a semantic segmentation approach with anytime classification capability. Their method is based on random decision forests learned in a layer-wise fashion. Xu et al. [26] consider anytime classification with unknown budgets by combining a cost-sensitive support vector machine with feature learning. Similar to [5], their predictors are learned in a greedy fashion and not learned jointly as in our case. Learning all of the predictors with shared parameters jointly allows us to share computations while directly optimizing with respect to expected accuracy during training. The paper of [25] presents an algorithm for
learning tree ensembles with a constrained time budget available during training. In our case, the whole distribution of budgets is given during training.
Related work on deep supervision and DNNs with multiple losses There are multiple methods that use a similar architecture of deep neural networks to ours, characterized by multiple loss layers and joint training of them. For example, [24] refers to such a training strategy as "deep supervision" and shows that it allows for training deeper networks in a robust fashion. A very similar technique has been used in [7] for improved scene recognition. Furthermore, multiple loss layers are often used for multi-task learning, where the goal is to jointly predict various outputs [27].
In contrast to these works, our paper focuses on the impact of such an architecture on the ability of DNNs to deal with dynamic time budgets during inference. Furthermore, we show that such an architectural design can be directly derived from a very general risk minimization framework for predictors with dynamic budgets.
Related work on speeding up convolutional neural networks There are multiple works that focus on speeding up DNNs and the special case of convolutional neural networks (CNNs). Applied and adapted techniques range from low-rank approximations [2, 6, 10] to FFT computations of the involved convolutions [19]. The Fast R-CNN method of [6] speeds up fully-connected layers by simple SVD approximation. Similar techniques have been presented by [2] and [10]. The paper of [8] provides an empirical study of the effects of CNN architectural design choices on the computation time and the achieved recognition performance. A straightforward technique to speed up convolutions with large filter sizes uses Fast Fourier Transforms as studied by [19]. Furthermore, efficient filtering techniques, such as the Winograd transformation [14], are applicable as well.
Our approach also tries to speed up inference of deep neural networks, i.e. a forward pass. However, instead of approximating different operations performed in single layers, we achieve a significant speed-up by allowing the algorithm to deal with dynamic time budgets. Therefore, our research is orthogonal to the one briefly described and combining them is straightforward.
# 2 Learning Dynamic Budget Predictors
In this section, we derive a simple yet powerful learning scheme for dynamic budget predictors. Without loss of generality, we focus on time budgets in the following.
Specification of dynamic budgets An important challenge for dynamic budget approaches is that the budget available for inference during testing is not known during training, and for anytime scenarios is even not known during inference itself. For anytime tasks, we need to learn algorithms that can be interrupted at several time steps and balance the trade-off between calculating direct predictions of an output y for an example x and calculating intermediate outputs that help later on for further refinements of the predictions.
This trade-off is, without any further specification, ill-posed. However, in many applications, we know something about the distribution p(t | x, y) of time budgets t available to the algorithm for a given input-output pair (x, y). In the following, we assume that this distribution is either given or can be modeled for an application.
Risk minimization with budget distributions In the following, we develop a framework for learning dynamic budget predictors using risk minimization. We consider inference algorithms f that provide predictions y ∈ Y for input examples x ∈ X at different times t ∈ R, i.e. we have f : X × R → Y.
Learning the parameters θ of f is done by minimizing the following regularized risk:
$$\operatorname*{argmin}_{\boldsymbol{\theta}} \int_{t \in \mathbb{R}} \int_{y \in \mathcal{Y}} \int_{\boldsymbol{x} \in \mathcal{X}} L\big(f(\boldsymbol{x}, t; \boldsymbol{\theta}), y\big) \cdot p(\boldsymbol{x}, y, t) \, d\boldsymbol{x} \, dy \, dt \;+\; R(\boldsymbol{\theta}) \qquad (1)$$
with L being a suitable loss function, R(θ) being a regularization term, and p(x, y, t) being the joint distribution of an input-output pair (x, y) and the available time t. This formulation does not require any differentiation between a-priori given budget or anytime scenarios.
We further assume that the time available is independent of the actual example and its label. This is a reasonable assumption, since the available time is in most applications just based on a limitation of hardware or data transfer resources. Since we are given a training set D = (x_i, y_i)_{i=1}^n, the objective becomes:
$$\operatorname*{argmin}_{\boldsymbol{\theta}} \int_{t \in \mathbb{R}} \sum_{i=1}^{n} L\big(f(\boldsymbol{x}_i, t; \boldsymbol{\theta}), y_i\big) \, p(t) \, dt \;+\; R(\boldsymbol{\theta}) \qquad (2)$$
The predictor f is an algorithm performing a finite sequence of atomic operations. Therefore, the prediction output only changes at discrete time steps t_1, ..., t_K:

$$f(\boldsymbol{x}, t; \boldsymbol{\theta}) = f(\boldsymbol{x}, t_k; \boldsymbol{\theta}) \overset{\text{def}}{=} f_k(\boldsymbol{x}; \boldsymbol{\theta}_k) \quad \text{for } t_k \leq t < t_{k+1}, \qquad (3)$$
$$f(\boldsymbol{x}, t; \boldsymbol{\theta}) = f_K(\boldsymbol{x}; \boldsymbol{\theta}_K) \quad \text{for } t \geq t_K. \qquad (4)$$
Furthermore, before t_1, no output estimate is available. Since this leads to a constant additive term independent of θ, we can ignore this aspect in the following. In total, Eq. (2) simplifies as follows:
$$\operatorname*{argmin}_{\boldsymbol{\theta}} \sum_{k=1}^{K} w_k \left( \sum_{i=1}^{n} L\big(f_k(\boldsymbol{x}_i; \boldsymbol{\theta}_k), y_i\big) \right) + R(\boldsymbol{\theta}) \qquad (5)$$
with weights $w_k = \int_{t_k}^{t_{k+1}} p(t)\,dt$ for 1 ≤ k < K and $w_K = \int_{t_K}^{\infty} p(t)\,dt$. As can be seen, we have a simple learning objective, which is a weighted combination of the learning objectives of each of the individual predictors f_k. If some of the parameters are shared between the predictors, which is the case for our approach presented in Sect. 3, each term in the objective cannot be optimized independently and joint optimization is necessary. Sharing parameters is essential for optimizing shared computations towards maximizing the expected accuracy of the complete model.
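Numerically, the weights can be obtained from any budget density by integrating it over the intervals between consecutive prediction times. A sketch using a simple Riemann sum over a finite grid (so the last, unbounded interval is truncated at the end of the grid):

```python
import numpy as np

def budget_weights(times, p, grid):
    """Approximate w_k as the integral of p(t) over [t_k, t_{k+1}).
    times: increasing prediction times t_1..t_K; p: budget density;
    grid: fine, uniformly spaced time grid for the numerical integration."""
    density = np.array([p(t) for t in grid])
    step = grid[1] - grid[0]
    w = []
    for k, t_k in enumerate(times):
        t_next = times[k + 1] if k + 1 < len(times) else np.inf
        mask = (grid >= t_k) & (grid < t_next)
        w.append(density[mask].sum() * step)
    return np.array(w) / np.sum(w)   # normalize so the weights sum to one
```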
The information about the time-budget distribution defines the weights of the loss terms in an intuitive manner: if there is a high probability of the time budget being between t_k and t_{k+1}, the loss of f_k has a strong impact on the overall learning objective and the parameters θ_k, including the shared ones, should be tuned towards reducing the loss of f_k rather than contributing significantly to other predictors.
# 3 Learning Impatient DNNs with Early Prediction Layers
In this section, we show how a single deep neural network with additional prediction layers is well suited for providing a series of prediction models.
[Figure 2, left: AlexNet-style pipeline (conv1-conv5 with batch normalization, ReLU and pooling, followed by fully-connected layers) receiving inputs, labels and the time-budget distribution, with early prediction and loss layers 1-6 branching off along the way. Right: the three early prediction architectures (FC only; AVG with spatial average pooling; AVG 4x4 with 4x4 spatial average pooling).]
Figure 2: (Left) Modification of the AlexNet architecture for dynamic budgets and early predictions. (Right) Possible architectures for early prediction.
Early prediction layers To obtain a series of predictions, we add K additional layers to a common DNN architecture as illustrated in Figure 2. We refer to these layers as early prediction (EP) layers in the following. The output f_k(x) of these layers has as many dimensions as y. Already after the first layers, our approach is able to perform predictions with only a very small number of computational operations. The layered architecture of a DNN has an important advantage, since all f_k naturally share a large set of their parameters and also a large number of computations. Anytime approaches require a forward pass to go through all early prediction layers that can be processed until interruption. In case of non-parallel computation, the computational overhead of the early prediction layers should therefore be reduced as much as possible.
The right part of Figure 2 shows different choices for EP layers we experimented with: (1) FC only, which is a simple single fully-connected (FC) layer followed by a softmax layer, (2) AVG, which performs average pooling across the spatial dimensions of the previous layer before a fully-connected layer, which leads to a significantly reduced number of parameters for the EP layers, and (3) AVG 4 × 4, which allows for preserving rough spatial information by performing average pooling in 4 × 4 = 16 uniformly-sized regions.
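As an illustration of the AVG 4 × 4 variant, a minimal PyTorch-style module (layer sizes are placeholders) could look as follows:

```python
import torch.nn as nn

class EarlyPrediction(nn.Module):
    """AVG 4x4 early prediction head: pooling into a 4x4 grid keeps rough
    spatial layout while drastically shrinking the number of fully-connected
    parameters compared to the plain FC variant."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(4)            # 4x4 = 16 pooled regions
        self.fc = nn.Linear(channels * 16, num_classes)

    def forward(self, feature_map):                    # (N, C, H, W)
        x = self.pool(feature_map).flatten(1)
        return self.fc(x)                              # class logits
```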
Learning with weighted losses For learning, each of the EP layers is connected to a loss layer. The overall loss during training is exactly the weighted combination we derived in the previous section in Eq. (5).
In theory, training our Impatient DNNs does not require any further modifications and learning can be done with standard back-propagation and gradient descent. However, we observed in experiments that batch normalization [9] leads to a significantly more robust training and is even required to achieve convergence at all in most cases.
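The combined training objective is then just the weighted sum of the per-branch losses; a minimal sketch for classification with cross-entropy (assuming the network returns the list of early prediction logits):

```python
import torch.nn.functional as F

def impatient_loss(early_logits, labels, weights):
    """Weighted sum of per-branch cross-entropy losses, i.e. the empirical
    objective of Eq. (5). early_logits: list of K logit tensors from the EP
    layers; weights: the budget-derived weights w_1..w_K."""
    return sum(w * F.cross_entropy(logits, labels)
               for w, logits in zip(weights, early_logits))
```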
[Figure 3: bar charts of the weight profiles over the EP layers for the EQ, LIN, POLY and NORM schemes.]
Figure 3: Types of time-budget distributions we consider in our paper.
Weighting schemes In our experiments, we are interested in the effect of different time-budget distributions provided during learning. To simulate them, we consider the following schemes for early prediction layer weights w_1, ..., w_K: (STD) standard DNN training, i.e. only the last prediction matters: w_K = 1 and w_k = 0 otherwise; (EQ) uniform weights for uniform time-budget distributions: w_k = 1/K; (LIN) linearly increasing weights, i.e. small time budgets are unlikely: w_k ∝ k; (POLY) polynomially increasing weights: w_k ∝ k^γ with γ > 1; (ILIN, IPOLY) decreasing weights, i.e. small time budgets are likely: w_k = w'_{K+1−k} for weights w' of the former schemes; and (NORM) small and large time budgets are rare and layers in the middle of the architecture are given a high weight: w_k ∝ exp(−β · (k − K/2)²) with β = 0.34. All of these schemes simulate different budget specifications of an application. An illustration of several instances is given in Figure 3.
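The schemes can be generated programmatically; the sketch below assumes the NORM scheme is centered at K/2 and leaves the POLY exponent γ as a free parameter, since neither is fully specified above:

```python
import numpy as np

def scheme_weights(K, scheme, gamma=2.0, beta=0.34):
    """Return normalized early-prediction weights w_1..w_K for a scheme name."""
    k = np.arange(1, K + 1, dtype=float)
    if scheme == 'STD':
        w = (k == K).astype(float)          # only the last prediction matters
    elif scheme == 'EQ':
        w = np.ones(K)
    elif scheme == 'LIN':
        w = k
    elif scheme == 'POLY':
        w = k ** gamma                      # gamma > 1 is a free choice here
    elif scheme in ('ILIN', 'IPOLY'):
        w = scheme_weights(K, scheme[1:], gamma, beta)[::-1].copy()
    elif scheme == 'NORM':
        w = np.exp(-beta * (k - K / 2.0) ** 2)
    else:
        raise ValueError(scheme)
    return w / w.sum()                      # normalize to a distribution
```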
# 4 Experiments
In the following, we evaluate our approach with respect to different dynamic budget schemes and compare with standard DNN training and other relevant baselines.
Experimental setup and datasets For evaluation, we conducted experiments on two object classification datasets. The 15-Scenes [15] dataset comprises a total of 4,485 images covering categories from kitchen and living room to suburban and industrial. Each category contains between 200 and 400 images, from which we took 100 images for training, as suggested by [15], and the remaining ones for testing. The training set is further divided into 90 images for actual training and 10 images for validation. The MIT-67 [20] indoor scenes database is comprised of 67 categories. We follow the procedure of [20] and take 80 images for training and 20 for testing. Again, the training set is split in order to obtain a validation set of 8 images per class.
Since our datasets are too small for DNN training from scratch, we perform fine-tuning of different models pre-trained on ImageNet, e.g. AlexNet [13] and VGG19 [22]. The positions of EP layers for AlexNet are given in Figure 2. For VGG19, we add EP layers after each block of convolutional layers. Please note that the last "early" prediction layer is always the output layer of the original CNN architecture.
Analysis of learning Impatient DNNs In the following, we show that for learning Impatient DNNs care has to be taken to ensure convergence. For example, an adequate learning rate has to be determined to ensure convergence of the network while avoiding saturation at low accuracy. This becomes much more important when dealing with losses of multiple branches, since the gradients at shared layers accumulate, leading to the network training
[Figure 4: validation accuracy over training epochs for early prediction layers 1-6; left: without batch normalization (2000 epochs), right: with batch normalization (100 epochs).]
Figure 4: Convergence during learning an Impatient AlexNet trained on MIT-67 with (right) and without (left) batch normalization: Different colors indicate individual early prediction layers and it can be clearly seen that batch normalization significantly improves stability during training.
being more fragile. Especially in the case of deeper network architectures, e.g. VGG, we observed that convergence cannot be achieved at all without proper normalization.
Therefore, we made use of batch normalization [9], which rectifies the covariate shift in the input data distribution of each convolution layer. This technique allows for training with much higher learning rates, ensuring faster convergence and, in our case, convergence at all. In Figure 4 (left), an example of optimizing an Impatient AlexNet is shown where the validation accuracy for early prediction layers saturates very slowly at a low value, caused by a highly decreased learning rate of 10^-4. No convergence at all is achieved for very early layers even after running 2000 epochs of training. In contrast, adding batch normalization (right-hand side) allows for a 100× higher learning rate, resulting in very fast convergence at a high level of validation accuracy for all prediction layers.
Evaluation of early prediction architectures As presented in Sect. 3, several architectures are possible for early prediction. The straightforward approach of connecting FC layers directly to each convolutional layer leads to a huge amount of additional parameters to be optimized. These layers are prone to overfitting. This can be seen in the learning statistics for MIT-67 with a VGG19 base architecture shown in Figure 5. The training loss is near zero together with a moderate validation accuracy for early layers. We also experimented with multiple FC layers. However, learning of these architectures failed to converge in all cases independently from the choice of hyperparameters. By applying spatial pooling layers, validation accuracy is substantially improved, which can be seen in Figure 5 (AVG and AVG4x4). Especially AVG4x4 provides rough spatial information which helps to improve performance even further. Therefore, we use this architecture in the following experiments.
7
MANUEL AMTHOR, ERIK RODNER, AND JOACHIM DENZLER: IMPATIENT DNNS
10 408 mmm FC 2 o7| {i FC Mmm AVG S$ o6|| El AVG We AVG4x4|} 5 05|| HEE AVG4x4 S o4 FS 5 o3 a P02 5 oa 107 0.0 EPL =P2 =P3 era EPS EPL eP2 eP3 EPS eS eG Early Prediction Layer Early Prediction Layer
Figure 5: Comparison of different early prediction architectures of an Impatient VGG19 trained on MIT-67. Replacing fully-connected layers (FC) by spatial average pooling (AVG & AVG4x4) reduces the effect of overï¬tting resulting in higher validation accuracy.
the scenario with a-priori given budgets. All experiments were performed on an NVIDIA GeForce GTX 970 GPU.
Does joint training of EP layers help? The most interesting question, however, is whether our joint training scheme motivated in Sect. 2 provides superior results compared to learning predictors independently. To answer this question, we compared our approach with different baselines that learn several SVM classifiers based on extracted CNN features [3] at each early prediction layer. We optimize SVM hyperparameters on the validation set to allow a fair comparison. The underlying networks, on the contrary, differ in the sense that we made use of an original CNN pre-trained on ImageNet and a pre-trained CNN fine-tuned on the current dataset.
In Table 1, the evaluation for different time-budget distributions is presented, where each result shows the expected accuracy according to the particular weighting scheme and budget distribution. It can be clearly seen that the original CNN (ORIG) without adaptation to the current dataset performs worst. By applying fine-tuning (FT), however, accuracy can be noticeably increased for all early prediction SVMs.
Our joint learning of the EP layers provides superior results in almost all scenarios. Especially in the case of small time budgets, our method benefits from taking the budget distribution during learning into account, resulting in an improvement of almost 10% on MIT-67 and 6% on 15-Scenes for an Impatient VGG19 compared to the best performing baseline. For extreme weighting schemes with high priority on later predictions (POLY), fine-tuning of the original networks provides slightly better results compared to our approach. This is not surprising since in this case training is very similar to that of standard DNNs with only one final loss layer.
In Table 2, we compare our approach to state-of-the-art results for MIT-67 and 15-Scenes. Although the focus of this paper is rather on anytime capability, while running the risk of dropping accuracy at final layers, we achieved superior results. It should be noted that only the last layer is used to obtain predictions, since we assume to have no budget restrictions. Especially for the jointly trained Impatient VGG19 on MIT-67, it was even possible to outperform the standard fine-tuned CNN, which supports the idea of "deep supervision" [24].
Cascaded prediction Apart from both scenarios presented in Figure 1, efficient classification constitutes another interesting application of our approach. The task here is, for a given set of examples, to reach a desired accuracy within a minimal but not fixed amount of time.
VGG19 (expected accuracy in %):

| Budget scheme | MIT-67 ORIG | MIT-67 FT | MIT-67 OURS | 15-Scenes ORIG | 15-Scenes FT | 15-Scenes OURS | t̄B [ms] | t̄A [ms] |
|---|---|---|---|---|---|---|---|---|
| EQ | 46.65 | 48.07 | 53.93 | 83.37 | 84.28 | 85.63 | 1.11 | 1.19 |
| LIN | 54.19 | 56.52 | 60.55 | 85.87 | 87.47 | 88.02 | 1.37 | 1.47 |
| POLY | 62.82 | 67.07 | 69.66 | 88.71 | 91.71 | 90.88 | 1.72 | 1.84 |
| ILIN | 37.25 | 37.71 | 45.62 | 77.56 | 77.73 | 80.87 | 0.82 | 0.86 |
| IPOLY | 25.63 | 25.65 | 35.11 | 70.14 | 69.85 | 75.93 | 0.50 | 0.51 |
| NORM | 47.53 | 47.90 | 55.38 | 84.46 | 84.74 | 86.67 | 1.07 | 1.15 |

AlexNet (expected accuracy in %):

| Budget scheme | MIT-67 ORIG | MIT-67 FT | MIT-67 OURS | 15-Scenes ORIG | 15-Scenes FT | 15-Scenes OURS | t̄B [ms] | t̄A [ms] |
|---|---|---|---|---|---|---|---|---|
| EQ | 41.75 | 46.19 | 48.40 | 82.56 | 84.28 | 85.11 | 0.68 | 0.75 |
| LIN | 45.19 | 50.96 | 52.13 | 83.73 | 86.19 | 85.94 | 0.79 | 0.89 |
| POLY | 48.50 | 56.29 | 55.76 | 85.56 | 88.98 | 87.38 | 0.96 | 1.09 |
| ILIN | 36.64 | 39.59 | 42.91 | 78.10 | 79.03 | 81.87 | 0.54 | 0.59 |
| IPOLY | 28.69 | 30.17 | 36.14 | 72.38 | 72.48 | 77.85 | 0.40 | 0.42 |
| NORM | 43.97 | 47.80 | 49.93 | 83.25 | 84.82 | 84.89 | 0.65 | 0.72 |
Table 1: Comparison of Impatient VGG19 (top) and AlexNet (bottom) CNNs with several baselines. Performance is measured by expected accuracy in % based on the particular budget distribution.
| Dataset | Orig | FT | Ours (eq) | Ours (poly) | PlacesCNN [28] | [18]† |
|---|---|---|---|---|---|---|
| MIT-67 | 65.0% | 71.04% | 67.23% | 71.71% | 68.24% | 71.5% |
| 15-Scenes | 88.30% | 92.83% | 92.13% | 91.45% | 90.19% | – |
Table 2: How good are our VGG19 Impatient Networks when there are no budget restrictions during testing? The table shows the accuracy of the last prediction layer, also compared to state-of-the-art results. † The method of [18] requires more than 4s per image.
In particular, interrupting the network at a certain depth might already provide the correct decision, which renders further computation unnecessary. To implement the idea of efficient inference, an adequate stopping criterion has to be defined. Since each early prediction layer provides probabilistic outputs, we applied uncertainty-based decision making by calculating the ratio between the two highest class probabilities, which is known as the 1-vs-2 strategy [11]. If the current prediction of class probabilities is characterized by a high ratio, inference can be interrupted.
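A sketch of the 1-vs-2 rule (the threshold value below is a placeholder; in the experiments it is swept to trace out the time-accuracy curve):

```python
def should_stop(class_probs, threshold=5.0):
    """1-vs-2 stopping rule: interrupt inference once the ratio between the
    two largest class probabilities exceeds a fixed threshold."""
    top2 = sorted(class_probs, reverse=True)[:2]
    return top2[0] / max(top2[1], 1e-12) >= threshold
```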
The analysis of the proposed criterion can be seen in Figure 6, which shows time-accuracy plots. Thereby, one point on the red graph is obtained by a fixed ratio threshold which determines whether an early layer prediction already reaches sufficient certainty and thus provides the final decision. The blue graph, however, represents classification results of each early prediction layer itself, i.e., the final decision is made at always the same depth, independently of the underlying ratio. As can be seen, by using uncertainty-based predictions, accuracy can be increased substantially in a lot of cases with the same computational effort. For example, interrupting the AlexNet network consistently at the fifth prediction layer takes ∼1 ms per image for MIT-67 (second-last plot in Figure 6). In contrast, using the proposed criterion, accuracy can be increased from 53% up to 57% while still requiring exactly the same computation time on average. An entropy-based criterion achieved inferior performance in our experiments.
Qualitative results In Figure 7, qualitative results for the task of scene recognition (class âbathroomâ from MIT-67) are shown. Different numbers in each image indicate the early
9
10
MANUEL AMTHOR, ERIK RODNER, AND JOACHIM DENZLER: IMPATIENT DNNS
Boe & g accuracy on test set gg [= Ours (uncertainty) ours (uncertainty) â ours (anytime £9) â ours (anytime £9) ours (uncenainyy (anytime EQ) [= ours (uncertainty yytime EQ) average time per image in ms average time per image in ms average time per image in ms average time per image in ms
Figure 6: Evaluation of uncertainty-based predictions compared to early layer predictions. From left to right: Impatient AlexNet on 15-Scenes, Impatient VGG19 on 15-Scenes, Impa- tient AlexNet on MIT-67, and Impatient VGG19 on MIT-67.
Figure 7: Images of the MIT-67 ï¬rst correctly classiï¬ed as âbathroomâ at different early prediction layers of an Impatient VGG19 CNN. The position of the layers is highlighted as a number and a uniquely colored border.
prediction layer in which the particular example was first correctly classified. It can be clearly seen that the examples already decided at EP1 are white-colored bathrooms with a clearly visible toilet bowl, shower, and sink. With increasing complexity of the scene, the required layer depth increases as well to provide correct decisions. For example, the rightmost images in the second row of Figure 7 show extraordinary bathrooms with unusually colored walls and furnishings, increasing the likelihood of confusion with other classes, e.g. a children's room.
# 5 Conclusions
In this paper, we presented impatient deep neural networks that tackle the problem of classification with dynamic time budgets during application. Compared to standard DNNs, which suffer from a high computational demand during inference, we showed that our approach allows for anytime prediction, i.e. a possible interruption at multiple stages while still providing output estimates, which renders our method suitable even for real-time applications. We presented a novel general framework of learning dynamic budget predictors based on risk minimization, which we adapted directly to state-of-the-art convolutional neural network architectures by branching additional early prediction layers with weighted losses. Based on a set of object classification datasets and architectures, we showed that our approach provides superior results for different time budget distributions. Furthermore, we developed an uncertainty-based prediction framework allowing for reduced computational costs while still providing the same accuracy.
# References
[1] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. arXiv preprint arXiv:1604.01685, 2016.
[2] Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. CoRR, abs/1404.0736, 2014.
[3] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013.
[4] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep spatial autoencoders for visuomotor learning. In ICRA, 2016.
[5] Björn Fröhlich, Erik Rodner, and Joachim Denzler. As time goes by: Anytime semantic segmentation with iterative context forests. In Symposium of the German Association for Pattern Recognition (DAGM), pages 1â10, 2012.
[6] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580-587, 2014.
[7] Sheng Guo, Weilin Huang, and Yu Qiao. Locally-supervised deep hybrid model for scene recognition. arXiv preprint arXiv:1601.07576, 2016.
[8] Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. CoRR, abs/1412.1710, 2014. URL http://arxiv.org/abs/1412.1710.
[9] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[10] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
[11] Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning In Computer Vision and Pattern Recognition, 2009. CVPR for image classiï¬cation. 2009. IEEE Conference on, pages 2372â2379. IEEE, 2009.
[12] Sergey Karayev, Mario Fritz, and Trevor Darrell. Anytime recognition of objects and scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 572â579, 2014.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with In Advances in neural information processing deep convolutional neural networks. systems, pages 1097â1105, 2012.
[14] Andrew Lavin. Fast algorithms for convolutional neural networks. abs/1509.09308, 2015. CoRR,
MANUEL AMTHOR, ERIK RODNER, AND JOACHIM DENZLER: IMPATIENT DNNS
[15] Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 2169â2178. IEEE, 2006.
[16] Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. arXiv preprint arXiv:1506.02515, 2015.
[17] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[18] Lingqiao Liu, Chunhua Shen, and Anton van den Hengel. The treasure beneath con- volutional layers: Cross-convolutional-layer pooling for image classiï¬cation. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4749â4757, 2015.
[19] Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through ffts. arXiv preprint arXiv:1312.5851, 2013.
In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 413â 420. IEEE, 2009.
[21] David Silver, J Andrew Bagnell, and Anthony Stentz. Learning autonomous driving In Experimental Robotics, pages styles and maneuvers from expert demonstration. 371â386. Springer, 2013.
[22] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large- scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[23] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â9, 2015.
[24] Liwei Wang, Chen-Yu Lee, Zhuowen Tu, and Svetlana Lazebnik. Training deeper convolutional networks with deep supervision. arXiv preprint arXiv:1505.02496, 2015.
[25] Zhixiang Xu, Kilian Weinberger, and Olivier Chapelle. The greedy miser: Learning under test-time budgets. arXiv preprint arXiv:1206.6451, 2012.
[26] Zhixiang Xu, Matt Kusner, Gao Huang, and Kilian Q Weinberger. Anytime repre- sentation learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1076â1084, 2013.
[27] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Learning deep representation for face align- ment with auxiliary attributes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(5):918â930, May 2016. ISSN 0162-8828. doi: 10.1109/TPAMI.2015. 2469286.
[28] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. In Advances in Learning deep features for scene recognition using places database. neural information processing systems, pages 487â495, 2014. | {
"id": "1502.03167"
} |
1610.02357 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 |
# Xception: Deep Learning with Depthwise Separable Convolutions
# François Chollet Google, Inc. fchollet@google.com
# Abstract
We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.
as GoogLeNet (Inception V1), later refined as Inception V2 [7], Inception V3 [21], and most recently Inception-ResNet [19]. Inception itself was inspired by the earlier Network-In-Network architecture [11]. Since its first introduction, Inception has been one of the best-performing families of models on the ImageNet dataset [14], as well as on internal datasets in use at Google, in particular JFT [5].
The fundamental building block of Inception-style models is the Inception module, of which several different versions exist. In figure 1 we show the canonical form of an Inception module, as found in the Inception V3 architecture. An Inception model can be understood as a stack of such modules. This is a departure from earlier VGG-style networks, which were stacks of simple convolution layers.

While Inception modules are conceptually similar to convolutions (they are convolutional feature extractors), they empirically appear to be capable of learning richer representations with fewer parameters. How do they work, and how do they differ from regular convolutions? What design strategies come after Inception?
# 1.1. The Inception hypothesis
# 1. Introduction
Convolutional neural networks have emerged as the master algorithm in computer vision in recent years, and developing recipes for designing them has been a subject of considerable attention. The history of convolutional neural network design started with LeNet-style models [10], which were simple stacks of convolutions for feature extraction and max-pooling operations for spatial sub-sampling. In 2012, these ideas were refined into the AlexNet architecture [9], where convolution operations were being repeated multiple times in-between max-pooling operations, allowing the network to learn richer features at every spatial scale. What followed was a trend to make this style of network increasingly deeper, mostly driven by the yearly ILSVRC competition; first with Zeiler and Fergus in 2013 [25] and then with the VGG architecture in 2014 [18].
A convolution layer attempts to learn filters in a 3D space, with 2 spatial dimensions (width and height) and a channel dimension; thus a single convolution kernel is tasked with simultaneously mapping cross-channel correlations and spatial correlations.

The idea behind the Inception module is to make this process easier and more efficient by explicitly factoring it into a series of operations that would independently look at cross-channel correlations and at spatial correlations. More precisely, the typical Inception module first looks at cross-channel correlations via a set of 1x1 convolutions, mapping the input data into 3 or 4 separate spaces that are smaller than the original input space, and then maps all correlations in these smaller 3D spaces via regular 3x3 or 5x5 convolutions. This is illustrated in figure 1. In effect, the fundamental hypothesis behind Inception is that cross-channel correlations and spatial correlations are sufficiently decoupled that it is preferable not to map them jointly.¹
At this point a new style of network emerged, the Inception architecture, introduced by Szegedy et al. in 2014 [20]
¹A variant of the process is to independently look at width-wise correlations and height-wise correlations. This is implemented by some of the modules found in Inception V3, which alternate 7x1 and 1x7 convolutions. The use of such spatially separable convolutions has a long history in image processing and has been used in some convolutional neural network implementations since at least 2012 (possibly earlier).
Consider a simplified version of an Inception module that only uses one size of convolution (e.g. 3x3) and does not include an average pooling tower (figure 2). This Inception module can be reformulated as a large 1x1 convolution followed by spatial convolutions that would operate on non-overlapping segments of the output channels (figure 3). This observation naturally raises the question: what is the effect of the number of segments in the partition (and their size)? Would it be reasonable to make a much stronger hypothesis than the Inception hypothesis, and assume that cross-channel correlations and spatial correlations can be mapped completely separately?
Figure 1. A canonical Inception module (Inception V3).
Figure 2. A simplified Inception module.
# 1.2. The continuum between convolutions and separable convolutions
An "extreme" version of an Inception module, based on this stronger hypothesis, would first use a 1x1 convolution to map cross-channel correlations, and would then separately map the spatial correlations of every output channel. This is shown in figure 4. We remark that this extreme form of an Inception module is almost identical to a depthwise separable convolution, an operation that has been used in neural
Figure 3. A strictly equivalent reformulation of the simplified Inception module.
Figure 4. An "extreme" version of our Inception module, with one spatial convolution per output channel of the 1x1 convolution.
network design as early as 2014 [15] and has become more popular since its inclusion in the TensorFlow framework [1] in 2016.
A depthwise separable convolution, commonly called "separable convolution" in deep learning frameworks such as TensorFlow and Keras, consists of a depthwise convolution, i.e. a spatial convolution performed independently over each channel of an input, followed by a pointwise convolution, i.e. a 1x1 convolution, projecting the channels output by the depthwise convolution onto a new channel space. This is not to be confused with a spatially separable convolution, which is also commonly called "separable convolution" in the image processing community.
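To make the decomposition concrete, here is a minimal sketch (our own illustration, assuming TensorFlow 2.x / Keras; the shapes and filter counts are arbitrary) of a depthwise separable convolution written both as the explicit two-step composition described above and as the fused layer that Keras provides:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 64))

# Explicit two-step formulation: spatial filtering per channel,
# then cross-channel mixing with a 1x1 convolution.
x = layers.DepthwiseConv2D(kernel_size=3, padding="same")(inputs)  # depthwise step
x = layers.Conv2D(filters=128, kernel_size=1)(x)                   # pointwise step

# Fused formulation provided by Keras; computes the same composition.
y = layers.SeparableConv2D(filters=128, kernel_size=3, padding="same")(inputs)
```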
Two minor differences between an "extreme" version of an Inception module and a depthwise separable convolution would be:
• The order of the operations: depthwise separable convolutions as usually implemented (e.g. in TensorFlow) first perform channel-wise spatial convolution and then perform 1x1 convolution, whereas Inception performs the 1x1 convolution first.

• The presence or absence of a non-linearity after the first operation. In Inception, both operations are followed by a ReLU non-linearity, however depthwise separable convolutions are usually implemented without non-linearities.

We argue that the first difference is unimportant, in particular because these operations are meant to be used in a stacked setting. The second difference might matter, and we investigate it in the experimental section (in particular see figure 10).
We also note that other intermediate formulations of Inception modules that lie in between regular Inception modules and depthwise separable convolutions are also possible: in effect, there is a discrete spectrum between regular convolutions and depthwise separable convolutions, parametrized by the number of independent channel-space segments used for performing spatial convolutions. A regular convolution (preceded by a 1x1 convolution), at one extreme of this spectrum, corresponds to the single-segment case; a depthwise separable convolution corresponds to the other extreme, where there is one segment per channel; Inception modules lie in between, dividing a few hundred channels into 3 or 4 segments. The properties of such intermediate modules appear not to have been explored yet.
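As a rough illustration of this spectrum (our own back-of-the-envelope arithmetic, not code from the paper), the following sketch counts the weights of a 1x1 convolution followed by k x k spatial convolutions applied over g independent channel segments; g = 1 recovers a regular convolution preceded by a 1x1, and g equal to the number of channels recovers the depthwise separable extreme:

```python
def segmented_conv_params(c_in: int, c_out: int, k: int, g: int) -> int:
    pointwise = c_in * c_out                     # 1x1 convolution, biases omitted
    spatial = g * k * k * (c_out // g) ** 2      # one k x k conv per segment
    return pointwise + spatial

for g in (1, 4, 256):  # regular, Inception-like, and depthwise extremes
    print(g, segmented_conv_params(c_in=256, c_out=256, k=3, g=g))
# g=1   -> 655,360 weights; g=4 -> 212,992; g=256 -> 67,840
```

The parameter count of the spatial part shrinks by a factor of g, which gives a quantitative feel for why more segments use parameters more sparingly.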
Having made these observations, we suggest that it may be possible to improve upon the Inception family of architectures by replacing Inception modules with depthwise separable convolutions, i.e. by building models that would be stacks of depthwise separable convolutions. This is made practical by the efficient depthwise convolution implementation available in TensorFlow. In what follows, we present a convolutional neural network architecture based on this idea, with a similar number of parameters as Inception V3, and we evaluate its performance against Inception V3 on two large-scale image classification tasks.
# 2. Prior work
The present work relies heavily on prior efforts in the following areas:
• Convolutional neural networks [10, 9, 25], in particular the VGG-16 architecture [18], which is schematically similar to our proposed architecture in a few respects.

• The Inception architecture family of convolutional neural networks [20, 7, 21, 19], which first demonstrated the advantages of factoring convolutions into multiple branches operating successively on channels and then on space.

• Depthwise separable convolutions, which our proposed architecture is entirely based upon. While the use of spatially separable convolutions in neural networks has a long history, going back to at least 2012 [12] (but likely even earlier), the depthwise version is more recent. Laurent Sifre developed depthwise separable convolutions during an internship at Google Brain in 2013, and used them in AlexNet to obtain small gains in accuracy and large gains in convergence speed, as well as a significant reduction in model size. An overview of his work was first made public in a presentation at ICLR 2014 [23]. Detailed experimental results are reported in Sifre's thesis, section 6.2 [15]. This initial work on depthwise separable convolutions was inspired by prior research from Sifre and Mallat on transformation-invariant scattering [16, 15]. Later, a depthwise separable convolution was used as the first layer of Inception V1 and Inception V2 [20, 7]. Within Google, Andrew Howard [6] has introduced efficient mobile models called MobileNets using depthwise separable convolutions. Jin et al. in 2014 [8] and Wang et al. in 2016 [24] also did related work aiming at reducing the size and computational cost of convolutional neural networks using separable convolutions. Additionally, our work is only possible due to the inclusion of an efficient implementation of depthwise separable convolutions in the TensorFlow framework [1].

• Residual connections, introduced by He et al. in [4], which our proposed architecture uses extensively.
# 3. The Xception architecture
We propose a convolutional neural network architecture based entirely on depthwise separable convolution layers. In effect, we make the following hypothesis: that the mapping of cross-channel correlations and spatial correlations in the feature maps of convolutional neural networks can be entirely decoupled. Because this hypothesis is a stronger version of the hypothesis underlying the Inception architecture, we name our proposed architecture Xception, which stands for "Extreme Inception".

A complete description of the specifications of the network is given in figure 5. The Xception architecture has 36 convolutional layers forming the feature extraction base of the network. In our experimental evaluation we will exclusively investigate image classification and therefore our convolutional base will be followed by a logistic regression layer. Optionally one may insert fully-connected layers before the logistic regression layer, which is explored in the experimental evaluation section (in particular, see figures 7 and 8). The 36 convolutional layers are structured into 14 modules, all of which have linear residual connections around them, except for the first and last modules.

In short, the Xception architecture is a linear stack of depthwise separable convolution layers with residual connections. This makes the architecture very easy to define and modify; it takes only 30 to 40 lines of code using a high-level library such as Keras [2] or TensorFlow-Slim [17], not unlike an architecture such as VGG-16 [18], but rather unlike architectures such as Inception V2 or V3, which are far more complex to define. An open-source implementation of Xception using Keras and TensorFlow is provided as part of the Keras Applications module², under the MIT license.
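As an illustration of how compact such a definition can be, here is a hedged sketch (our own, assuming TensorFlow 2.x / Keras) of one middle-flow module; batch normalization is omitted for brevity even though the full architecture applies it after every convolution:

```python
import tensorflow as tf
from tensorflow.keras import layers

def middle_flow_module(x):
    """One middle-flow module: three ReLU + SeparableConv blocks
    wrapped in a linear residual connection."""
    residual = x
    for _ in range(3):
        x = layers.ReLU()(x)
        x = layers.SeparableConv2D(728, 3, padding="same")(x)
    return layers.Add()([x, residual])

inputs = tf.keras.Input(shape=(19, 19, 728))
outputs = inputs
for _ in range(8):  # the middle flow is repeated eight times
    outputs = middle_flow_module(outputs)
```

A pretrained version of the full architecture is also exposed as tf.keras.applications.Xception.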
# 4. Experimental evaluation
We choose to compare Xception to the Inception V3 architecture, due to their similarity of scale: Xception and Inception V3 have nearly the same number of parameters (table 3), and thus any performance gap could not be attributed to a difference in network capacity. We conduct our comparison on two image classification tasks: one is the well-known 1000-class single-label classification task on the ImageNet dataset [14], and the other is a 17,000-class multi-label classification task on the large-scale JFT dataset.
# 4.1. The JFT dataset
JFT is an internal Google dataset for large-scale image classification, first introduced by Hinton et al. in [5], which comprises over 350 million high-resolution images annotated with labels from a set of 17,000 classes. To evaluate the performance of a model trained on JFT, we use an auxiliary dataset, FastEval14k.

FastEval14k is a dataset of 14,000 images with dense annotations from about 6,000 classes (36.5 labels per image on average). On this dataset we evaluate performance using Mean Average Precision for the top 100 predictions (MAP@100), and we weight the contribution of each class to MAP@100 with a score estimating how common (and therefore important) the class is among social media images. This evaluation procedure is meant to capture performance on frequently occurring labels from social media, which is crucial for production models at Google.
# 4.2. Optimization configuration

A different optimization configuration was used for ImageNet and JFT:
• On ImageNet:

– Optimizer: SGD
– Momentum: 0.9
– Initial learning rate: 0.045
– Learning rate decay: decay of rate 0.94 every 2 epochs

• On JFT:

– Optimizer: RMSprop [22]
– Momentum: 0.9
– Initial learning rate: 0.001
– Learning rate decay: decay of rate 0.9 every 3,000,000 samples

²https://keras.io/applications/#xception
For both datasets, the exact same optimization configuration was used for both Xception and Inception V3. Note that this configuration was tuned for best performance with Inception V3; we did not attempt to tune optimization hyperparameters for Xception. Since the networks have different training profiles (figure 6), this may be suboptimal, especially on the ImageNet dataset, on which the optimization configuration used had been carefully tuned for Inception V3.
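For concreteness, the ImageNet schedule above could be expressed as follows (a sketch assuming TensorFlow 2.x; steps_per_epoch is a hypothetical placeholder that depends on the batch size and dataset):

```python
import tensorflow as tf

steps_per_epoch = 10000  # hypothetical value, depends on batch size and data
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.045,
    decay_steps=2 * steps_per_epoch,  # decay every 2 epochs
    decay_rate=0.94,
    staircase=True,                   # step-wise decay, as described above
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)
```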
Additionally, all models were evaluated using Polyak averaging [13] at inference time.
# 4.3. Regularization configuration
• Weight decay: The Inception V3 model uses a weight decay (L2 regularization) rate of 4e-5, which has been carefully tuned for performance on ImageNet. We found this rate to be quite suboptimal for Xception and instead settled for 1e-5 (see the sketch after this list). We did not perform an extensive search for the optimal weight decay rate. The same weight decay rates were used both for the ImageNet experiments and the JFT experiments.

• Dropout: For the ImageNet experiments, both models include a dropout layer of rate 0.5 before the logistic regression layer. For the JFT experiments, no dropout was included due to the large size of the dataset, which made overfitting unlikely in any reasonable amount of time.

• Auxiliary loss tower: The Inception V3 architecture may optionally include an auxiliary tower which backpropagates the classification loss earlier in the network, serving as an additional regularization mechanism. For simplicity, we choose not to include this auxiliary tower in any of our models.
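A minimal sketch (our own illustration, assuming TensorFlow 2.x / Keras; the layer shapes are arbitrary) of how the weight decay and dropout settings listed above translate into layer arguments:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

l2 = regularizers.l2(1e-5)  # weight decay rate chosen for Xception
x = tf.keras.Input(shape=(19, 19, 728))
h = layers.SeparableConv2D(1024, 3, padding="same",
                           depthwise_regularizer=l2,
                           pointwise_regularizer=l2)(x)
h = layers.GlobalAveragePooling2D()(h)
h = layers.Dropout(0.5)(h)                       # ImageNet experiments only
logits = layers.Dense(1000, kernel_regularizer=l2)(h)
```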
# 4.4. Training infrastructure
All networks were implemented using the TensorFlow framework [1] and trained on 60 NVIDIA K80 GPUs each. For the ImageNet experiments, we used data parallelism with synchronous gradient descent to achieve the best classification performance, while for JFT we used asynchronous gradient descent so as to speed up training. The ImageNet experiments took approximately 3 days each, while the JFT experiments took over one month each. The JFT models were not trained to full convergence, which would have taken over three months per experiment.
Figure 5. The Xception architecture: the data first goes through the entry flow, then through the middle flow which is repeated eight times, and finally through the exit flow. Note that all Convolution and SeparableConvolution layers are followed by batch normalization [7] (not included in the diagram). All SeparableConvolution layers use a depth multiplier of 1 (no depth expansion).
Entry flow: 299x299x3 images -> Conv 32, 3x3, stride 2x2 -> ReLU -> Conv 64, 3x3 -> ReLU, followed by three residual blocks ([SeparableConv 128, 3x3] x2, [SeparableConv 256, 3x3] x2, [SeparableConv 728, 3x3] x2), each closed by MaxPooling 3x3, stride 2x2 with a Conv 1x1, stride 2x2 shortcut, yielding 19x19x728 feature maps.

Middle flow: on 19x19x728 feature maps, one module is [ReLU -> SeparableConv 728, 3x3] x3 with a residual connection; the module is repeated 8 times.

Exit flow: ReLU -> SeparableConv 728, 3x3 -> ReLU -> SeparableConv 1024, 3x3 -> MaxPooling 3x3, stride 2x2 (with a Conv 1x1, stride 2x2 shortcut), then SeparableConv 1536, 3x3 -> ReLU -> SeparableConv 2048, 3x3 -> ReLU -> GlobalAveragePooling -> 2048-dimensional vectors -> optional fully-connected layer(s) -> logistic regression.
# 4.5. Comparison with Inception V3
# 4.5.1 Classification performance

All evaluations were run with a single crop of the input images and a single model. ImageNet results are reported on the validation set rather than the test set (i.e. on the non-blacklisted images from the validation set of ILSVRC 2012). JFT results are reported after 30 million iterations (one month of training) rather than after full convergence. Results are provided in table 1 and table 2, as well as figure 6, figure 7, and figure 8. On JFT, we tested both versions of our networks that did not include any fully-connected layers, and versions that included two fully-connected layers of 4096 units each before the logistic regression layer.

On ImageNet, Xception shows marginally better results than Inception V3. On JFT, Xception shows a 4.3% relative improvement on the FastEval14k MAP@100 metric. We also note that Xception outperforms ImageNet results reported by He et al. for ResNet-50, ResNet-101 and ResNet-152 [4].
Table 1. Classification performance comparison on ImageNet (single crop, single model). VGG-16 and ResNet-152 numbers are only included as a reminder. The version of Inception V3 being benchmarked does not include the auxiliary tower.

Model          Top-1 accuracy   Top-5 accuracy
VGG-16         0.715            0.901
ResNet-152     0.770            0.933
Inception V3   0.782            0.941
Xception       0.790            0.945
The Xception architecture shows a much larger performance improvement on the JFT dataset compared to the ImageNet dataset. We believe this may be due to the fact that Inception V3 was developed with a focus on ImageNet and may thus be by design over-fit to this specific task. On the other hand, neither architecture was tuned for JFT. It is likely that a search for better hyperparameters for Xception on ImageNet (in particular optimization parameters and reg-
Table 2. Classification performance comparison on JFT (single crop, single model).

Model                          FastEval14k MAP@100
Inception V3 - no FC layers    6.36
Xception - no FC layers        6.70
Inception V3 with FC layers    6.50
Xception with FC layers        6.78
Figure 6. Training profile on ImageNet

Figure 7. Training profile on JFT, without fully-connected layers
ularization parameters) would yield significant additional improvement.
# 4.5.2 Size and speed
Table 3. Size and training speed comparison.

Model          Parameter count   Steps/second
Inception V3   23,626,728        31
Xception       22,855,952        28
In table 3 we compare the size and speed of Inception
Figure 8. Training profile on JFT, with fully-connected layers
V3 and Xception. Parameter count is reported on ImageNet (1000 classes, no fully-connected layers) and the number of training steps (gradient updates) per second is reported on ImageNet with 60 K80 GPUs running synchronous gradient descent. Both architectures have approximately the same size (within 3.5%), and Xception is marginally slower. We expect that engineering optimizations at the level of the depthwise convolution operations can make Xception faster than Inception V3 in the near future. The fact that both architectures have almost the same number of parameters indicates that the improvement seen on ImageNet and JFT does not come from added capacity but rather from a more efficient use of the model parameters.
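The efficiency argument can be sanity-checked with simple per-layer parameter arithmetic (our own illustration, not from the paper's code; biases are ignored): a 3x3 regular convolution over c input and output channels costs 9c² weights, whereas the separable version costs 9c for the depthwise step plus c² for the pointwise step:

```python
def regular_params(c: int, k: int = 3) -> int:
    return k * k * c * c            # full k x k convolution

def separable_params(c: int, k: int = 3) -> int:
    return k * k * c + c * c        # depthwise step + pointwise step

c = 728  # channel width of the Xception middle flow
print(regular_params(c), separable_params(c))   # 4769856 vs 536536
print(separable_params(c) / regular_params(c))  # ~0.11, roughly a 9x saving
```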
# 4.6. Effect of the residual connections
Figure 9. Training profile with and without residual connections.
To quantify the benefits of residual connections in the Xception architecture, we benchmarked on ImageNet a modified version of Xception that does not include any residual connections. Results are shown in figure 9. Residual connections are clearly essential in helping with convergence, both in terms of speed and final classification performance. However we will note that benchmarking the non-residual model with the same optimization configuration as the residual model may be uncharitable and that better optimization configurations might yield more competitive results.

Additionally, let us note that this result merely shows the importance of residual connections for this specific architecture, and that residual connections are in no way required in order to build models that are stacks of depthwise separable convolutions. We also obtained excellent results with non-residual VGG-style models where all convolution layers were replaced with depthwise separable convolutions (with a depth multiplier of 1), superior to Inception V3 on JFT at equal parameter count.
# 4.7 Effect of an intermediate activation after pointwise convolutions

Figure 10. Training profile with different activations between the depthwise and pointwise operations of the separable convolution layers.
We mentioned earlier that the analogy between depthwise separable convolutions and Inception modules suggests that depthwise separable convolutions should potentially include a non-linearity between the depthwise and pointwise operations. In the experiments reported so far, no such non-linearity was included. However, we also experimentally tested the inclusion of either ReLU or ELU [3] as an intermediate non-linearity. Results are reported on ImageNet in figure 10, and show that the absence of any non-linearity leads to both faster convergence and better final performance.

This is a remarkable observation, since Szegedy et al. report the opposite result in [21] for Inception modules. It may be that the depth of the intermediate feature spaces on which spatial convolutions are applied is critical to the usefulness of the non-linearity: for deep feature spaces (e.g. those found in Inception modules) the non-linearity is helpful, but for shallow ones (e.g. the 1-channel deep feature spaces of depthwise separable convolutions) it becomes harmful, possibly due to a loss of information.
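The two variants compared in figure 10 can be sketched as follows (our own illustration, assuming TensorFlow 2.x / Keras, written with an explicit depthwise-then-pointwise composition since, to our knowledge, the fused SeparableConv2D layer does not expose an intermediate activation):

```python
import tensorflow as tf
from tensorflow.keras import layers

def separable(x, filters, intermediate_activation=None):
    """Depthwise then pointwise convolution, with an optional
    non-linearity in between the two steps."""
    x = layers.DepthwiseConv2D(3, padding="same")(x)
    if intermediate_activation is not None:
        x = layers.Activation(intermediate_activation)(x)  # e.g. "relu" or "elu"
    return layers.Conv2D(filters, 1)(x)

inputs = tf.keras.Input(shape=(19, 19, 728))
no_act = separable(inputs, 728)              # no intermediate activation (best)
with_relu = separable(inputs, 728, "relu")   # intermediate ReLU variant
```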
# 5. Future directions
We noted earlier the existence of a discrete spectrum between regular convolutions and depthwise separable convolutions, parametrized by the number of independent channel-space segments used for performing spatial convolutions. Inception modules are one point on this spectrum. We showed in our empirical evaluation that the extreme formulation of an Inception module, the depthwise separable convolution, may have advantages over a regular Inception module. However, there is no reason to believe that depthwise separable convolutions are optimal. It may be that intermediate points on the spectrum, lying between regular Inception modules and depthwise separable convolutions, hold further advantages. This question is left for future investigation.
# 6. Conclusions
We showed how convolutions and depthwise separable convolutions lie at both extremes of a discrete spectrum, with Inception modules being an intermediate point in between. This observation has led us to propose replacing Inception modules with depthwise separable convolutions in neural computer vision architectures. We presented a novel architecture based on this idea, named Xception, which has a similar parameter count as Inception V3. Compared to Inception V3, Xception shows small gains in classification performance on the ImageNet dataset and large gains on the JFT dataset. We expect depthwise separable convolutions to become a cornerstone of convolutional neural network architecture design in the future, since they offer similar properties as Inception modules, yet are as easy to use as regular convolution layers.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. [2] F. Chollet. Keras. https://github.com/fchollet/keras, 2015. [3] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
[4] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[5] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network, 2015.
[6] A. Howard. Mobilenets: Efficient convolutional neural networks for mobile vision applications. Forthcoming.

[7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015.

[8] J. Jin, A. Dundar, and E. Culurciello. Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474, 2014.

[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.

[10] Y. LeCun, L. Jackel, L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, et al. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural networks: the statistical mechanics perspective, 261:276, 1995.
[11] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[12] F. Mamalet and C. Garcia. Simplifying ConvNets for Fast Learning. In International Conference on Artificial Neural Networks (ICANN 2012), pages 58–65. Springer, 2012.

[13] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30(4):838–855, July 1992.

[14] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. 2014.

[15] L. Sifre. Rigid-motion scattering for image classification, 2014. Ph.D. thesis.

[16] L. Sifre and S. Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, June 23-28, 2013, pages 1233–1240, 2013.
[17] N. Silberman and S. Guadarrama. Tf-slim, 2016. [18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.

[20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. [21] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[22] T. Tieleman and G. Hinton. Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05.
[23] V. Vanhoucke. Learning visual representations at scale. ICLR, 2014.
[24] M. Wang, B. Liu, and H. Foroosh. Factorized convolutional neural networks. arXiv preprint arXiv:1608.04337, 2016.
[25] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pages 818–833. Springer, 2014. | {
"id": "1608.04337"
} |
1610.01644 | Understanding intermediate layers using linear classifier probes | Neural network models have a reputation for being black boxes. We propose to
monitor the features at every layer of a model and measure how suitable they
are for classification. We use linear classifiers, which we refer to as
"probes", trained entirely independently of the model itself.
This helps us better understand the roles and dynamics of the intermediate
layers. We demonstrate how this can be used to develop a better intuition about
models and to diagnose potential problems.
We apply this technique to the popular models Inception v3 and Resnet-50.
Among other things, we observe experimentally that the linear separability of
features increase monotonically along the depth of the model. | http://arxiv.org/pdf/1610.01644 | Guillaume Alain, Yoshua Bengio | stat.ML, cs.LG | null | null | stat.ML | 20161005 | 20181122 |
# Understanding intermediate layers using linear classifier probes
# Guillaume Alain Mila, University of Montreal guillaume.alain.umontreal@gmail.com
Yoshua Bengio Mila, University of Montreal
# Abstract
Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as "probes", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and Resnet-50. Among other things, we observe experimentally that the linear separability of features increases monotonically along the depth of the model.
# 1 Introduction
The recent history of deep neural networks features an impressive number of new methods and technological improvements to allow the training of deeper and more powerful networks.
Deep neural networks still carry some of their original reputation of being black boxes, but many efforts have been made to understand better what they do, what is the role of each layer (Yosinski et al., 2014), how we can interpret them (Zeiler and Fergus, 2014) and how we can fool them (Biggio et al., 2013; Szegedy et al., 2013).
In this paper, we take the features of each layer separately and we fit a linear classifier to predict the original classes. We refer to these linear classifiers as "probes" and we make sure that we never influence the model itself by taking measurements with probes. We suggest that the reader think of those probes as thermometers used to measure the temperature simultaneously at many different locations.

More broadly speaking, the core of the idea is that there are interesting quantities that we can report based on the features of many independent layers if we allow the "measuring instruments" to have their own trainable parameters (provided that they do not influence the model itself).

In the context of this paper, we are working with convolutional neural networks on image classification tasks on the MNIST and ImageNet (Russakovsky et al., 2015) datasets. Naturally, we fit linear classifier probes to predict those classes, but in general it is possible to monitor the performance of the features on any other objective.
Our contributions in this paper are twofold.
Firstly, we introduce these "probes" as a general tool to understand deep neural networks. We show how they can be used to characterize different layers, to debug bad models, or to get a sense of how the training is progressing in a well-behaved model. While our proposed idea shares commonalities with Montavon et al. (2011), our analysis is very different.

Secondly, we observe that the measurements of the probes are surprisingly monotonic, which means that the degree of linear separability of the features of layers increases as we reach the deeper layers. The level of regularity with which this happens is surprising given that this is not technically part of the training objective. This helps to understand the dynamics of deep neural networks.
# 2 Related Work
Many researchers have come up with techniques to analyze certain aspects of neural networks which may guide our intuition and provide a partial explanation as to how they work.
In this section we will provide a survey of the literature on the subject, with a little more focus on papers related to our current work.
# 2.1 Linear classification with kernel PCA
In our paper we investigate the linear separability of the features found at intermediate layers of a deep neural network.
A similar starting point is presented by Montavon et al. (2011). In that particular case, the authors use kernel PCA to project the features of a given layer onto a new representation which will then be used to fit the best linear classifier. They use a radial basis function as kernel, and they choose to project the features of individual layers by using the d leading eigenvectors of the kernel PCA decomposition. They investigate the effects that d has on the quality of the linear classifier.

Naturally, for a sufficiently large d, it would be possible to overfit on the training set (given how easy this is with a radial basis function), so they consider the situation where d is relatively small. They demonstrate that, for deeper layers in a neural network, they can achieve good performance with smaller d. This suggests that the features of the original convolutional neural network are indeed more "abstract" as we go deeper, which corresponds to the general intuition shared by many researchers.

They explore convolutional networks of limited depth with a restricted subset of 10k training samples of MNIST and CIFAR-10.
# 2.2 Generalization and transferability of layers
There are good arguments to support the claim that the first layers of a convolution network for image recognition contain filters that are relatively "general", in the sense that they would work great even if we switched to an entirely different dataset of images. The last layers are specific to the dataset being used, and have to be retrained when using a different dataset. In Yosinski et al. (2014) the authors try to pinpoint the layer at which this transition occurs, but they show that the exact transition is spread across multiple layers. In Donahue et al. (2014) the authors study the transfer of features from the last few layers of a model to a novel generic task. In Zeiler and Fergus (2014) the authors show that the filters are picking up certain patterns that make sense to us visually, and they show a method to visually inspect the filters as input images.
# 2.3 Relevance Propagation
In Bach et al. (2015), the authors introduce the idea of Relevance Propagation as a way to identify which pixels of the input space are the most important to the classifier on the final layer. Their approach frames the "relevance" as a kind of quantity that is to be preserved across the layers, as a sort of shared responsibility to be divided among the features of a given layer.
In Binder et al. (2016) the authors apply the concept of Relevance Propagation to a larger family of models. Among other things, they provide a nice experiment where they study the effects of corrupting the pixels deemed the most relevant, and they show how this affects performance more than corrupting randomly-selected pixels (see Figure 2 of their paper). See also Lapuschkin et al. (2016). Other research dealing with Relevance Propagation includes Arras et al. (2017) where this is applied to RNN in text.
We would also note that a good number of papers on interpretability of neural networks deal with "interpretations" taking the form of regions of the original image being identified, or where the
pixels in the original image receive a certain value of how relevant they are (e.g. a heat map of relevance).
In those cases we rely on the human user to parse the regions of the image with their vision so as to determine whether the region indeed makes sense or whether the information contained within is irrelevant to the task at hand. This is analogous to the way that image-captioning attention (Xu et al., 2015) can highlight portions of the input image that inspired specific segments of the caption.

An interesting approach is presented in Mahendran and Vedaldi (2015, 2016); Dosovitskiy and Brox (2016), where the authors analyze the set of "equivalent" inputs in the sense that some of the features at a given layer should be preserved. Given a layer to study, they apply a regularizer (e.g. total variation) and use gradient descent in order to reconstruct the pre-image that yields the same features at that layer, but for which the regularizer would be minimized. This procedure yields pre-images that are of the same format as the input image, and which can be used to get a sense of what components of the original image are preserved. For certain tasks, one may be surprised as to how many details of the input image are being completely discarded by the time we reach the fully-connected layers at the end of a convolutional neural network.
# 2.4 SVCCA
In Raghu et al. (2017a,b) the authors study the question of whether neural networks are trained from the first to the last layer, or the other way around (i.e. "bottom up" vs "top down"). The concept is rather intuitive, but it still requires a proper definition of what they mean. They use Canonical Correlation Analysis (CCA) to compare two instances of a given model trained separately. Given that two different instances of the same model might assign entirely different roles to their neurons (on corresponding layers), this is a comparison that is normally impossible to even attempt.

On one side, they take a model that has already been optimized. On the other side, they take multiple snapshots of a model during training. Every layer of one model is being compared with every other layer of the other. The values computed by CCA allow them to report the correlation between every pair of layers. This shows how quickly a given layer of the model being trained is going to achieve a configuration equivalent to the one of the optimized model. They find that the early layers reach their final configuration, so to speak, much earlier than layers downstream.

Given that any two sets of features can be compared using CCA, they also compare the correlation between any intermediate layer and the ground truth. This gives a sense of how easy it would be to predict the target label using the features of any intermediate layer instead of only using the last layer (as convnets usually do). Refer to Figure 6 of Raghu et al. (2017b) for more details. This aspect of Raghu et al. (2017b) is very similar to our own previous work (Alain and Bengio, 2016).
# 3 Monitoring with probes
# 3.1 Information theory, and monotonic improvements to linear separability
The initial motivation for linear classifier probes was related to a reflection about the nature of information (in the entropy sense of the word) passing from one layer to the next.

New information is never added as we propagate forward in a model. If we consider the typical image classification problem, the representation of the data is transformed over the course of many layers, to be finally used by a linear classifier at the last layer.

In the case of a binary classifier (say, detecting the presence or absence of a lion in a picture of the savannah, like in Figure 1), we could say that there was at most one bit of information to be uncovered in the original image. Lion or no lion? Here we are not interested in measuring the information about the pixels of an image that we want to reconstruct. That would be a different problem.
This is illustrated in a formal way by the Data Processing Inequality. It states that, for a set of three random variables satisfying the dependency
X → Y → Z
then we have that
I(X; Z) ≤ I(X; Y)

where I(X; Y) is the mutual information.
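As a toy numerical illustration of this inequality (our own example, assuming only NumPy, not taken from the paper), consider a fair bit X passed through two successive binary symmetric channels; each noisy step can only reduce the mutual information with X:

```python
import numpy as np

def mutual_information(p_xy):
    """Mutual information in bits from a joint distribution table."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

def bsc(eps):
    """Binary symmetric channel with flip probability eps."""
    return np.array([[1 - eps, eps], [eps, 1 - eps]])

p_x = np.array([0.5, 0.5])
p_xy = np.diag(p_x) @ bsc(0.1)   # joint of (X, Y)
p_xz = p_xy @ bsc(0.1)           # joint of (X, Z), valid since X -> Y -> Z

print(mutual_information(p_xy))  # I(X; Y), about 0.53 bits
print(mutual_information(p_xz))  # I(X; Z) <= I(X; Y), about 0.32 bits
```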
(a) hex dump of picture of a lion
(b) same lion in human-readable format
Figure 1: The hex dump represented at the left has more information contents than the image at the right. Only one of them can be processed by the human brain in time to save their lives. Computational convenience matters. Not just entropy.
The task of a deep neural network classifier is to come up with a representation for the final layer that can be easily fed to a linear classifier (i.e. the most elementary form of useful classifier). The cross-entropy loss applies a lot of pressure directly on the last layer to make it linearly separable. Any degree of linear separability in the intermediate layers happens only as a by-product.

On one hand, we have that every layer has less information than its parent layer. On the other hand, we observe experimentally in Sections 3.5, 4.1 and 4.2 that features from deeper layers work better with linear classifiers to predict the target labels. At first glance this might seem like a contradiction.

One of the important lessons is that neural networks are really about distilling computationally useful representations, and they are not about information contents as described by the field of Information Theory.
# 3.2 Linear classifier probes
Consider the common scenario in deep learning in which we are trying to classify the input data X to produce an output distribution over D classes. The last layer of the model is a densely-connected map to D values followed by a softmax, and we train by minimizing cross-entropy.
At every layer we can take the features h_k from that layer and try to predict the correct labels y using a linear classifier parameterized as

    f_k : H_k → [0, 1]^D
          h_k ↦ softmax(W h_k + b),

where h_k ∈ H_k are the features of hidden layer k, [0, 1]^D is the space of categorical distributions over the D target classes, and (W, b) are the probe weights and biases to be learned so as to minimize the usual cross-entropy loss.

Let L_k^{train} be the empirical loss of that linear classifier f_k evaluated over the training set. We can also define L_k^{valid} and L_k^{test} by exporting the same linear classifier to the validation and test sets.
Without making any assumptions about the model itself being trained, we can nevertheless assume that these f_k are themselves optimized so that, at any given time, they reflect the currently optimal thing that can be done with the features present.
We refer to those linear classifiers as "probes" in an effort to clarify our thinking about the model. These probes do not affect the model training. They only measure the level of linear separability of the features at a given layer. Blocking the backpropagation from the probes to the model itself can be achieved by using tf.stop_gradient in TensorFlow (or its Theano equivalent), or by managing the probe parameters separately from the model parameters.
Note that we can avoid the issue of local minima because training a linear classifier using softmax cross-entropy is a convex problem.
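A minimal sketch of such a probe (our own illustration, assuming TensorFlow 2.x; the feature shapes are placeholders), in which tf.stop_gradient blocks backpropagation from the probe into the model so that the probe measures linear separability without influencing training:

```python
import tensorflow as tf

class LinearProbe(tf.keras.layers.Layer):
    def __init__(self, num_classes):
        super().__init__()
        self.dense = tf.keras.layers.Dense(num_classes)  # computes W h_k + b

    def call(self, features):
        # The probe trains on frozen copies of the features: no gradient
        # flows back into the model that produced them.
        return self.dense(tf.stop_gradient(features))

# Usage: attach a probe to the (flattened) features h_k of layer k and
# minimize softmax cross-entropy on the probe logits with its own optimizer.
probe = LinearProbe(num_classes=10)
h_k = tf.random.normal([32, 512])   # placeholder intermediate features
logits = probe(h_k)
```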
In this paper, we study
• how L_k decreases as k increases (see Section 3.1),
• the usefulness of L_k as a diagnostic tool (see Section 5.1).
# 3.3 Practical concern: L_k^{train} vs L_k^{valid}
The reason why we care about optimality of the probes in Section 3.2 is because it abstracts away the problem of optimizing them. When a general function g(x) has a unique global minimum, we can talk about that minimum without ambiguity even though, in practice, we are probably going to use only a convenient approximation of the minimum.
This is acceptable in a context where we are seeking better intuition about deep learning models by using linear classifier probes. If a researcher judges that the measurements are useful to further their understanding of their model (and act on that intuition), then they should not worry too much about how close they are to optimality. This applies also to the question of whether we should prioritize L_k^{train} or L_k^{valid}, given that it might not always be easy to track L_k^{valid}.
Moreover, for the purposes of many of the experiments in this paper we chose to report the classification error instead of the cross-entropy, since this is ultimately often the quantity that matters the most. Reporting the top-5 classification error could also have been possible.
# 3.4 Practical concern: Dimension reduction on features
Another practical problem can arise when certain layers of a neural network have an exceedingly large quantity of features. The first few layers of Inception v3, for example, have a few million features when we multiply height, width and channels. This leads to parameters for a single probe taking upwards of a few gigabytes of storage, which is disproportionately large when we consider that the entire set of model parameters takes less space than that.

In those cases, we have three possible suggestions for trimming down the space of features on which we fit the probes.
• Use only a random subset of the features (but always the same ones). This is used on the Inception v3 model in Section 4.2.

• Project the features to a lower-dimensional space. Learn this mapping. This is probably a worse idea than it sounds because the projection matrix itself can take a lot of storage (even more than the probe parameters).

• When dealing with features in the form of images (height, width, channels), we can perform 2D pooling along the (height, width) of each channel, as in the sketch after this list. This reduces the number of features to the number of channels. This is used on the ResNet-50 model in Section 4.1.
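The pooled-probe option from the last bullet might look like this (our own sketch, assuming TensorFlow 2.x; the shapes are illustrative): global average pooling over (height, width) reduces an image-shaped feature map to one value per channel before fitting the probe, keeping probe parameters small for wide layers such as those of ResNet-50.

```python
import tensorflow as tf

features = tf.random.normal([32, 14, 14, 1024])   # placeholder (N, H, W, C)
pooled = tf.reduce_mean(features, axis=[1, 2])    # 2D pooling -> (N, C)
probe_logits = tf.keras.layers.Dense(1000)(tf.stop_gradient(pooled))
```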
In practice, when using linear classifier probes on any serious model (i.e. not MNIST) we have to choose a way to reduce the number of features used.

Note that we also want to avoid a situation where our probes are simply overfitting on the features because there are too many features. It was recently demonstrated that very large models can fit random labels on ImageNet (Zhang et al., 2016). This is a situation that we want to avoid because the probe measurements would be entirely meaningless in that situation. Dimensionality reduction helps with this concern.
# 3.5 Basic example on MNIST
In this section we run the MNIST convolutional model provided by the tensorflow/models github repository (image/mnist/convolutional.py). We selected that model for reproducibility and to demonstrate how to easily peek into popular models by using probes.
We start by sketching the model in Figure 2. We report the results at the beginning and the end of training in Figure 3. One of the interesting dynamics to be observed there is how useful the first
layers are, despite the fact that the model is completely untrained. Random projections can be useful to classify data, and this has been studied by others (Jarrett et al., 2009).
Figure 2: This graphical model represents the neural network that we are going to use for MNIST. The model could be written in a more compact form, but we represent it this way to expose all the locations where we are going to insert probes. The model itself is simply two convolutional layers followed by two fully-connected layers (one being the final classifier). However, we insert probes on each side of each convolution, activation function, and pooling function. This is a bit overzealous, but the small size of the model makes this relatively easy to do.
(a) After initialization, no training. (b) After training for 10 epochs.
Figure 3: We represent here the test prediction error for each probe, at the beginning and at the end of training. This measurement was obtained through early stopping based on a validation set of 10⁴ elements. The probes are prevented from overfitting the training data. We can see that, at the beginning of training (on the left), the randomly-initialized layers were still providing useful transformations. The test prediction error goes from 8% to 2% simply using those random features. The biggest impact comes from the first ReLU. At the end of training (on the right), the test prediction error is improving at every layer (with the exception of a minor kink on fc1_preact).
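For readers who want to reproduce this kind of experiment, the following is a minimal sketch of what inserting one probe amounts to, written in PyTorch for illustration rather than with the TensorFlow model above; all sizes and names are hypothetical. The probe is just a linear classifier fit on a stop-gradient copy of the intermediate features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbedNet(nn.Module):
    def __init__(self, feat_dim=128, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(784, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)
        self.probe = nn.Linear(feat_dim, n_classes)  # the probe

    def forward(self, x):
        h = self.body(x)
        logits = self.head(h)
        # .detach() plays the role of the "diode" of Appendix A: the
        # probe consumes the features but backpropagates nothing into
        # the main model.
        probe_logits = self.probe(h.detach())
        return logits, probe_logits

model = ProbedNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
logits, probe_logits = model(x)
# Both losses can be minimized jointly; the probe loss only updates
# model.probe because of the detach above.
loss = F.cross_entropy(logits, y) + F.cross_entropy(probe_logits, y)
opt.zero_grad(); loss.backward(); opt.step()
```

Because of the detach, minimizing the probe loss only trains the probe; the main model's training dynamics are untouched, which is the non-invasive property relied on throughout this paper.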
# 3.6 Other objectives
Note that it would be entirely possible to use linear classifier probes on a different set of labels. For the same reason that it is possible to transfer many layers from one vision task to another (e.g. with different classes), we are not limited to fitting probes using the same domain.
Inserting probes at many different layers of a model is essentially a way to ask the following question:

Is there any information about a given factor present in this part of the model?
# 4 Experiments with popular models
# 4.1 ResNet-50
The family of ResNet models (He et al., 2016) is characterized by their large quantities of residual layers, essentially mapping $x \mapsto x + r(x)$. They have been very successful, and there are various papers seeking to understand better how they work (Veit et al., 2016; Larsson et al., 2016; Singh et al., 2016).
Here we are going to show how linear classifier probes might be able to help us a little to shed some light into the ResNet-50 model. We used the pretrained model from the GitHub repo (fchollet/deep-learning-models) of the author of Keras (Chollet et al., 2015).
One of the questions that comes up when discussing ResNet models is whether the successive layers are essentially performing the same operation many times over, refining the representation just a little more each time, or whether there is a more fundamental change of representation happening.
In particular, we can point to certain places in ResNet-50 where the image size diminishes and we increase the number of channels. This happens at three places in the model (visible as topology transitions in Figure 4a).
| layer name | topology | probe valid prediction error |
|---|---|---|
| input_1 | (224, 224, 3) | 0.99 |
| add_1 | (28, 28, 256) | 0.94 |
| add_2 | (28, 28, 256) | 0.89 |
| add_3 | (28, 28, 256) | 0.88 |
| add_4 | (28, 28, 512) | 0.87 |
| add_5 | (28, 28, 512) | 0.82 |
| add_6 | (28, 28, 512) | 0.79 |
| add_7 | (28, 28, 512) | 0.76 |
| add_8 | (14, 14, 1024) | 0.77 |
| add_9 | (14, 14, 1024) | 0.69 |
| add_10 | (14, 14, 1024) | 0.67 |
| add_11 | (14, 14, 1024) | 0.62 |
| add_12 | (14, 14, 1024) | 0.57 |
| add_13 | (14, 14, 1024) | 0.51 |
| add_14 | (7, 7, 2048) | 0.41 |
| add_15 | (7, 7, 2048) | 0.39 |
| add_16 | (7, 7, 2048) | 0.31 |
(a) Validation errors for probes at different ResNet-50 layers. Pre-trained on the ImageNet dataset.
(b) Inserting probes at meaningful layers of ResNet-50. This plot shows the rightmost column of the table in Figure 4a. Reporting the validation error for probes (magenta) and comparing it with the validation error of the pre-trained model (green).
Figure 4: For the ResNet-50 model trained on ImageNet, we can see that deeper features are better at predicting the output classes. More importantly, the relationship between depth and validation prediction error is almost perfectly monotonic. This suggests a certain "greedy" aspect of the representations used in deep neural networks. This property is something that comes naturally as a result of conventional training, and it is not due to the insertion of probes in the model.
# 4.2 Inception v3
We have performed an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al., 2015; Russakovsky et al., 2015). We show using colors in Figure 5 how the predictive error of each layer can be measured using probes. This can be computed at many different times of training, but here we report only after minibatch 308230, which corresponds to about 2 weeks of training.
This model has a few particularities, one of which is that it features an auxiliary branch that contributes to training the model (it can be discarded afterwards, but does not have to be). We wanted to investigate whether this branch is "leading training", in the sense that its classifier might have a lower prediction error than the main head for the first part of the training.

This is something that we confirmed by looking at the prediction errors for the probes, but the difference was not very large. The auxiliary branch was ahead of the main branch by just a little.
The smooth gradient of colors in Figure 5 shows how the linear separability increases monotonically as we probe layers deeper into the network.
Refer to Appendix Section C for a comparison at four different moments of training, and for some more details about how we reduced the dimensionality of the features to make this more tractable.
[Figure 5 plot: probe training error (color scale from 0.0 to 1.0) across the Inception v3 graph at minibatch 308230, with the main head and auxiliary head marked.]
Figure 5: Inception v3 model after 2 weeks of training. Red is bad (high prediction error) and green/blue is good (low prediction error). The smooth color gradient shows a very gradual transition in the degree of linear separability (almost perfectly monotonic).
# 5 Diagnostics for failing models
# 5.1 Pathological behavior on skip connections
In this section we show an example of a situation where we can use probes to diagnose a training problem as it is happening.
We purposefully selected a model that was pathologically deep, so that it would fail to train under normal circumstances. We used 128 fully-connected layers of 128 hidden units to classify MNIST, which is not at all a model that we would recommend. We thought that something interesting might happen if we added a very long skip connection that bypasses the first half of the model completely (Figure 6a).

With that skip connection, the model became trainable through the usual SGD. Intuitively, we thought that the latter portion of the model would see use at first, but we did not know whether the first half of the model would then also become useful.
Using probes we show that this solution was not working as intended, because half of the model stays unused. The weights are not zero, but there is no useful signal passing through that segment. The skip connection left a dead segment and skipped over it.
The lesson that we want to show the reader is not that skip connections are bad. Our goal here is to show that linear classifier probes are a tool to understand what is happening internally in such situations. Sometimes the successful minimization of a loss fails to capture important details; a sketch of this pathological setup follows below.
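For concreteness, here is a minimal reconstruction of the pathological architecture described above (our own sketch in PyTorch; the depth and width follow the text, everything else is hypothetical):

```python
import torch
import torch.nn as nn

class DeepSkipNet(nn.Module):
    def __init__(self, depth=128, width=128, n_classes=10):
        super().__init__()
        self.embed = nn.Linear(784, width)
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU())
             for _ in range(depth)])
        self.head = nn.Linear(width, n_classes)
        self.mid = depth // 2  # where the long skip connection lands

    def forward(self, x):
        h0 = self.embed(x)
        h = h0
        for i, layer in enumerate(self.layers):
            if i == self.mid:
                # The skip connection bypasses the first half entirely;
                # probes reveal that this first half can stay "dead".
                h = h + h0
            h = layer(h)
        return self.head(h)
```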
# 6 Discussion and future work
We have presented a combination of both a small convnet on MNIST and the larger popular convnets Inception v3 and ResNet-50. It would be nice to continue this work and look at ResNet-101, ResNet-152, VGG-16 and VGG-19. A similar thing could be done with popular RNNs.

To apply linear classifier probes in a different context, we could also try any setting where either Generative Adversarial Networks (Goodfellow et al., 2014) or adversarial examples (Szegedy et al., 2013) are used.
(a) Model with 128 layers. A skip connection goes from the beginning straight to the middle of the graph.

(b) Probes after 500 minibatches.

(c) Probes after 2000 minibatches.

Figure 6: Pathological skip connection being diagnosed. Refer to Appendix Section A for explanations about the special notation for probes using the "diode" symbol.
The idea of multi-layer probes has been suggested to us on multiple occasions. It could be seen as a natural extension of the linear classifier probes. One downside to this idea is that we lose the convexity property of the probes. It might be worth pursuing in a particular setting, but as of now we feel that it is premature to start using multi-layer probes. This also leads to the convoluted idea of having a regular probe inside a multi-layer probe.

One completely new direction would be to train a model in a way that actively discourages certain internal layers from being useful to linear classifiers. What would be the consequences of this constraint? Would it handicap a given model, or would the model simply adjust without any trouble? At that point, we are no longer dealing with non-invasive probes, but feeding a strange kind of signal back to the model.

Finally, we think that it is rather interesting that the probe prediction errors are almost perfectly monotonically decreasing. We suspect that this warrants a deeper investigation into the reasons why it happens, and it may lead to the discovery of fundamental concepts for understanding deep neural networks better (in relation to their optimization). This is connected to the work done by Jastrzebski et al. (2017).
# 7 Conclusion
In this paper we introduced the concept of the linear classifier probe as a conceptual tool to better understand the dynamics inside a neural network and the role played by the individual intermediate layers.

We have observed experimentally that an interesting property holds: the level of linear separability increases monotonically as we go to deeper layers. This is purely an indirect consequence of enforcing this constraint on the last layer.
We have demonstrated how these probes can be used to identify certain problematic behaviors in models that might not be apparent when we traditionally have access to only the prediction loss and error.
We are now able to ask new questions and explore new areas.
We hope that the notions presented in this paper can contribute to the understanding of deep neural networks and guide the intuition of researchers that design them.
# Acknowledgments
Yoshua Bengio is a senior CIFAR Fellow. The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, FQRNT, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Thanks to Nicolas Ballas for fruitful discussions, to Reyhane Askari and Mohammad Pezeshki for proofreading and comments, and to all the reviewers for their comments.
# References
Alain, G. and Bengio, Y. (2016). Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644.

Arras, L., Montavon, G., Müller, K.-R., and Samek, W. (2017). Explaining recurrent neural network predictions in sentiment analysis. arXiv preprint arXiv:1706.07206.

Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7), e0130140.

Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., and Roli, F. (2013). Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 387–402. Springer.

Binder, A., Montavon, G., Lapuschkin, S., Müller, K.-R., and Samek, W. (2016). Layer-wise relevance propagation for neural networks with local renormalization layers. In International Conference on Artificial Neural Networks, pages 63–71. Springer.

Chollet, F. et al. (2015). Keras. https://github.com/fchollet/keras.

Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. (2014). Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, pages 647–655.

Dosovitskiy, A. and Brox, T. (2016). Inverting visual representations with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4829–4837.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.

He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.

Jarrett, K., Kavukcuoglu, K., LeCun, Y., et al. (2009). What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, pages 2146–2153. IEEE.

Jastrzebski, S., Arpit, D., Ballas, N., Verma, V., Che, T., and Bengio, Y. (2017). Residual connections encourage iterative inference. arXiv preprint arXiv:1710.04773.

Lapuschkin, S., Binder, A., Montavon, G., Müller, K.-R., and Samek, W. (2016). Analyzing classifiers: Fisher vectors and deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2912–2920.

Larsson, G., Maire, M., and Shakhnarovich, G. (2016). Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648.

Mahendran, A. and Vedaldi, A. (2015). Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5188–5196.

Mahendran, A. and Vedaldi, A. (2016). Visualizing deep convolutional neural networks using natural pre-images. International Journal of Computer Vision, 120(3), 233–255.

Montavon, G., Braun, M. L., and Müller, K.-R. (2011). Kernel analysis of deep networks. Journal of Machine Learning Research, 12(Sep), 2563–2581.

Raghu, M., Yosinski, J., and Sohl-Dickstein, J. (2017a). Bottom up or top down? Dynamics of deep representations via canonical correlation analysis. arXiv.
Raghu, M., Gilmer, J., Yosinski, J., and Sohl-Dickstein, J. (2017b). SVCCA: Singular vector canonical correlation analysis for deep understanding and improvement. arXiv preprint arXiv:1706.05806.

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3), 211–252.

Singh, S., Hoiem, D., and Forsyth, D. (2016). Swapout: Learning an ensemble of deep architectures. In Advances in Neural Information Processing Systems, pages 28–36.

Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9.

Veit, A., Wilber, M. J., and Belongie, S. (2016). Residual networks behave like ensembles of relatively shallow networks. In Advances in Neural Information Processing Systems, pages 550–558.

Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057.

Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328.

Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer.

Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.
# A Diode notation
We have the following suggestion for extending traditional graphical models to describe where probes are being inserted in a model. See Figure 7.
Due to the fact that probes do not contribute to backpropagation, but still consume the features during the feed-forward step, we thought that borrowing the diode symbol from electrical engineering might be a good idea. A diode is a one-way valve for electrical current.
This notation could be useful also outside of this context with probes, whenever we want to sketch a graphical model and highlight the fact that the gradient backpropagation signal is being blocked.
Figure 7: Probes being added to every layer of a model. These additional probes are not supposed to change the training of the model, so we add a little diode symbol through the arrows to indicate that the gradients will not backpropagate through those connections.
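In code, the diode corresponds to a stop-gradient operation. A tiny sketch (PyTorch's detach shown here for illustration; tf.stop_gradient plays the same role; all variable names are ours):

```python
import torch

w = torch.ones(3, requires_grad=True)   # stands in for main-model weights
v = torch.ones(3, requires_grad=True)   # stands in for probe weights
h = w * 2.0                             # intermediate features
loss = ((h.detach() * v) ** 2).sum()    # diode: features pass, gradient blocked
loss.backward()
print(w.grad)  # None -- nothing reached the main model
print(v.grad)  # populated -- only the probe was trained
```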
# B Training probes with ï¬nished model
Sometimes we do not care about measuring the probe losses/accuracy during training, but we have a model that is already trained and we want to report the measurements on that static model.
In that case, it is worth considering whether we really want to augment the model by adding the probes and training them by iterating through the training set. Sometimes the model itself is computationally expensive to run, and we can only do 150 images per second. If we have to do multiple passes over the training set in order to train probes, then it might be more efficient to run through the whole training set once and extract the features to the local hard drive. Experimentally, in the case of the pre-trained ResNet-50 model (Section 4.1), we found that we could process approximately 100 training samples per second when doing forward propagation, but we could run through 6000 training samples per second when reading from the local hard drive. This makes it a lot easier to do multiple passes over the training set.
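A minimal sketch of this caching trick, assuming a frozen feature extractor and hypothetical file names (our own PyTorch/NumPy rendering):

```python
import numpy as np
import torch

@torch.no_grad()
def dump_features(model, loader, path="features.npz"):
    # One expensive pass through the frozen model; probes can then be
    # trained from the cached arrays at hard-drive speed instead.
    feats, labels = [], []
    for x, y in loader:
        feats.append(model(x).cpu().numpy())
        labels.append(y.numpy())
    np.savez(path, feats=np.concatenate(feats), labels=np.concatenate(labels))

def load_features(path="features.npz"):
    data = np.load(path)
    return data["feats"], data["labels"]
```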
# C Inception v3
In Section 4.2 we showed results from an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al., 2015; Russakovsky et al., 2015). The results shown were taken from the last training step only.
Here we provide in Figure 8 a sketch of the original Inception v3 model, and in Figure 9 we show results from 4 particular moments during training. These are spread over the 2 weeks of training so that we can get a sense of progression.
Figure 8: Sketch of the Inception v3 model. Note the structure with the "auxiliary head" at the bottom, and the "inception modules" with a common topology represented as blocks that have 3 or 4 sub-branches.

As discussed in Section 3.4, we had to resort to a technique to limit the number of features used by the linear classifier probes. In this particular experiment, we had the most success by taking 1000 random features for each probe. This gives certain layers an unfair advantage if they start with 4000 features and we keep 1000, whereas in other cases the probe insertion point has 426,320 features and we keep 1000. There was no simple "fair" solution. That being said, 13 out of the 17 probes have more than 100,000 features, and 11 of those probes have more than 200,000 features, so things were relatively comparable.
[Figure 9 plots: probe training prediction error along the Inception v3 model, for the main head and auxiliary head, at four training checkpoints including minibatches 050389, 100876, and 308230.]
Figure 9: Inserting a probe at multiple moments during training the Inception v3 model on the ImageNet dataset. We represent here the prediction error evaluated on a random subset of 1000 features. As expected, at first all the probes have a 100% prediction error, but as training progresses we see that the model is getting better. Note that there are 1000 classes, so a prediction error of 50% is much better than a random guess. The auxiliary head, shown under the model, was observed to have a prediction error that was slightly better than the main head's. This is not necessarily a condition that will hold at the end of training, but merely an observation. Red is bad (high prediction error) and green/blue is good (low prediction error).
| {
"id": "1706.05806"
} |
1609.09106 | HyperNetworks | This work explores hypernetworks: an approach of using one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as a relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 |

arXiv:1609.09106v4 [cs.LG] 1 Dec 2016
# HYPERNETWORKS
David Ha∗, Andrew Dai, Quoc V. Le
Google Brain
{hadavid, adai, qvl}@google.com
# ABSTRACT
This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as a relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
# 1 INTRODUCTION
In this work, we consider an approach of using a small network (called a "hypernetwork") to generate the weights for a larger network (called a main network). The behavior of the main network is the same as that of any usual neural network: it learns to map some raw inputs to their desired targets; whereas the hypernetwork takes a set of inputs that contain information about the structure of the weights and generates the weights for that layer (see Figure 1).
[Figure 1 diagram: an input ("layer index and other information about the weight") feeds a hypernetwork, which emits the weights W1, W2 of the main network.]
Figure 1: A hypernetwork generates the weights for a feedforward network. Black connections and parameters are associated with the main network, whereas orange connections and parameters are associated with the hypernetwork.

HyperNEAT (Stanley et al., 2009) is an example of hypernetworks where the inputs are a set of virtual coordinates for each weight in the main network. In this work, we will focus on a more powerful approach where the input is an embedding vector that describes the entire weights of a given layer. Our embedding vectors can be fixed parameters that are also learned during end-to-end training, allowing approximate weight-sharing within a layer and across layers of the main network.
∗Work done as a member of the Google Brain Residency program (g.co/brainresidency).
In addition, our embedding vectors can also be generated dynamically by our hypernetwork, allowing the weights of a recurrent network to change over timesteps and also adapt to the input sequence.
We perform experiments to investigate the behaviors of hypernetworks in a range of contexts and find that hypernetworks mix well with other techniques such as batch normalization and layer normalization. Our main result is that hypernetworks can generate non-shared weights for LSTM that work better than the standard version of LSTM (Hochreiter & Schmidhuber, 1997). On language modelling tasks with the Character Penn Treebank and Hutter Prize Wikipedia datasets, hypernetworks for LSTM achieve near state-of-the-art results. On a handwriting generation task with the IAM handwriting dataset, hypernetworks for LSTM achieve high quantitative and qualitative results. On image classification with CIFAR-10, hypernetworks, when used to generate weights for a deep convnet (LeCun et al., 1990), obtain respectable results compared to state-of-the-art models while having fewer learnable parameters. In addition to these tasks, we show that hypernetworks for LSTM offer an increase in performance for large, production-level neural machine translation models.
# 2 MOTIVATION AND RELATED WORK
Our approach is inspired by methods in evolutionary computing, where it is difficult to directly operate in large search spaces consisting of millions of weight parameters. A more efficient method is to evolve a smaller network to generate the structure of weights for a larger network, so that the search is constrained within the much smaller weight space. An instance of this approach is the work on the HyperNEAT framework (Stanley et al., 2009). In the HyperNEAT framework, Compositional Pattern-Producing Networks (CPPNs) are evolved to define the weight structure of a much larger main network. Closely related to our approach is a simplified variation of HyperNEAT where the structure is fixed and the weights are evolved through the Discrete Cosine Transform (DCT), called Compressed Weight Search (Koutnik et al., 2010). Even more closely related to our approach are Differentiable Pattern Producing Networks (DPPNs), where the structure is evolved but the weights are learned (Fernando et al., 2016), and ACDC-Networks (Moczulski et al., 2015), where linear layers are compressed with DCT and the parameters are learned.

Most reported results using these methods, however, are at small scale, perhaps because they are both slow to train and require heuristics to be efficient. The main difference between our approach and HyperNEAT is that hypernetworks in our approach are trained end-to-end with gradient descent together with the main network, and therefore are more efficient.
In addition to end-to-end learning with gradient descent, our approach strikes a good balance between Compressed Weight Search and HyperNEAT in terms of model flexibility and training simplicity. First, it can be argued that the Discrete Cosine Transform used in Compressed Weight Search may be too simple, and using the DCT prior may not be suitable for many problems. Second, even though HyperNEAT is more flexible, evolving both the architecture and the weights in HyperNEAT is often an overkill for most practical problems.
Even before the work on HyperNEAT and DCT, Schmidhuber (1992; 1993) had suggested the concept of fast weights, in which one network can produce context-dependent weight changes for a second network. Small-scale experiments were conducted to demonstrate fast weights for feedforward networks at the time, but perhaps due to the lack of modern computational tools, the recurrent network version was mentioned mainly as a thought experiment (Schmidhuber, 1993). A subsequent work demonstrated practical applications of fast weights (Gomez & Schmidhuber, 2005), where a generator network is learnt through evolution to solve an artificial control problem. The concept of a network interacting with another network is central to the work of (Jaderberg et al., 2016; Andrychowicz et al., 2016), and especially (Denil et al., 2013; Yang et al., 2015; Bertinetto et al., 2016; De Brabandere et al., 2016), where certain parameters in a convolutional network are predicted by another network. These studies however did not explore the use of this approach for recurrent networks, which is a main contribution of our work.
The focus of this work is to generate weights for practical architectures, such as convolutional networks and recurrent networks, by taking layer embedding vectors as inputs. However, our hypernetworks can also be utilized to generate weights for a fully connected network by taking coordinate information as inputs, similar to DPPNs. Using this setting, hypernetworks can approximately recover the convolutional architecture without explicitly being told to do so, a result similar to that obtained by "Convolution by Evolution" (Fernando et al., 2016). This result is described in Appendix A.1.
# 3 METHODS
In this paper, we view convolutional networks and recurrent networks as two ends of a spectrum. On one end, recurrent networks can be seen as imposing weight-sharing across layers, which makes them inflexible and difficult to learn due to vanishing gradients. On the other end, convolutional networks enjoy the flexibility of not having weight-sharing, at the expense of having redundant parameters when the networks are deep. Hypernetworks can be seen as a form of relaxed weight-sharing, and therefore strike a balance between the two ends. See Appendix A.2 for conceptual diagrams of Static and Dynamic Hypernetworks.
3.1 STATIC HYPERNETWORK: A WEIGHT FACTORIZATION APPROACH FOR DEEP CONVOLUTIONAL NETWORKS
First we will describe how we construct a hypernetwork for the purpose of generating the weights of a feedforward convolutional network. In a typical deep convolutional network, the majority of model parameters are in the kernels of convolutional layers. Each kernel contains $N_{in} \times N_{out}$ filters and each filter has dimensions $f_{size} \times f_{size}$. Let us suppose that these parameters are stored in a matrix $K^j \in \mathbb{R}^{N_{in} f_{size} \times N_{out} f_{size}}$ for each layer $j = 1, \ldots, D$, where $D$ is the depth of the main convolutional network. For each layer $j$, the hypernetwork receives a layer embedding $z^j \in \mathbb{R}^{N_z}$ as input and predicts $K^j$, which can be generally written as follows:

$$K^j = g(z^j), \quad \forall j = 1, \ldots, D \qquad (1)$$

We note that this matrix $K^j$ can be broken down as $N_{in}$ slices of a smaller matrix with dimensions $f_{size} \times N_{out} f_{size}$; each slice of the kernel is denoted as $K_i^j \in \mathbb{R}^{f_{size} \times N_{out} f_{size}}$. Therefore, in our approach, the hypernetwork is a two-layer linear network. The first layer of the hypernetwork takes the input vector $z^j$ and linearly projects it into the $N_{in}$ inputs, with $N_{in}$ different matrices $W_i \in \mathbb{R}^{d \times N_z}$ and bias vectors $B_i \in \mathbb{R}^d$, where $d$ is the size of the hidden layer in the hypernetwork. For our purpose, we fix $d$ to be equal to $N_z$, although they can be different. The final layer of the hypernetwork is a linear operation which takes an input vector $a_i$ of size $d$ and linearly projects it into $K_i$ using a common tensor $W_{out} \in \mathbb{R}^{f_{size} \times N_{out} f_{size} \times d}$ and bias matrix $B_{out} \in \mathbb{R}^{f_{size} \times N_{out} f_{size}}$. The final kernel $K^j$ will be a concatenation of every $K_i^j$. Thus $g(z^j)$ can be written as follows:

$$a_i^j = W_i z^j + B_i, \quad \forall i = 1, \ldots, N_{in}, \; \forall j = 1, \ldots, D$$
$$K_i^j = \langle W_{out}, a_i^j \rangle + B_{out}, \quad \forall i = 1, \ldots, N_{in}, \; \forall j = 1, \ldots, D \qquad (2)$$
$$K^j = \big( K_1^j \;\; K_2^j \;\; \ldots \;\; K_i^j \;\; \ldots \;\; K_{N_{in}}^j \big), \quad \forall j = 1, \ldots, D$$

In our formulation, the learnable parameters are $W_i$, $B_i$, $W_{out}$, $B_{out}$ together with all the $z^j$'s. During inference, the model simply takes the layer embeddings $z^j$ learned during training to reproduce the kernel weights for layer $j$ in the main convolutional network. As a side effect, the number of learnable parameters in the hypernetwork will be much lower than in the main convolutional network. In fact, the total number of learnable parameters in the hypernetwork is $N_z \times D + d \times (N_z + 1) \times N_{in} + f_{size} \times N_{out} \times f_{size} \times (d + 1)$, compared to the $D \times N_{in} \times f_{size} \times N_{out} \times f_{size}$ parameters for the kernels of the main convolutional network.

Our approach of constructing $g(\cdot)$ is similar to the hierarchically semiseparable matrix approach proposed by Xia et al. (2010). Note that even though it may seem redundant to have a two-layered linear hypernetwork, since that is equivalent to a one-layered hypernetwork, the fact that $W_{out}$ and $B_{out}$ are shared makes our two-layered hypernetwork more compact than a one-layered one. More concretely, a one-layered hypernetwork would have $N_z \times N_{in} \times f_{size} \times N_{out} \times f_{size}$ learnable parameters, which is usually much bigger than what a two-layered hypernetwork has.
¹Tensor dot product between $W \in \mathbb{R}^{m \times n \times k}$ and $a \in \mathbb{R}^k$. Result $\langle W, a \rangle \in \mathbb{R}^{m \times n}$.
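To make the two-layer factorization of Equations 1-2 concrete, below is a minimal sketch in PyTorch (our illustration, not the authors' code; the dimensions follow the notation above):

```python
import torch
import torch.nn as nn

class StaticHyperNet(nn.Module):
    def __init__(self, n_z=64, d=64, n_in=16, n_out=16, f_size=3):
        super().__init__()
        self.n_in, self.n_out, self.f = n_in, n_out, f_size
        # First layer: one (d x N_z) projection per input slice (W_i, B_i).
        self.W = nn.Parameter(torch.randn(n_in, d, n_z) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_in, d))
        # Final layer: shared tensor W_out and bias B_out for every slice.
        self.W_out = nn.Parameter(torch.randn(f_size, n_out * f_size, d) * 0.01)
        self.B_out = nn.Parameter(torch.zeros(f_size, n_out * f_size))

    def forward(self, z):
        # a_i = W_i z + B_i for every slice i           -> (n_in, d)
        a = torch.einsum('idz,z->id', self.W, z) + self.B
        # K_i = <W_out, a_i> + B_out                    -> (n_in, f, n_out*f)
        K = torch.einsum('fod,id->ifo', self.W_out, a) + self.B_out
        # Stack the n_in slices into the full kernel; one reasonable
        # layout is the usual (f, f, n_in, n_out) convolution shape.
        K = K.reshape(self.n_in, self.f, self.n_out, self.f)
        return K.permute(1, 3, 0, 2)  # (f_size, f_size, n_in, n_out)

hyper = StaticHyperNet()
z = torch.randn(64)     # one learned embedding per layer (and per tile)
kernel = hyper(z)       # (3, 3, 16, 16) basic kernel; larger kernels are
                        # tiled from several such basic kernels (Eq. 3)
```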
The above formulation assumes that the network architecture consists of kernels with the same dimensions. In practice, deep convolutional network architectures consist of kernels of varying dimensions. Typically, in many designs, the kernel dimensions are integer multiples of a basic size; the residual network family of architectures (He et al., 2016a) that we will be experimenting with later is an example of such a design. In our experiments, although the kernels of a residual network do not share the same dimensions, the $N_{in}$ and $N_{out}$ dimensions for each kernel are integer multiples of 16. To modify our approach to work with this architecture, we have our hypernetwork generate kernels for this basic size of 16, and if we require a larger kernel for a certain layer, we concatenate multiple basic kernels together to form the larger kernel:

$$K_{32 \times 64} = \begin{pmatrix} K_1 & K_2 & K_3 & K_4 \\ K_5 & K_6 & K_7 & K_8 \end{pmatrix} \qquad (3)$$

For example, if we need to generate a kernel with $N_{in} = 32$ and $N_{out} = 64$, we will tile eight basic kernels together. Each basic kernel is generated by a unique $z$ embedding, hence the larger kernel will be expressed with eight embeddings. Therefore, kernels that are larger in size will require a proportionally larger number of embedding vectors. For visualizations of concatenated kernels, please see Appendix A.2.1. Figure 2 shows the similarity between kernels learned by a ConvNet to classify MNIST digits and those learned by a hypernetwork generating weights for a ConvNet.

Figure 2: Kernels learned by a ConvNet to classify MNIST digits (left). Kernels learned by a hypernetwork generating weights for the ConvNet (right).

3.2 DYNAMIC HYPERNETWORK: ADAPTIVE WEIGHT GENERATION FOR RECURRENT NETWORKS

In the previous section, we outlined a procedure for using a hypernetwork to generate the weights for a deep convolutional network. In this section, we will use a recurrent network to dynamically generate weights for another recurrent network, such that the weights can vary across many timesteps. In this context, hypernetworks are called dynamic hypernetworks, and can be seen as a form of relaxed weight-sharing, a compromise between the hard weight-sharing of traditional recurrent networks and the absence of weight-sharing in convolutional networks. This relaxed weight-sharing approach allows us to control the trade-off between the number of model parameters and model expressiveness.

Our dynamic hypernetworks can be used to generate weights for RNN and LSTM. When a hypernetwork is used to generate the weights for an RNN, it is called HyperRNN. At every time step $t$, a HyperRNN takes as input the concatenated vector of input $x_t$ and the hidden state of the main RNN $h_{t-1}$; it then generates as output the vector $\hat{h}_t$. This vector is then used to generate the weights for the main RNN at the same timestep. Both the HyperRNN and the main RNN are trained jointly with backpropagation and gradient descent. In the following, we will give a more formal description of the model.

The standard formulation of a Basic RNN is given by:

$$h_t = \phi(W_h h_{t-1} + W_x x_t + b) \qquad (4)$$
where $h_t$ is the hidden state, $\phi$ is a non-linear operation such as tanh or relu, and the weight matrices and bias $W_h \in \mathbb{R}^{N_h \times N_h}$, $W_x \in \mathbb{R}^{N_h \times N_x}$, $b \in \mathbb{R}^{N_h}$ are fixed at each timestep for an input sequence $X = (x_1, x_2, \ldots, x_T)$.
Figure 3: An overview of HyperRNNs. Black connections and parameters are associated with basic RNNs. Orange connections and parameters are introduced in this work and associated with HyperRNNs. Dotted arrows are for parameter generation.
In HyperRNN, we allow $W_h$ and $W_x$ to float over time by using a smaller hypernetwork to generate these parameters of the main RNN at each step (see Figure 3). More concretely, the parameters $W_h$, $W_x$, $b$ of the main RNN are different at different time steps, so that $h_t$ can now be computed as:

$$h_t = \phi\big(W_h(z_h)\,h_{t-1} + W_x(z_x)\,x_t + b(z_b)\big), \quad \text{where} \quad W_h(z_h) = \langle W_{hz}, z_h \rangle, \quad W_x(z_x) = \langle W_{xz}, z_x \rangle, \quad b(z_b) = W_{bz}\,z_b + b_0 \qquad (5)$$

where $W_{hz} \in \mathbb{R}^{N_h \times N_h \times N_z}$, $W_{xz} \in \mathbb{R}^{N_h \times N_x \times N_z}$, $W_{bz} \in \mathbb{R}^{N_h \times N_z}$, $b_0 \in \mathbb{R}^{N_h}$, and $z_h, z_x, z_b \in \mathbb{R}^{N_z}$. We use a recurrent hypernetwork to compute $z_h$, $z_x$ and $z_b$ as a function of $x_t$ and $h_{t-1}$:

$$\hat{x}_t = \begin{pmatrix} h_{t-1} \\ x_t \end{pmatrix}, \quad \hat{h}_t = \phi\big(W_{\hat{h}}\,\hat{h}_{t-1} + W_{\hat{x}}\,\hat{x}_t + \hat{b}\big), \quad z_h = W_{\hat{h}h}\,\hat{h}_{t-1} + b_{\hat{h}h}, \quad z_x = W_{\hat{h}x}\,\hat{h}_{t-1} + b_{\hat{h}x}, \quad z_b = W_{\hat{h}b}\,\hat{h}_{t-1} \qquad (6)$$

where $W_{\hat{h}} \in \mathbb{R}^{N_{\hat{h}} \times N_{\hat{h}}}$, $W_{\hat{x}} \in \mathbb{R}^{N_{\hat{h}} \times (N_h + N_x)}$, $\hat{b} \in \mathbb{R}^{N_{\hat{h}}}$, and $W_{\hat{h}h}, W_{\hat{h}x}, W_{\hat{h}b} \in \mathbb{R}^{N_z \times N_{\hat{h}}}$ and $b_{\hat{h}h}, b_{\hat{h}x} \in \mathbb{R}^{N_z}$. This HyperRNN cell has $N_{\hat{h}}$ hidden units. Typically $N_{\hat{h}}$ is much smaller than $N_h$.
As the embeddings $z_h$, $z_x$ and $z_b$ are of dimension $N_z$, which is typically smaller than the hidden state size $N_{\hat{h}}$ of the HyperRNN cell, a linear network is used to project the output of the HyperRNN cell into the embeddings in Equation 6. After the embeddings are computed, they will be used to generate the full weight matrices of the main RNN.

The above is a general formulation of a linear dynamic hypernetwork applied to RNNs. However, we found that in practice Equation 5 is often not practical, because the memory usage becomes too large for real problems. The amount of memory required by the system described in Equation 5 is $N_z$ times the memory of a Basic RNN, which limits the number of hidden units we can use in many practical applications.
We can modify the dynamic hypernetwork system described in Equation 5 so that it is much more scalable and memory efficient. Our approach borrows from the static hypernetwork section: we will use an intermediate hidden vector $d(z) \in \mathbb{R}^{N_h}$ to parametrize a weight matrix, where $d(z)$ will be a linear projection of $z$. To dynamically modify a weight matrix $W$, we will allow each row of this weight matrix to be scaled linearly by an element in vector $d$. We refer to $d$ as a weight scaling vector. Below is the modification to $W(z)$:

$$W(z) = W(d(z)) = \begin{pmatrix} d_0(z)\,W_0 \\ d_1(z)\,W_1 \\ \vdots \\ d_{N_h}(z)\,W_{N_h} \end{pmatrix} \qquad (7)$$

where $W_i$ denotes row $i$ of the matrix $W$.
While we sacrifice the ability to construct an entire weight matrix from a linear combination of $N_z$ matrices of the same size, we are able to linearly scale the rows of a single matrix with $N_z$ degrees of freedom. We find this to be a good trade-off, as this formulation of converting $W(z)$ into $W(d(z))$ decreases the amount of memory required by the dynamic hypernetwork. Rather than requiring $N_z$ times the memory of a Basic RNN, we will only be using memory on the order of $N_z$ times the number of hidden units, which is an acceptable amount of extra memory that is often available in many applications. In addition, the row-level operation in Equation 7 can be shown to be equivalent to an element-wise multiplication operator and is hence computationally much more efficient in practice. Below is the more memory-efficient version of the setup of Equation 5:

$$h_t = \phi\big(d_h(z_h) \odot W_h h_{t-1} + d_x(z_x) \odot W_x x_t + b(z_b)\big), \quad \text{where} \quad d_h(z_h) = W_{hz} z_h, \quad d_x(z_x) = W_{xz} z_x, \quad b(z_b) = W_{bz} z_b + b_0 \qquad (8)$$
This formulation of the HyperRNN has some similarities to Recurrent Batch Normalization (Cooijmans et al., 2016) and Layer Normalization (Ba et al., 2016). The central idea of those normalization techniques is to calculate the first two statistical moments of the inputs to the activation function, and to linearly scale the inputs to have zero mean and unit variance. An additional set of fixed parameters is learned to unscale the activations if required. This element-wise operation also has similarities to the Multiplicative RNN (Sutskever et al., 2011) and Multiplicative Integration RNN (Wu et al., 2016), where it was demonstrated that the multiplication operation encouraged better gradient flow.

Since the HyperRNN cell can indirectly modify the rows of each weight matrix and also the bias of the main RNN, it is implicitly also performing a linear scaling of the inputs to the activation function. The difference here is that the linear scaling parameters can be different for each timestep and also for each input sample. It will be interesting to compare the scaling policy that the HyperRNN cell comes up with to the hand-engineered statistical-moments-based scaling approaches. In addition, we note that the existing normalization approaches can work together with the HyperRNN approach, where the HyperRNN cell is tasked with discovering a better dynamical scaling policy to complement normalization. We will also explore this combination in our experiments.
The Long Short-Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) is usually better than the Basic RNN at storing and retrieving information over longer time steps. In our experiments, we will focus on this LSTM version of the HyperRNN, called the HyperLSTM. The details of the HyperLSTM architecture are described in Appendix A.2.2, along with specific implementation details in Appendix A.2.3. We want to know whether the HyperLSTM cell can learn a weight adjustment policy that can rival statistical moments-based normalization methods, hence Layer Normalization will be one of our baseline methods. We will therefore conduct experiments on two versions of HyperLSTM, one with and one without the application of Layer Normalization.
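A minimal sketch of one step of the memory-efficient HyperRNN of Equation 8 (our PyTorch rendering, not the authors' implementation; for simplicity the embeddings are computed from the current hypernetwork state, and all sizes are hypothetical):

```python
import torch
import torch.nn as nn

class HyperRNNCell(nn.Module):
    def __init__(self, n_x=64, n_h=128, n_hyper=32, n_z=4):
        super().__init__()
        # Main RNN weights, shared across time as usual.
        self.W_h = nn.Parameter(torch.randn(n_h, n_h) * 0.01)
        self.W_x = nn.Parameter(torch.randn(n_h, n_x) * 0.01)
        self.b0 = nn.Parameter(torch.zeros(n_h))
        # Small hypernetwork RNN that watches (h_{t-1}, x_t).
        self.hyper = nn.RNNCell(n_h + n_x, n_hyper)
        # Projections to the embeddings z_h, z_x, z_b ...
        self.to_zh = nn.Linear(n_hyper, n_z)
        self.to_zx = nn.Linear(n_hyper, n_z)
        self.to_zb = nn.Linear(n_hyper, n_z, bias=False)
        # ... and from embeddings to weight-scaling vectors / bias (Eq. 8).
        self.W_hz = nn.Linear(n_z, n_h, bias=False)   # d_h(z_h)
        self.W_xz = nn.Linear(n_z, n_h, bias=False)   # d_x(z_x)
        self.W_bz = nn.Linear(n_z, n_h, bias=False)   # b(z_b) minus b0

    def forward(self, x, h, h_hyper):
        h_hyper = self.hyper(torch.cat([h, x], dim=-1), h_hyper)
        d_h = self.W_hz(self.to_zh(h_hyper))  # row scales for W_h
        d_x = self.W_xz(self.to_zx(h_hyper))  # row scales for W_x
        b = self.W_bz(self.to_zb(h_hyper)) + self.b0
        # Eq. 8: element-wise scaling instead of generating full matrices.
        h_new = torch.tanh(d_h * (h @ self.W_h.T) + d_x * (x @ self.W_x.T) + b)
        return h_new, h_hyper

cell = HyperRNNCell()
x = torch.randn(8, 64)                 # a batch of inputs at one timestep
h, h_hyper = torch.zeros(8, 128), torch.zeros(8, 32)
h, h_hyper = cell(x, h, h_hyper)       # weights effectively vary per step
```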
# 4 EXPERIMENTS
In the following experiments, we will benchmark the performance of static hypernetworks on image recognition with MNIST and CIFAR-10, and the performance of dynamic hypernetworks on language modelling with the Penn Treebank and Hutter Prize Wikipedia (enwik8) datasets, and on handwriting generation.
4.1 USING STATIC HYPERNETWORKS TO GENERATE FILTERS FOR CONVOLUTIONAL NETWORKS AND MNIST
We start by applying a hypernetwork to generate the filters for a convolutional network on MNIST. Our main convolutional network is a small two-layer network and the hypernetwork is used to generate the kernel for the second layer (7×7×16×16), which contains the bulk of the trainable parameters in the system. Our weight matrix will be summarized by an embedding of size $N_z = 4$. See Appendix A.3.1 for further experimental setup details.

For this task, the hypernetwork achieved a test accuracy of 99.24%, comparable to the 99.28% for the conventional method. In this example, a kernel consisting of 12,544 weights is represented by an embedding vector of only 4 parameters, generated by a hypernetwork that has 4240 parameters. We can see the kernel produced by the hypernetwork in Figure 2. Now the question is whether we can also train a deep convolutional network, using a single hypernetwork generating a set of weights for each layer, on a dataset more challenging than MNIST.
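As a sanity check on these numbers (our own arithmetic, plugging $N_z = d = 4$, $N_{in} = N_{out} = 16$ and $f_{size} = 7$ into the parameter-count formula of Section 3.1):

$$N_{in} f_{size} \times N_{out} f_{size} = 112 \times 112 = 12544 \text{ kernel weights,}$$
$$\underbrace{d(N_z + 1)N_{in}}_{4 \cdot 5 \cdot 16 \,=\, 320} + \underbrace{f_{size} N_{out} f_{size}(d + 1)}_{7 \cdot 16 \cdot 7 \cdot 5 \,=\, 3920} = 4240 \text{ hypernetwork parameters,}$$

with the 4-dimensional embedding counted separately.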
4.2 STATIC HYPERNETWORKS FOR RESIDUAL NETWORK ARCHITECTURE AND CIFAR-10
The residual network architectures (He et al., 2016a; Zagoruyko & Komodakis, 2016) are popular for image recognition tasks, as they can accommodate very deep networks while maintaining effective gradient flow across layers using skip connections. The original ResNet and subsequent derivatives (Zhang et al., 2016; Huang et al., 2016a) achieved state-of-the-art image recognition performance on a variety of public datasets. While residual networks can be very deep, in some experiments as deep as 1001 layers (He et al., 2016b), it is important to understand whether some of these layers share common properties and can be reduced effectively by introducing weight-sharing. If we enforce weight-sharing across many layers of a deep feedforward network, the network may share many properties with a recurrent network. In this experiment, we want to explore this idea of enforcing relaxed weight-sharing across all of the layers of a deep residual network. We will take a simple version of a residual network and use a single hypernetwork to generate the weights of all of its layers for the image classification task on the CIFAR-10 dataset.
| group name | output size | block type |
|---|---|---|
| conv1 | 32 × 32 | [3×3, 16] |
| conv2 | 32 × 32 | [3×3, 16×k; 3×3, 16×k] × N |
| conv3 | 16 × 16 | [3×3, 32×k; 3×3, 32×k] × N |
| conv4 | 8 × 8 | [3×3, 64×k; 3×3, 64×k] × N |
| avg-pool | 1 × 1 | [8 × 8] |
Table 1: Structure of Wide Residual Networks in Zagoruyko & Komodakis (2016). N determines the number of residual blocks in each group. Network width is determined by factor k.
Our experiment will use a version of the wide residual network (Zagoruyko & Komodakis, 2016), described in Table 1, a popular and simple variant of the family of residual network architectures, and we will focus on the configurations (N = 6, k = 1) and (N = 6, k = 2), referred to as WRN 40-1 and WRN 40-2 respectively. In this setup, we will use a hypernetwork to generate all of the kernels in conv2, conv3, and conv4, so we will generate 36 layers of kernels in total. The WRN architecture uses a filter size of 3 for every kernel. We use the method outlined in the Methods section to deal with kernels of varying sizes, and use an embedding size of $N_z = 64$ in our experiments. See Appendix A.3.2 for further experimental setup details.

We obtained similar classification accuracy numbers as reported in (Zagoruyko & Komodakis, 2016) with our own implementation. We also note that the weights generated by the hypernetwork are used in a batch normalization setting without modification to the original model. In principle, hypernetworks can also be applied to the newer variants of residual networks with more skip connections, such as DenseNets and ResNets of ResNets.

From the results, we see that enforcing a relaxed weight-sharing constraint on the deep residual network costs us roughly 1.25-1.5% in classification accuracy, while drastically reducing the number of parameters in the model as a trade-off (Table 2).
| Model | Test Error | Param Count |
|---|---|---|
| Network in Network (Lin et al., 2014) | 8.81% | |
| FitNet (Romero et al., 2014) | 8.39% | |
| Deeply Supervised Nets (Lee et al., 2015) | 8.22% | |
| Highway Networks (Srivastava et al., 2015) | 7.12% | |
| ELU (Clevert et al., 2015) | 6.55% | |
| Original ResNet-110 (He et al., 2016a) | 6.43% | 1.7 M |
| Stochastic Depth ResNet-110 (Huang et al., 2016b) | 5.23% | 1.7 M |
| Wide Residual Network 40-1 (Zagoruyko & Komodakis, 2016) | 6.85% | 0.6 M |
| Wide Residual Network 40-2 (Zagoruyko & Komodakis, 2016) | 5.33% | 2.2 M |
| Wide Residual Network 28-10 (Zagoruyko & Komodakis, 2016) | 4.17% | 36.5 M |
| ResNet of ResNet 58-4 (Zhang et al., 2016) | 3.77% | 13.3 M |
| DenseNet (Huang et al., 2016a) | 3.74% | 27.2 M |
| Wide Residual Network 40-1† | 6.73% | 0.563 M |
| Hyper Residual Network 40-1 (ours) | 8.02% | 0.097 M |
| Wide Residual Network 40-2† | 5.66% | 2.236 M |
| Hyper Residual Network 40-2 (ours) | 7.23% | 0.148 M |
Table 2: CIFAR-10 Classification with hypernetwork generated weights.
One reason for this reduction in accuracy is that different layers of a deep network are trained to extract different levels of features and require different kinds of filters to perform optimally. The hypernetwork enforces some commonality between every layer, but offers each layer 64 degrees of freedom to distinguish itself from the other layers. While the network is no longer able to learn the optimal set of filters for each layer, it will learn the best set of filters given the constraints, and the resulting number of model parameters is drastically reduced.
4.3. HYPERLSTM FOR CHARACTER-LEVEL PENN TREEBANK LANGUAGE MODELLING
The HyperLSTM model is evaluated on the character-level prediction task on the Penn Treebank corpus (Marcus et al., 1993), using the train/validation/test split outlined in (Mikolov et al., 2012). As the dataset is quite small and prone to overfitting, we apply dropout on both input and output layers with a keep probability of 0.90. Unlike previous approaches (Graves, 2013; Ognawala & Bayer, 2014) of applying weight noise during training, we instead also apply dropout to the recurrent layer (Henaff et al., 2016) with the same dropout probability.

We compare our model to the basic LSTM cell, stacked LSTM cells (Graves, 2013), and an LSTM with layer normalization applied. In addition, we also experimented with applying layer normalization to HyperLSTM. Using the setup in (Graves, 2013), we use networks with 1000 units and train the network to predict the next character. In this task, the HyperLSTM cell has 128 units and a signal size of 4. As the HyperLSTM cell has more trainable parameters than the basic LSTM cell, we also experimented with an LSTM cell with 1250 units. For more details regarding the experimental setup, please refer to Appendix A.3.3.
It is interesting to note that combining Recurrent Dropout with a basic LSTM cell achieves quite formidable performance. Our implementation of Recurrent Dropout Basic LSTM cell reproduced similar results as (Semeniuta et al., 2016), where they have also experimented with different dropout settings. We also found that Layer Norm LSTM performed quite well when combined with recurrent dropout, making it both a formidable baseline and also an extension for HyperLSTM.
In addition to outperforming both the larger and the deeper versions of the LSTM network, HyperLSTM also achieved performance similar to Layer Norm LSTM. This suggests that by dynamically adjusting the weight scaling vectors, the HyperLSTM cell has learned a policy of scaling inputs to the activation functions that is as efficient as the statistical moments-based strategy employed by Layer Norm, and that the extra computation required is embedded inside the extra 128 units of the HyperLSTM cell. When we combine HyperLSTM with Layer Norm, we see an additional performance gain, implying that the HyperLSTM cell learned an adjustment policy that goes beyond moments-based regularization. We also demonstrate that increasing the size of the embedding vector or stacking HyperLSTM layers together can further increase performance.
| Model¹ | Test | Validation | Param Count |
|---|---|---|---|
| ME n-gram (Mikolov et al., 2012) | 1.37 | | |
| Batch Norm LSTM (Cooijmans et al., 2016) | 1.32 | | |
| Recurrent Dropout LSTM (Semeniuta et al., 2016) | 1.301 | 1.338 | |
| Zoneout RNN (Krueger et al., 2016) | 1.27 | | |
| HM-LSTM³ (Chung et al., 2016) | 1.27 | | |
| LSTM, 1000 units² | 1.312 | 1.347 | 4.25 M |
| LSTM, 1250 units² | 1.306 | 1.340 | 6.57 M |
| 2-Layer LSTM, 1000 units² | 1.281 | 1.312 | 12.26 M |
| Layer Norm LSTM, 1000 units² | 1.267 | 1.300 | 4.26 M |
| HyperLSTM (ours), 1000 units | 1.265 | 1.296 | 4.91 M |
| Layer Norm HyperLSTM, 1000 units (ours) | 1.250 | 1.281 | 4.92 M |
| Layer Norm HyperLSTM, 1000 units, Large Embedding (ours) | 1.233 | 1.263 | 5.06 M |
| 2-Layer Norm HyperLSTM, 1000 units | 1.219 | 1.245 | 14.41 M |
Table 3: Bits-per-character on the Penn Treebank test set.
4.4. HYPERLSTM FOR HUTTER PRIZE WIKIPEDIA LANGUAGE MODELLING
We train our model on the larger and more challenging Hutter Prize Wikipedia dataset, also known as enwik8 (Hutter, 2012), consisting of a sequence of 100M characters composed of 205 unique characters. Unlike Penn Treebank, enwik8 contains some foreign words (Latin, Arabic, Chinese), indented XML, metadata, and internet addresses, making it a more realistic and practical dataset on which to test character language models. For more details regarding the experimental setup, please refer to Appendix A.3.4. Examples of the mixed variety of text samples that our HyperLSTM model can generate are in Appendix A.4.
| Model¹ | enwik8 | Param Count |
|---|---|---|
| Stacked LSTM (Graves, 2013) | 1.67 | 27.0 M |
| MRNN (Sutskever et al., 2011) | 1.60 | |
| GF-RNN (Chung et al., 2015) | 1.58 | 20.0 M |
| Grid-LSTM (Kalchbrenner et al., 2016) | 1.47 | 16.8 M |
| LSTM (Rocki, 2016b) | 1.45 | |
| MI-LSTM (Wu et al., 2016) | 1.44 | |
| Recurrent Highway Networks (Zilly et al., 2016) | 1.42 | 8.0 M |
| Recurrent Memory Array Structures (Rocki, 2016a) | 1.40 | |
| HM-LSTM³ (Chung et al., 2016) | 1.40 | |
| Surprisal Feedback LSTM⁴ (Rocki, 2016b) | 1.37 | |
| LSTM, 1800 units, no recurrent dropout² | 1.470 | 14.81 M |
| LSTM, 2000 units, no recurrent dropout² | 1.461 | 18.06 M |
| Layer Norm LSTM, 1800 units² | 1.402 | 14.82 M |
| HyperLSTM (ours), 1800 units | 1.391 | 18.71 M |
| Layer Norm HyperLSTM, 1800 units (ours) | 1.353 | 18.78 M |
| Layer Norm HyperLSTM, 2048 units (ours) | 1.340 | 26.54 M |
Table 4: Bits-per-character on the enwik8 test set.
We see that HyperLSTM is once again competitive with Layer Norm LSTM, and if we combine both techniques, the Layer Norm HyperLSTM achieves respectable results. The version of HyperLSTM that uses 2048 hidden units achieves near state-of-the-art performance for this task. In addition, HyperLSTM converges quicker per training step compared to LSTM and Layer Norm LSTM. Please refer to Figure 6 for the loss graphs.
¹ We do not compare against methods that use dynamic evaluation.
² Our implementation.
³ Based on results of version 2 at the time of writing. http://arxiv.org/abs/1609.01704v2
⁴ This method uses information about test errors during inference for predicting the next characters, hence it is not directly comparable to other methods that do not use this information.
[Figure 4 sample text, with the per-character weight-change intensity bars omitted:] "In 1955-37 most American and Europeans signed into the sea. An absence of [[Japan (Korea city)|Japan]], the Mayotte like Constantinople (in its first week, in [[880]]) that served as the mother of emperors, as the Corinthians, Bernard on his continued sequel together ordered [[Operation Moabili]]. The Gallup churches in the army promulgated the possessions sitting at the reservation, and [[Melito de la Vegeta Provine|Felix]] had broken Diocletian desperate from the full victory of Augustus, cited by Stephen I. Alexander Senate became Princess Cartara, an annual ruler of war (777-184) and founded numerous extremiti of justice practitioners."

Figure 4: Example text generated from the HyperLSTM model. We visualize how four of the main RNN's weight matrices effectively change over time by plotting the norm of the changes below each generated character. High intensity represents large changes being made to the weights of the main RNN.
When we use this prediction model as a generative model to sample a text passage, we use the main RNN to model a probability distribution over possible characters conditioned on the preceding characters. In the case of the HyperRNN, we allow the model parameters of this generative model to vary over time, so in effect the HyperRNN cell is choosing the best model at any given time to generate a probability distribution to sample from. We can demonstrate this by visualizing how the weight scaling vectors of the main RNN change during the character sampling process. In Figure 4, we examine a sample text passage generated by HyperLSTM after training on enwik8, along with the weight differences below the text. We see that in regions of low intensity, where the weights of the main RNN are relatively static, the types of phrases generated seem more deterministic. For example, the weights do not change much during the words Europeans, possessions and reservation. The regions of high intensity are where the HyperRNN cell is making relatively large changes to the weights of the main RNN. These tend to happen in the areas between words, or sometimes during brackets.

One might also wonder whether the HyperLSTM cell (without Layer Norm), via dynamically tuning the weight scaling vectors, has developed a policy that is similar to the statistics-based approach used by Layer Norm, given that both methods have similar performance. One way to see this effect is to look at the histogram of the hidden states in the network. In Figure 5, we examine the histograms of $\phi(c_t)$, the hidden state of the LSTM before applying the output gate.
[Figure 5 panels: normalized histograms of $\phi(c_t)$ for LSTM, Layer Norm LSTM, HyperLSTM, and Layer Norm HyperLSTM.]
Figure 5: Normalized histogram plots of $\phi(c_t)$ for different models during sampling.
We see that the normalization process employed by Layer Norm reduces the saturation effects compared to the vanilla LSTM. However, in the case of the HyperLSTM, we notice that most of the time the cell is saturated. The HyperLSTM cell's dynamic weight adjustment policy appears to be doing something very different from statistical normalization, although the policy it came up with ended up providing similar performance to Layer Norm. It is interesting to see that when we combine both methods, the HyperLSTM cell will need to determine an adjustment policy in spite of the normalization forced upon it by Layer Norm. An interesting question is whether there are problems where statistical normalization may actually be a setback to the policy developed by the HyperLSTM, and the best strategy is to ignore it.
(Figure 6 plots validation loss versus training step for LSTM, 2-Layer LSTM, Layer Norm LSTM, HyperLSTM, and Layer Norm HyperLSTM, on enwik8 (left) and handwriting generation (right).)
Figure 6: Loss Graph for enwik8 (left). Loss Graph for Handwriting Generation (right)
4.5 HYPERLSTM FOR HANDWRITING SEQUENCE GENERATION
In addition to modelling discrete sequential data, we want to see how the model performs when modelling sequences of real valued data. We will train our model on the IAM online handwriting database (Liwicki & Bunke, 2005) and have our model predict pen strokes as per Section 4.2 of (Graves, 2013). The dataset contains 12,179 handwritten lines from 221 writers, digitally recorded from a tablet. We will model the (x, y) coordinate of the pen location at each recorded time step, along with a binary indicator of pen-up/pen-down. The average sequence length is around 700 steps and the longest around 1900 steps, making the training task particularly challenging as the network needs to retain information about both the stroke history and also the handwriting style in order to predict plausible future handwriting strokes. For experimental setup details, please refer to Appendix A.3.5.
Model                                                           Log-Loss   Param Count
LSTM, 900 units (Graves, 2013)                                  -1,026
3-Layer LSTM, 400 units (Graves, 2013)                          -1,041
3-Layer LSTM, 400 units, adaptive weight noise (Graves, 2013)   -1,058
LSTM, 900 units, no dropout, no data augmentation¹              -1,026     3.36 M
3-Layer LSTM, 400 units, no dropout, no data augmentation¹      -1,039     3.26 M
LSTM, 900 units²                                                -1,055     3.36 M
LSTM, 1000 units²                                               -1,048     4.14 M
3-Layer LSTM, 400 units²                                        -1,068     3.26 M
2-Layer LSTM, 650 units²                                        -1,135     5.16 M
Layer Norm LSTM, 900 units²                                     -1,096     3.37 M
Layer Norm LSTM, 1000 units²                                    -1,106     4.14 M
Layer Norm HyperLSTM, 900 units (ours)                          -1,067     3.95 M
HyperLSTM (ours), 900 units                                     -1,162     3.94 M
Table 5: Log-Loss of IAM Online DB validation set.
In this task, we note that data augmentation and applying recurrent dropout improved the performance of all models, compared to the original setup by (Graves, 2013). In addition, for the LSTM model, increasing unit count per layer may not help the performance compared to increasing the layer depth. We notice that a 3-layer 400 unit LSTM outperforms a 1-layer 900 unit one, and we found that a 2-layer 650 unit LSTM outperforms most configurations. While layer norm helps with the performance, we found that in this task layer norm does not combine well with HyperLSTM, and the 900 unit HyperLSTM without layer norm achieved the best performance.
Unlike the language modelling task, perhaps statistical normalization is far from the optimal approach for a weight adjustment policy here. The policy learned by the HyperLSTM cell not only
¹Our implementation, to replicate setup of (Graves, 2013).
²Our implementation, with data augmentation, dropout and recurrent dropout.
performed well against the baseline; its convergence rate is also as fast as the 2-layer LSTM model. Please refer to Figure 6 for the loss graphs.
In Appendix A.5, we display three sets of handwriting samples generated from LSTM, Layer Norm LSTM, and HyperLSTM, corresponding to log-loss scores of -1055, -1096, and -1162 nats respectively in Table 5. Qualitative assessment of handwriting quality is always subjective, and depends on an individual's taste in calligraphy. From looking at the examples produced by the three models, our opinion is that the samples produced by LSTM are noisier than the other two models. We also find HyperLSTM's samples to be a bit more coherent than the samples produced by Layer Norm LSTM. We leave it to the reader to judge which model produces handwriting samples of higher quality.
Figure 7: Handwriting sample generated from the HyperLSTM model. We visualize how four of the main RNN's weight matrices ($W_h^i$, $W_h^g$, $W_h^f$, $W_h^o$) effectively change over time, by plotting the norm of changes made to them over time.
Similar to the earlier character generation experiment, we show a generated handwriting sample from the HyperLSTM model in Figure 7, along with a plot of how the weight scaling vectors of the main RNN are changing over time below the sample. For a more detailed interactive demonstration of handwriting generation using HyperLSTM, visit http://blog.otoro.net/2016/09/28/hyper-networks/.
We see that the regions of high intensity seem to be concentrated at many discrete instances, rather than slowly varying over time. This implies that the weights experience regime changes rather than gradual slow adjustments. We can see that many of these weight changes occur between the written words, and sometimes between written characters. While the LSTM model alone already does a formidable job of generating time-varying parameters of a Mixture Gaussian distribution used to generate realistic handwriting samples, the ability to go one level deeper, and to dynamically generate the generative model is one of the key advantages of HyperRNN over a normal RNN.
4.6 HYPERLSTM FOR NEURAL MACHINE TRANSLATION
We experiment with the Neural Machine Translation task using the same experimental setup outlined in (Wu et al., 2016). Our model is the same wordpiece model architecture with a vocabulary size of 32k, but we replace the LSTM cells with HyperLSTM cells. We benchmark the modified model on WMT'14 En→Fr using the same test/validation set split described in the GNMT paper (Wu et al., 2016). Please refer to Appendix A.3.6 for experimental setup details.
Model                                                  Test BLEU   Log Perplexity
Deep-Att + PosUnk (Zhou et al., 2016)                  39.2
GNMT WPM-32K, LSTM (Wu et al., 2016)                   38.95       1.027
GNMT WPM-32K, ensemble of 8 LSTMs (Wu et al., 2016)    40.35
GNMT WPM-32K, HyperLSTM (ours)                         40.03       0.993
Table 6: Single model results on WMT En→Fr (newstest2014)
The HyperLSTM cell improves the performance of the existing GNMT model, achieving state-of-the-art single model results for this dataset. In addition, we demonstrate the applicability of hypernetworks to large-scale models used in production systems. Please see Appendix A.6 for actual translation samples generated from both models for a qualitative comparison.
# 5 CONCLUSION
In this paper, we presented a method to use a hypernetwork to generate weights for another neural network. Our hypernetworks are trained end-to-end with backpropagation and are therefore efficient and scalable. We focused on two use cases of hypernetworks: static hypernetworks to generate weights for a convolutional network, and dynamic hypernetworks to generate weights for recurrent networks. We found that the method works well while using fewer parameters. On image recognition, language modelling and handwriting generation, hypernetworks are competitive with or sometimes better than state-of-the-art models.
ACKNOWLEDGMENTS
We thank Jeff Dean, Geoffrey Hinton, Mike Schuster and the Google Brain team for their help with the project.
# REFERENCES
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.

M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.

Jimmy L. Ba, Jamie R. Kiros, and Geoffrey E. Hinton. Layer normalization. NIPS, 2016.

Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. In NIPS, 2016.

Christopher M. Bishop. Mixture density networks. Technical report, 1994.

Junyoung Chung, Caglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural networks. arXiv preprint arXiv:1502.02367, 2015.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Tim Cooijmans, Nicolas Ballas, Cesar Laurent, and Caglar Gulcehre. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.

Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. In NIPS, 2016.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In NIPS, 2013.

Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, Frederic Besse, David Pfau, Max Jaderberg, Marc Lanctot, and Daan Wierstra. Convolution by evolution: Differentiable pattern producing networks. In GECCO, 2016.

Faustino Gomez and Jürgen Schmidhuber. Evolving modular fast-weight networks for control. In ICANN, 2005.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.

Mikael Henaff, Arthur Szlam, and Yann LeCun. Orthogonal RNNs and long-memory tasks. In ICML, 2016.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016b.

Marcus Hutter. The human knowledge compression contest. 2012. URL http://prize.hutter1.net/.

Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. arXiv preprint arXiv:1608.05343, 2016.

Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. In ICLR, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Jan Koutnik, Faustino Gomez, and Jürgen Schmidhuber. Evolving neural networks in compressed weight space. In GECCO, 2010.

David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Handwritten digit recognition with a back-propagation network. In NIPS, 1990.

Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, volume 2, pp. 6, 2015.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014.

Marcus Liwicki and Horst Bunke. IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard. In ICDAR, 2005.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocky. Subword language modeling with neural networks. Preprint, 2012.

Marcin Moczulski, Misha Denil, Jeremy Appleyard, and Nando de Freitas. ACDC: A structured efficient linear layer. arXiv preprint arXiv:1511.05946, 2015.

Saahil Ognawala and Justin Bayer. Regularizing recurrent networks - on injected noise and norm-based methods. arXiv preprint arXiv:1410.5684, 2014.

Kamil Rocki. Recurrent memory array structures. arXiv preprint arXiv:1607.03085, 2016a.
Kamil Rocki. Surprisal-driven feedback in recurrent networks. arXiv preprint arXiv:1608.06027, 2016b.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139, 1992.

Jürgen Schmidhuber. A 'self-referential' weight matrix. In ICANN, 1993.

Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.

Rupesh Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS, 2015.

Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185-212, 2009.

Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks. In ICML, 2011.

Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. ArXiv e-prints, 2016.

Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. NIPS, 2016.

Jianlin Xia, Shivkumar Chandrasekaran, Ming Gu, and Xiaoye S. Li. Fast algorithms for hierarchically semiseparable matrices. Numerical Linear Algebra with Applications, 2010.

Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang. Deep fried convnets. In ICCV, 2015.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.

Ke Zhang, Miao Sun, Tony X. Han, Xingfang Yuan, Liru Guo, and Tao Liu. Residual networks of residual networks: Multilevel residual networks. arXiv preprint arXiv:1608.02908, 2016.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016. URL http://arxiv.org/abs/1606.04199.

Julian Zilly, Rupesh Srivastava, Jan Koutnik, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
A APPENDIX

A.1 HYPERNETWORKS TO LEARN FILTERS FOR A FULLY CONNECTED NETWORK

Figure 8: Filters learned to classify MNIST digits in a fully connected network (left). Filters learned by a hypernetwork (right).

We ran an experiment where the hypernetwork receives the x, y locations of both the input pixel and the weight, and predicts the value of the hidden weight matrix in a fully connected network that learns to classify MNIST digits. In this experiment, the fully connected network (784-256-10) has one hidden layer of 16 x 16 units, where the hypernetwork is a pre-defined small feedforward network. The weights of the hidden layer have 784 x 256 = 200,704 parameters, while the hypernetwork is an 801-parameter four-layer feedforward relu network that generates the 784 x 256 weight matrix. The result of this experiment is shown in Figure 8. We want to emphasize that even though the network can learn convolutional-like filters during end-to-end training, its performance is rather poor: the best accuracy is 93.5%, compared to 98.5% for the conventional fully connected network. We find that the virtual coordinates-based approach to hypernetworks that is used by HyperNEAT and DPPN has its limitations in many practical tasks, such as image recognition and language modelling, and we therefore developed our embedding vector approach in this work.
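The following is a minimal sketch of the coordinate-based hypernetwork described above: a small relu network maps the (x, y) coordinates of an input pixel and a hidden unit to one entry of the 784 x 256 weight matrix. The layer sizes here are illustrative and do not reproduce the exact 801-parameter configuration used in the experiment.

```python
# Hedged sketch of a coordinate-based hypernetwork for the MNIST hidden
# layer. A tiny feedforward net predicts each weight value from the
# normalized coordinates of the input pixel and the hidden unit.
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 16, 16, 16, 1]          # 4 coordinate inputs -> 1 weight value
params = [(0.5 * rng.standard_normal((a, b)), np.zeros(b))
          for a, b in zip(sizes[:-1], sizes[1:])]

def hyper_weight(coords):
    h = coords
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # relu, as in the experiment
    return h

# Generate the full 784 x 256 matrix from normalized coordinates.
px, py = np.meshgrid(np.linspace(0, 1, 28), np.linspace(0, 1, 28))
pix = np.stack([px.ravel(), py.ravel()], axis=1)             # (784, 2)
hx, hy = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
hid = np.stack([hx.ravel(), hy.ravel()], axis=1)             # (256, 2)
coords = np.concatenate(
    [np.repeat(pix, 256, axis=0), np.tile(hid, (784, 1))], axis=1)
W_hidden = hyper_weight(coords).reshape(784, 256)
print(W_hidden.shape)
```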
A.2 CONCEPTUAL DIAGRAMS OF STATIC AND DYNAMIC HYPERNETWORKS
Figure 9: Feedforward Network (top) and Recurrent Network (bottom)
Figure 10: Static Hypernetwork generating weights for Feedforward Network
Figure 11: Dynamic Hypernetwork generating weights for Recurrent Network
A.2.1 FILTER VISUALIZATIONS FOR RESIDUAL NETWORKS

In Figures 12 and 13 are example visualizations for various kernels in a deep residual network. Note that the 32x32x3x3 kernel generated by the hypernetwork was constructed by concatenating 4 basic kernels together.

Figure 13: Generated 16x16x3x3 kernel (left). Generated 32x32x3x3 kernel (right).
# A.2.2 HYPERLSTM
In this section we will discuss the extension of HyperRNN to LSTM. Our focus will be on the basic version of the LSTM architecture of Hochreiter & Schmidhuber (1997), given by:
$$
\begin{aligned}
i_t &= W_h^i h_{t-1} + W_x^i x_t + b^i \\
g_t &= W_h^g h_{t-1} + W_x^g x_t + b^g \\
f_t &= W_h^f h_{t-1} + W_x^f x_t + b^f \\
o_t &= W_h^o h_{t-1} + W_x^o x_t + b^o \\
c_t &= \sigma(f_t) \odot c_{t-1} + \sigma(i_t) \odot \phi(g_t) \\
h_t &= \sigma(o_t) \odot \phi(c_t)
\end{aligned}
\tag{9}
$$
where $W_h^y \in \mathbb{R}^{N_h \times N_h}$, $W_x^y \in \mathbb{R}^{N_h \times N_x}$, $b^y \in \mathbb{R}^{N_h}$, $\sigma$ is the sigmoid operator, and $\phi$ is the tanh operator. For brevity, $y$ is one of $\{i, g, f, o\}$.¹
Similar to the previous section, we will make the weights and biases a function of an embedding, and the embedding for each {i, g, f, o} will be generated from a smaller HyperLSTM cell. As discussed earlier, we will also experiment with adding the option to use a Layer Normalization layer in the HyperLSTM. The HyperLSTM Cell is given by:
w=("0) ip = LN(Wihy-1 + Wie + 6) Ge = LN(Wohy1 + War + 64) = LN(Wliu_a + Wie, +8) (10) ( 6: = LN(Woin_1 + W288, + 6°) h & 1) © G1 + a(t) © (Gt) 64) © 6(LN(4)) ht =o
The weight matrices for each of the four $\{i, g, f, o\}$ gates will be a function of a set of embeddings $z_x$, $z_h$, and $z_b$ unique to each gate, just like the HyperRNN. These embeddings are linear projections of the hidden states of the HyperLSTM Cell. For brevity, $y$ is one of $\{i, g, f, o\}$ to avoid writing four sets of identical equations:
$$
\begin{aligned}
z_h^y &= W_{\hat{h}h}^y \hat{h}_{t-1} + b_{\hat{h}h}^y \\
z_x^y &= W_{\hat{h}x}^y \hat{h}_{t-1} + b_{\hat{h}x}^y \\
z_b^y &= W_{\hat{h}b}^y \hat{h}_{t-1}
\end{aligned}
\tag{11}
$$
As in the memory efficient version of the HyperRNN, we will focus on the efficient version of the HyperLSTM, where we use weight scaling vectors d to modify the rows of the weight matrices:
$$
\begin{aligned}
y_t &= \mathrm{LN}(d_h^y \odot W_h^y h_{t-1} + d_x^y \odot W_x^y x_t + b^y(z_b^y)), \quad \text{where} \\
d_h^y(z_h) &= W_{hz}^y z_h^y \\
d_x^y(z_x) &= W_{xz}^y z_x^y \\
b^y(z_b) &= W_{bz}^y z_b^y + b_0^y
\end{aligned}
\tag{12}
$$
In our implementation, the cell and hidden state update equations for the main LSTM will incorporate a single dropout (Hinton et al., 2012) gate, as developed in Recurrent Dropout without Memory Loss (Semeniuta et al., 2016), as we found this to help regularize the entire model during training:
$$
\begin{aligned}
c_t &= \sigma(f_t) \odot c_{t-1} + \sigma(i_t) \odot \mathrm{DropOut}(\phi(g_t)) \\
h_t &= \sigma(o_t) \odot \phi(\mathrm{LN}(c_t))
\end{aligned}
\tag{13}
$$
¹In practice, all eight weight matrices are concatenated into one large matrix for computational efficiency.
This dropout operation is generally only applied inside the main LSTM, not in the smaller HyperLSTM cell. For larger size systems we can apply dropout to both networks.
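To summarize Equations 10 to 13 operationally, here is a simplified NumPy sketch of how one gate's pre-activation is computed with dynamically generated row-scaling vectors. Layer norm, dropout, and the HyperLSTM cell itself are omitted; shapes and initial values are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of Equation 12 for a single gate: embeddings z (from the small
# HyperLSTM cell) are projected to scaling vectors d that modify the rows
# of the main LSTM's weight matrices.
import numpy as np

N_h, N_x, N_z = 8, 4, 3
rng = np.random.default_rng(0)
W_h = rng.standard_normal((N_h, N_h))     # main LSTM recurrent weights
W_x = rng.standard_normal((N_h, N_x))     # main LSTM input weights
W_hz = np.full((N_h, N_z), 0.1 / N_z)     # z_h -> d_h projection
W_xz = np.full((N_h, N_z), 0.1 / N_z)     # z_x -> d_x projection
W_bz = np.zeros((N_h, N_z))
b0 = np.zeros(N_h)

def gate_preactivation(h_prev, x_t, z_h, z_x, z_b):
    d_h = W_hz @ z_h                      # row-scaling for W_h
    d_x = W_xz @ z_x                      # row-scaling for W_x
    b = W_bz @ z_b + b0                   # dynamically generated bias
    return d_h * (W_h @ h_prev) + d_x * (W_x @ x_t) + b

h_prev, x_t = rng.standard_normal(N_h), rng.standard_normal(N_x)
z = rng.standard_normal(N_z)              # stand-in for HyperLSTM output
print(gate_preactivation(h_prev, x_t, z, z, z))
```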
A.2.3 IMPLEMENTATION DETAILS AND WEIGHT INITIALIZATION FOR HYPERLSTM
This section may be useful to readers who may want to implement their own version of the HyperLSTM Cell, as we will discuss initialization of the parameters for Equations 10 to 13. We recommend implementing the HyperLSTM within the same interface as a normal recurrent network cell so that using the HyperLSTM will not be any different than using a normal RNN. These initialization parameters have been found to work well with our experiments, but they may be far from optimal depending on the task at hand. A reference implementation developed using the TensorFlow (Abadi et al., 2016) framework can be found at http://blog.otoro.net/2016/09/28/hyper-networks/.
The HyperLSTM Cell will be located inside the HyperLSTM, as described in Equation 10. It is a normal LSTM cell with Layer Normalization. The inputs to the HyperLSTM Cell will be the concatenation of the input signal and the hidden units of the main LSTM cell. The biases in Equation 10 are initialized to zero and Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.
The embedding vectors are produced by the HyperLSTM Cell at each timestep by linear projection described in Equation 11. The weights for the first two equations are initialized to be zero, and the biases are initialized to one. The weights for the third equation are initialized to be a small normal random variable with standard deviation of 0.01.
The weight scaling vectors that modify the weight matrices are generated from these embedding vectors, as per Equation 12. Orthogonal initialization is applied to $W_h$ and $W_x$, while $b_0$ is initialized to zero. $W_{bz}$ is also initialized to zero. For the weight scaling vectors, we used a method described in Recurrent Batch Normalization (Cooijmans et al., 2016) where the scaling vectors are initialized to 0.1 rather than 1.0, which has been shown to help gradient flow. Therefore, for the weight matrices $W_{hz}$ and $W_{xz}$, we initialize to a constant value of $0.1/N_z$ to maintain this property.
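A hedged sketch of this initialization recipe follows; the helper and variable names are ours, while the orthogonal initialization, zero biases, and 0.1/N_z constants are as described above (the Equation 11 embedding-projection init is omitted here).

```python
# Sketch of the initialization described in A.2.3 for the Equation 12
# projection matrices of one gate.
import numpy as np

def orthogonal(shape, rng):
    """Orthogonal initializer via QR decomposition (a common recipe)."""
    n = max(shape)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q[:shape[0], :shape[1]]

def init_hyperlstm_projections(N_h, N_x, N_z, rng):
    return {
        "W_h": orthogonal((N_h, N_h), rng),    # main weights: orthogonal
        "W_x": orthogonal((N_h, N_x), rng),
        "b0": np.zeros(N_h),                   # biases start at zero
        "W_bz": np.zeros((N_h, N_z)),          # bias projection at zero
        # scaling projections at 0.1 / N_z so the d vectors start near 0.1
        "W_hz": np.full((N_h, N_z), 0.1 / N_z),
        "W_xz": np.full((N_h, N_z), 0.1 / N_z),
    }

params = init_hyperlstm_projections(8, 4, 4, np.random.default_rng(0))
print(params["W_hz"][0, 0])   # 0.025 when N_z = 4
```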
The only place we use dropout is in the single location in Equation 13, developed in Recurrent Dropout without Memory Loss (Semeniuta et al., 2016). We can use this dropout gate like any other normal dropout gate in a feed-forward network.
A.3 EXPERIMENT SETUP DETAILS AND HYPER PARAMETERS
A.3.1 USING STATIC HYPERNETWORKS TO GENERATE FILTERS FOR CONVOLUTIONAL NETWORKS AND MNIST
We train the network with a 55000 / 5000 / 10000 split for the training, validation and test sets and use the 5000 validation samples for early stopping, and train the network using Adam (Kingma & Ba, 2015) with a learning rate of 0.001 on mini-batches of size 1000. To decrease overfitting, we pad MNIST training images to 30x30 pixels and random crop to 28x28.¹
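A minimal sketch of this pad-and-crop augmentation, assuming standard NumPy arrays for the digit images:

```python
# Pad each 28x28 digit to 30x30 with zeros, then take a random 28x28 crop.
import numpy as np

def pad_and_random_crop(img, pad=1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    padded = np.pad(img, pad)                        # 28x28 -> 30x30
    ox, oy = rng.integers(0, 2 * pad + 1, size=2)    # random crop offset
    return padded[ox:ox + img.shape[0], oy:oy + img.shape[1]]

img = np.zeros((28, 28))
print(pad_and_random_crop(img).shape)   # (28, 28)
```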
Model            Test Error   Params of 2nd Kernel
Normal Convnet   0.72%        12,544
Hyper Convnet    0.76%        4,244
Table 7: MNIST Classification with hypernetwork generated weights.
A.3.2 STATIC HYPERNETWORKS FOR RESIDUAL NETWORK ARCHITECTURE AND CIFAR-10
We train both the normal residual network and the hypernetwork version using a 45000 / 5000 / 10000 split for training, validation, and test set. The 5000 validation samples are randomly chosen and isolated from the original 50000 training samples. We train the entire setup with a mini-batch
¹An IPython notebook demonstrating the MNIST Hypernetwork experiment is available at http://blog.otoro.net/2016/09/28/hyper-networks/.
size of 128 using Nesterov Momentum SGD for the normal version and Adam for the hypernetwork version, both with a learning rate schedule. We apply L2 regularization of 0.0005 on both the kernel weights and the hypernetwork-generated kernel weights. To decrease overfitting, we apply light data augmentation: pad training images to 36x36 pixels and random crop to 32x32, and perform random horizontal flips.
Table 8: Learning Rate Schedule for Nesterov Momentum SGD
<step learning rate 28,000 0.10000 56,000 0.02000 84,000 0.00400 112,000 0.00080 140,000 0.00016
Table 9: Learning Rate Schedule for Hyper Network / Adam
<step learning rate 168,000 0.00200 336,000 0.00100 504,000 0.00020 672,000 0.00005
A.3.3 CHARACTER-LEVEL PENN TREEBANK
The hyper-parameters of all the experiments were selected through non-extensive grid search on the validation set. Whenever possible, we used reported learning rates and batch sizes in the literature that had been used for similar experiments performed in the past.
For Character-level Penn Treebank, we use mini-batches of size 128, to train on sequences of length 100. We trained the model using Adam (Kingma & Ba, 2015) with a learning rate of 0.001 and gradient clipping of 1.0. During evaluation, we generate the entire sequence, and do not use information about previous test errors for prediction, e.g., dynamic evaluation (Graves, 2013; Rocki, 2016b). As mentioned earlier, we apply dropout to the input and output layers, and also apply recurrent dropout with a keep probability of 90%. For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.
We also experimented with a version of the model using a larger embedding size of 16, and also with a lower dropout keep probability of 85%, and reported results with this "Large Embedding" model in Table 3. Lastly, we stacked two layers of this "Large Embedding" model together to measure the benefits of a multi-layer version of HyperLSTM, with a dropout keep probability of 80%.
# A.3.4 HUTTER PRIZE WIKIPEDIA
As enwik8 is a bigger dataset compared to Penn Treebank, we will use 1800 units for our networks. In addition, we perform training on sequences of length 250. Our normal HyperLSTM Cell consists of 256 units, and we use an embedding size of 64.
Our setup is similar to the previous experiment, using the same mini-batch size, learning rate, weight initialization, gradient clipping parameters and optimizer. We do not use dropout for the input and output layers, but still apply recurrent dropout with a keep probability of 90%. For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.
As in (Chung et al., 2015), we train on the first 90M characters of the dataset, use the next 5M as a validation set for early stopping, and the last 5M characters as the test set.
In this experiment, we also experimented with a slightly larger version of HyperLSTM with 2048 hidden units. This version of the model uses 2048 hidden units for the main network, in line with similar models for this experiment in other works. In addition, its HyperLSTM Cell consists of 512
units with an embedding size of 64. Given the larger number of nodes in both the main LSTM and HyperLSTM cell, recurrent dropout is also applied to the HyperLSTM Cell of this model, where we use a lower dropout keep probability of 85%, and train on an increased sequence length of 300.
# A.3.5 HANDWRITING SEQUENCE GENERATION
We will use the same model architecture described in (Graves, 2013) and use a Mixture Density Network layer (Bishop, 1994) to generate a mixture of bi-variate Gaussian distributions at each time step to model the pen location. We normalize the data and use the same train/validation split as per (Graves, 2013) in this experiment. We remove samples less than length 300 as we found these samples contain a lot of recording errors and noise. After the pre-processing, as the dataset is small, we introduce data augmentation: a random scaling factor chosen uniformly from +/- 10% is applied to the samples used for training.
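For reference, the sketch below shows the per-step negative log-likelihood (in nats) of a pen offset under a mixture of bivariate Gaussians, following the formulation of (Graves, 2013); the parameter arrays stand in for network outputs and are purely illustrative.

```python
# Per-step MDN loss: negative log-likelihood of one (x, y) pen offset
# under a K-component bivariate Gaussian mixture with correlation rho.
import numpy as np

def mdn_nll(x, y, pi, mu1, mu2, s1, s2, rho):
    """All parameter arrays have shape (K,); pi sums to one."""
    z = ((x - mu1) / s1) ** 2 + ((y - mu2) / s2) ** 2 \
        - 2.0 * rho * (x - mu1) * (y - mu2) / (s1 * s2)
    norm = 2.0 * np.pi * s1 * s2 * np.sqrt(1.0 - rho ** 2)
    density = np.exp(-z / (2.0 * (1.0 - rho ** 2))) / norm
    return -np.log(np.sum(pi * density) + 1e-12)

K = 20
pi = np.full(K, 1.0 / K)
nll = mdn_nll(0.1, -0.2, pi, np.zeros(K), np.zeros(K),
              np.ones(K), np.ones(K), np.zeros(K))
print(nll)   # nats for this single step
```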
One concern we want to address is the lack of a test set in the data split methodology devised in (Graves, 2013). In this task, qualitative assessment of generated handwriting samples is arguably just as important as the quantitative log-likelihood score of the results. Due to the small size of the dataset, we want to use as large a portion of the dataset as possible to train our models, in order to generate better quality handwriting samples that we can also judge qualitatively in addition to examining the log-loss numbers. For this task we therefore use the same training / validation split as (Graves, 2013), with the caveat that we may be somewhat overfitting to the validation set in the quantitative results. In future work, we will explore using larger datasets to conduct a more rigorous quantitative analysis.
For model training, we will apply recurrent dropout and also dropout to the output layer with a keep probability of 0.95. The model is trained on mini-batches of size 32 containing sequences of variable length. We trained the model using Adam (Kingma & Ba, 2015) with a learning rate of 0.0001 and gradient clipping of 5.0. Our HyperLSTM Cell consists of 128 units and a signal size of 4. For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.
# A.3.6 NEURAL MACHINE TRANSLATION
Our experimental procedure follows the procedure outlined in Sections 8.1 to 8.4 of the GNMT paper (Wu et al., 2016). We only performed experiments with a single model and did not conduct experiments with Reinforcement Learning or Model Ensembles as described in Sections 8.5 and 8.6 of the GNMT paper.
The GNMT paper outlines several methods for the training procedure, and investigated several approaches including combining Adam and SGD optimization methods, in addition to weight quantization schemes. In our experiment, we used only the Adam (Kingma & Ba, 2015) optimizer with the same hyperparameters described in the GNMT paper. We did not employ any quantization schemes.
We replaced LSTM cells in the GNMT WPM-32K architecture, with LayerNorm HyperLSTM cells with the same number of hidden units. In this experiment, our HyperLSTM Cell consists of 128 units with an embedding size of 32.
A.4 EXAMPLES OF GENERATED WIKIPEDIA TEXT
The eastern half of Russia varies from Modern to Central Europe. Due to similar lighting and the extent of the combination of long tributaries to the [[Gulf of Boston]], it is more of a private warehouse than the [[Austro-Hungarian Orthodox Christian and Soviet Union]].
==Demographic data base==
controversial ''Austrian Spelling'']]
[[Image:Auschwitz map.png|frame|The [[Image:Czech Middle East SSR chief state 103.JPG|thumb|Serbian Russia movement]] [[1593]]&ndash;[[1719]], and set up a law of [[ parliamentary sovereignty]] and unity in Eastern churches.
In medieval Roman Catholicism Tuba and Spanish controlled it until the reign of Burgundian kings and resulted in many changes in multiculturalism, though the [[Crusades]], usually started following the [[Treaty of Portugal]], shored the title of three major powers only a strong part.
[[French Marines]] (prompting a huge change in [[President of the Council of the Empire]], only after about [[1793]], the Protestant church, fled to the perspective of his heroic declaration of government and, in the next fifty years, [[Christianity|Christian]] and [[Jutland]]. Books combined into a well-published work by a single R. (Sch. M. ellipse poem) tradition in St Peter also included 7:1, he dwell upon the apostle, scripture and the latter of Luke; totally unknown, a distinct class of religious congregations that describes in number of [[remor]]an traditions such as the [[Germanic tribes]] (Fridericus or Lichteusen and the Wales). Be introduced back to the [[14th century]], as related in the [[New Testament]] and in its elegant [[ Anglo-Saxon Chronicle]], although they branch off the characteristic traditions which Saint [[Philip of Macedon]] asserted.
Ae also in his native countries.
In [[1692]], Seymour was barged at poverty of young English children, which cost almost the preparation of the marriage to him.
Burkeâs work was a good step for his writing, which was stopped by clergy in the Pacific, where he had both refused and received a position of successor to the throne. Like the other councillors in his will, the elder Reinhold was not in the Duke, and he was virtually non-father of Edward I, in order to recognize [[Henry II of England|Queen Enrie
]] of Parliament.
The Melchizedek Minister Qut]] signed the [[Soviet Union]], and forced Hoover to provide [[Hoover (disambiguation) |hoover]]s in [[1844]], [[1841]].
His work on social linguistic relations is divided to the several times of polity for educatinnisley is 760 Li Italians. After Zaitiâs death , and he was captured August 3, he witnessed a choice better by public, character, repetitious, punt, and future.
Figure 14: enwik8 sample generated from 2048-unit Layer Norm HyperLSTM
== Quatitis==
:/âMain article: [[sexagesimal]]ââ
Sexual intimacy was traditionally performed by a male race of the [[ mitochondria]] of living things. The next geneme is used by ââ Clitoronââ into short forms of [[sexual reproduction]]. When a maternal suffeach-Lashe]] to the myriad of a "masterâs character ". He recognizes the associated reflection of [[force call carriers]], the [[Battle of Pois except fragile house and by historians who have at first incorporated his father.
==Geography==
The island and county top of Guernsey consistently has about a third of its land, centred on the coast subtained by mountain peels with mountains, squares, and lakes that cease to be links with the size and depth of sea level and weave in so close to lowlands. Strategically to the border of the country also at the southeast corner of the province of Denmark do not apply, but sometimes west of dense climates of coastal Austria and west Canada, the Flemish area of the continent actually inhabits [[tropical geographical transition ]] and transitions from [[soil]] to [[snow]] residents.]]
==Definition==
The symbols are ââquotationalââ and âââdistinctâââ or advanced. {{ref| no_1}} Older readings are used for [[phrase]]s, especially, [[ancient Greek]], and [[Latin]] in their development process. Several varieties of permanent systems typically refer to [[primordial pleasure]] (for example, [[Pleistocene]], [[Classical antenni|Ctrum ]]), but its claim is that it holds the size of the coci, but is historically important both for import: brewing and commercial use.
majority of cuisine specifically refers to this period, where the southern countries developed in the 19th century. Scotland had a cultural identity of or now a key church who worked between the 8th and 60th through 6 (so that there are small single authors of detailed recommendations for them and at first) rather than
A, [[Adoptionism|adoptionists]] often started inscribed with appearing the words
distinct from two types. On the group definition the adjective fightingââ is until Crown Violence Association]], in which the higher education [[motto]] (despite the resulting attack on [[medical treatment]]) peaked on [[15 December]], [[2005]]. At 30 percent, up to 50% of the electric music from the period was created by Voltaire, but Newton promoted the history of his life.
Publications in the Greek movie ââ[[The Great Theory of Bertrand Russell J]ââ, also kept an important part into the inclusion of ââ[[The Beast for the Passage of Study]]ââ, began in [[1869]], opposite the existence of racial matters. Many of Maryâs religious faiths ( including the [[Mary Sue Literature]] in the United States) incorporated much of Christianity within Hispanic [[Sacred text]]s.
But controversial belief must be traced back to the 1950s stated that their anticolonial forces required the challenge of even lingering wars tossing nomon before leaves the bomb in paint on the South Island, known as [[Quay]], facing [[Britain]], though he still holds to his ancestors a strong ancestor of Orthodoxy. Others explain that the process of reverence occurred from [[Common Hermitage]], when the [[Crusade|Speakers]] laid his lifespan in [[Islam]] into the north of Israel. At the end of the [[14th century BCE]], the citadel of [[ Israel]] set Eisenace itself in the [[Abyssinia]]n islands, which was Faroeâs Dominican Republic claimed by the King.
Figure 15: enwik8 sample generated from 2048-unit Layer Norm HyperLSTM
A.5 EXAMPLES OF RANDOMLY CHOSEN GENERATED HANDWRITING SAMPLES
Figure 16: Handwriting samples generated from LSTM
Figure 17: Handwriting samples generated from Layer Norm LSTM
Figure 18: Handwriting samples generated from HyperLSTM
A.6 EXAMPLES OF RANDOMLY CHOSEN MACHINE TRANSLATION SAMPLES
We randomly selected translation samples generated from both the LSTM baseline and HyperLSTM models from the WMT'14 En→Fr test set. Given an English phrase, we can compare the correct French translation, the LSTM translation, and the HyperLSTM translation.
English Input

I was expecting to see gnashing of teeth and a fight breaking out at the gate

French (Ground Truth)

Je m'attendais à voir des grincements de dents et une bagarre éclater à la porte

LSTM Translation

Je m'attendais à voir des larmes de dents et un combat à la porte

HyperLSTM Translation

Je m'attendais à voir des dents grincer des dents et une bataille éclater à la porte
English Input
Prosecuting , Anne Whyte said : " If anyone should know not to the break the law , it is a criminal solicitor . "
French (Ground Truth)
Le procureur Anne Whyte a déclaré : « Si quelqu'un doit savoir qu'il ne faut pas violer la loi, c'est un avocat pénaliste. »
LSTM Translation
Prosecuting, Anne Whyte a dit : « Si quelqu'un doit savoir qu'il ne faut pas enfreindre la loi, c'est un solicitor criminel
HyperLSTM Translation
En poursuivant, Anne Whyte a dit : « Si quelqu'un doit savoir ne pas enfreindre la loi, c'est un avocat criminel
# English Input
According to her , the CSRS was invited to a mediation and she asked for an additional period for consideration
French (Ground Truth)
Selon elle, la CSRS a été invitée à une médiation et elle a demandé un délai supplémentaire pour y réfléchir
LSTM Translation
Selon elle, le SCRS a été invité à une médiation et elle a demandé un délai supplémentaire
HyperLSTM Translation
Selon elle, le SCRS a été invité à une médiation et elle a demandé une période de réflexion supplémentaire
# English Input
Relations between the US and Germany have come under strain following claims that the NSA bugged Chancellor Angela Merkel's phone
French (Ground Truth)
Les relations entre les États-Unis et l'Allemagne ont été mises à rude épreuve à la suite de plaintes selon lesquelles la NSA avait mis sur écoute le téléphone portable de la chancelière allemande Angela Merkel
# LSTM Translation
Les relations entre les États-Unis et l'Allemagne ont été mises à rude épreuve suite aux affirmations selon lesquelles la NSA aurait pris le téléphone de Merkel de la chancelière Angela
HyperLSTM Translation
Les relations entre les États-Unis et l'Allemagne ont été mises à rude épreuve après que la NSA a attaqué le téléphone de la chancelière Angela Angela
# English Input
Germany's BfV advises executives to consider using simple prepaid mobiles when on foreign trips because of the risk that smart phones are compromised
French (Ground Truth)
Le BfV d'Allemagne conseille à ses dirigeants d'envisager d'utiliser de simples téléphones portables prépayés lors de leurs voyages à l'étranger en raison du risque d'atteinte à l'intégrité des smartphones
LSTM Translation
Le BfV allemand conseille aux dirigeants d'envisager l'utilisation de mobiles prépayés simples lors de voyages à l'étranger en raison du risque de compromission des téléphones intelligents
HyperLSTM Translation
Le BfV allemand conseille aux dirigeants d'envisager l'utilisation de téléphones mobiles prépayés simples lors de voyages à l'étranger en raison du risque que les téléphones intelligents soient compromis
English Input

I was on the mid-evening news that same evening, and on TV the following day as well

French (Ground Truth)

Le soir-même, je suis au 20h, le lendemain aussi je suis à la télé

LSTM Translation

J'étais au milieu de l'actualité le soir même, et à la télévision le lendemain également

HyperLSTM Translation

J'étais au milieu de la soirée ce soir-là et à la télévision le lendemain
| {
"id": "1603.09025"
} |
1609.08675 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | 6 1 0 2
p e S 7 2 ] V C . s c [ 1 v 5 7 6 8 0 . 9 0 6 1 : v i X r a
# YouTube-8M: A Large-Scale Video Classiï¬cation Benchmark
# Sami Abu-El-Haija haija@google.com
# Nisarg Kothari ndk@google.com
# Joonseok Lee joonseok@google.com
# Paul Natsev natsev@google.com
# George Toderici gtoderici@google.com
# Balakrishnan Varadarajan balakrishnanv@google.com
# Sudheendra Vijayanarasimhan svnaras@google.com
# Google Research
ABSTRACT

Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video) annotated with a vocabulary of 4800 visual entities. To get the videos and their (multiple) labels, we used a YouTube video annotation system, which labels videos with the main topics in them. While the labels are machine-generated, they have high precision and are derived from a variety of human-based signals including metadata and query click signals, so they represent an excellent target for content-based annotation approaches. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer for each frame. We compress the frame-level features and make them available on our website for download. The dataset contains frame-level features for over 1.9 billion video frames and 8 million videos, making it the largest public multi-label video dataset.
Figure 1: YouTube-8M is a large-scale benchmark for general multi-label video classification. This screenshot of a dataset explorer depicts a subset of videos in the dataset annotated with the entity "Guitar". The dataset explorer allows browsing and searching of the full vocabulary of Knowledge Graph entities, grouped in 24 top-level verticals, along with corresponding videos.
We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using the publicly-available TensorFlow framework. We plan to release code for training a basic TensorFlow model and for computing metrics.
like Sports-1M and ActivityNet. We achieve state-of-the-art on ActivityNet, improving mAP from 53.8% to 77.6%. We hope that the unprecedented scale and diversity of YouTube-8M will lead to advances in video understanding and representation learning.
# 1. INTRODUCTION

Large-scale datasets such as ImageNet [6] have been key enablers of recent progress in image understanding [20, 14, 11]. By supporting the learning process of deep networks with millions of parameters, such datasets have played a crucial role for the rapid progress of image understanding to near-human level accuracy [30]. Furthermore, intermediate layer activations of such networks have proven to be powerful and interpretable for various tasks beyond classification [41, 9, 31]. In a similar vein, the amount and size of video benchmarks is growing with the availability of Sports-1M [19] for sports videos and ActivityNet [12] for human activities. However, unlike ImageNet, which contains a diverse and general set of objects/entities, existing video benchmarks are restricted to action and sports classes.

In this paper, we introduce YouTube-8M¹, a large-scale benchmark dataset for general multi-label video classification. We treat the task of video classification as that of producing labels that are relevant to a video given its frames. Therefore, unlike Sports-1M and ActivityNet, YouTube-8M is not restricted to action classes alone. For example, Figure 1 shows random video examples for the Guitar entity.
We first construct a visual annotation vocabulary from Knowledge Graph entities that appear as topic annotations for YouTube videos based on the YouTube video annotation system [2]. To ensure that our vocabulary consists of entities that are recognizable visually, we use various filtering criteria, including human raters. The entities in the dataset span activities (sports, games, hobbies), objects (autos, food, products), scenes (travel), and events. The
¹http://research.google.com/youtube8m
Figure 2: The progression of datasets for image and video understanding tasks. Large datasets have played a key role for advances in both areas.
entities were selected using a combination of their popularity on YouTube and manual ratings of their visualness according to human raters. They are an attempt to describe the central themes of videos using a few succinct labels.
We then collect a sample set of videos for each entity, and use a publicly available state-of-the-art Inception network [4] to extract features from them. Specifically, we decode videos at one frame-per-second and extract the last hidden representation before the classification layer for each frame. We compress the frame-level features and make them available on our website for download.
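As a rough sketch of this pipeline (not code released with the dataset), the function below decodes a video at one frame per second and collects penultimate-layer activations; `decode_at_1fps` and `inception_penultimate` are hypothetical placeholders for a real decoder and model, not functions from any specific library.

```python
# Hedged sketch of 1-fps frame-level feature extraction.
import numpy as np

def extract_frame_features(video_path, decode_at_1fps, inception_penultimate,
                           max_frames=360):
    """Returns an array of shape (num_frames, feature_dim)."""
    features = []
    for frame in decode_at_1fps(video_path):      # one frame per second
        feat = inception_penultimate(frame)       # e.g. a 2048-d vector
        features.append(np.asarray(feat, dtype=np.float32))
        if len(features) >= max_frames:           # illustrative cap
            break
    return np.stack(features)
```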
Overall, YouTube-8M contains more than 8 million videos (over 500,000 hours of video) from 4,800 classes. Figure 2 illustrates the scale of YouTube-8M, compared to existing image and video datasets. We hope that the unprecedented scale and diversity of this dataset will be a useful resource for developing advanced video understanding and representation learning techniques.
Towards this end, we provide extensive experiments comparing several state-of-the-art techniques for video representation learning, including Deep Networks [26] and LSTMs (Long Short-Term Memory networks) [13] on this dataset. In addition, we show that transferring video feature representations learned on this dataset leads to significant improvements on other benchmarks such as Sports-1M and ActivityNet.
In the rest of the paper, we first review existing benchmarks for image and video classification in Section 2. We present the details of our dataset including the collection process and a brief analysis of the categories and videos in Section 3. In Section 4, we review several approaches for the task of multi-label video classification given fixed frame-level features, and evaluate the approaches on the dataset. In Section 5, we show that features and models learned on our large-scale dataset generalize very well on other benchmarks. We offer concluding remarks in Section 6.
# 2. RELATED WORK
Image benchmarks have played a significant role in advancing computer vision algorithms for image understanding. Starting from a number of well labeled small-scale datasets such as Caltech 101/256 [8, 10], MSRC [32], PASCAL [7], image understanding research has rapidly advanced to utilizing larger datasets such as ImageNet [6] and SUN [38] for the next generation of vision algorithms. ImageNet in particular has enabled the development of deep feature learning techniques with millions of parameters such as the AlexNet
[20] and Inception [14] architectures due to the number of classes (21841), the diversity of the classes (27 top-level categories) and the millions of labeled images available.
A similar effort is in progress in the video understanding domain where the community has quickly progressed from small, well-labeled datasets such as KTH [22], Hollywood 2 [23], Weizmann [5], with a few thousand video clips, to medium-scale datasets such as UCF101 [33], Thumos'14 [16] and HMDB51 [21], with more than 50 action categories. Currently, the largest available video benchmarks are the Sports-1M [19], with 487 sports related activities and 1M videos, the YFCC-100M [34], with 800K videos and raw metadata (titles, descriptions, tags) for some of them, the FCVID [17] dataset of 91,223 videos manually annotated with 239 categories, and ActivityNet [12], with ~200 human activity classes and a few thousand videos. However, almost all current video benchmarks are restricted to recognizing action and activity categories, and have less than 500 categories.
YouTube-8M fills the gap in video benchmarks as follows:
• A large-scale video annotation and representation learning benchmark, reflecting the main themes of a video.
• A significant jump in the number and diversity of annotation classes: 4800 Knowledge Graph entities vs. less than 500 categories for all other datasets.
• A substantial increase in the number of labeled videos: over 8 million videos, more than 500,000 hours of video.
• Availability of pre-computed state-of-the-art features for 1.9 billion video frames.
We hope the pre-computed features will remove computational barriers, level the playing field, and enable researchers to explore new technologies in the video domain at an unprecedented scale.
# 3. YOUTUBE-8M DATASET
YouTube-8M is a benchmark dataset for video understanding, where the main task is to determine the key topical themes of a video. We start with YouTube videos since they are a good (albeit noisy) source of knowledge for diverse categories including various sports, activities, animals, foods, products, tourist attractions, games, and many more. We use the YouTube video annotation system [2] to obtain topic annotations for a video, and to retrieve videos for a given topic. The annotations are provided in the form of Knowledge Graph entities [3] (formerly, Freebase topics [1]). They are associated with each video based on the video's metadata, context, and content signals [2].
We use Knowledge Graph entities to succinctly describe the main themes of a video. For example, a video of biking on dirt roads and cliffs would have a central topic/theme of Mountain Biking, not Dirt, Road, Person, Sky, and so on. Therefore, the aim of the dataset is not only to understand what is present in each frame of the video, but also to identify the few key topics that best describe what the video is about. Note that this is different from typical event or scene recognition tasks, where each item belongs to a single event or scene [38, 28]. It is also different from most object recognition tasks, where the goal is to label everything visible in an image. This would produce thousands of labels on each video but without answering what the video is really about. The goal of this benchmark is to understand what is in the video and to summarize that into a few key topics. In the following sub-sections, we describe our vocabulary and video selection scheme, followed by a brief summary of dataset statistics.
# Figure 3: A tag-cloud representation of the top 200 entities. Font size is proportional to the number of videos labeled with the entity.
| Top-level Category | 1st Entity | 2nd Entity | 3rd Entity | 4th Entity | 5th Entity | 6th Entity | 7th Entity |
|---|---|---|---|---|---|---|---|
| Arts & Entertainment | Concert | Animation | Music video | Dance | Guitar | Disc jockey | Trailer |
| Autos & Vehicles | Vehicle | Car | Motorcycle | Bicycle | Aircraft | Truck | Boat |
| Beauty & Fitness | Fashion | Hair | Cosmetics | Weight training | Hairstyle | Nail | Mascara |
| Books & Literature | Book | Harry Potter | The Bible | Writing | Magazine | Alice | E-book |
| Business & Industrial | Train | Model aircraft | Fish | Water | Tractor pulling | Advertising | Landing |
| Computers & Electronics | Personal computer | Video game console | iPhone | PlayStation 3 | Tablet computer | Xbox 360 | Microsoft Windows |
| Finance | Money | Bank | Foreign Exchange | Euro | United States Dollar | Credit card | Cash |
| Food & Drink | Food | Cooking | Recipe | Cake | Chocolate | Egg | Eating |
| Games | Video game | Minecraft | Action-adventure game | Strategy video game | Sports game | Call of Duty | Grand Theft Auto V |
| Health | Medicine | Raw food | Ear | Glasses | Injury | Dietary supplement | Dental braces |
| Hobbies & Leisure | Fishing | Outdoor recreation | Radio-controlled model | Wedding | Christmas | Hunting | Diving |
| Home & Garden | Gardening | Home improvement | Kitchen | House | Garden | Door | Swimming pool |
| Internet & Telecom | Mobile phone | Smartphone | Website | Telephone | Sony Xperia | Google Nexus | World Wide Web |
| Jobs & Education | School | University | Teacher | High school | Kindergarten | Campus | Classroom |
| Law & Government | Tank | Firefighter | Soldier | President of the U.S.A. | President | Police officer | Fighter aircraft |
| News | Weather | Snow | News broadcasting | Rain | Newspaper | Mattel | Hail |
| People & Society | Prayer | Family | Human | Play-Doh | Dragon | Angel | Tarot |
| Pets & Animals | Animal | Dog | Cat | Horse | Bird | Aquarium | Puppy |
| Real Estate | House | Apartment | Dormitory | Condominium | Mansion | Skyscraper | Loft |
| Reference | Vampire | Bus | City | River | Mermaid | Village | Samurai |
| Science | Nature | Robot | Ice | Eye | Biology | Skin | Light |
| Shopping | Toy | LEGO | Doll | Sledding | Shoe | My Little Pony | Nike, Inc. |
| Sports | Motorsport | Football | Cycling | Winter sport | Basketball | Gymnastics | Wrestling |
| Travel | Amusement park | Hotel | Beach | Airport | Roller coaster | Lake | Resort |
| Full vocabulary | Vehicle | Concert | Music video | Animation | Video game | Motorsport | Football |
# Table 1: Most frequent entities for each of the top-level categories.
# 3.1 Vocabulary Construction
We followed two main tenets when designing the vocabulary for the dataset; namely 1) every label in the dataset should be distinguishable using visual information alone, and 2) each label should have a sufficient number of videos for training models and for computing reliable metrics on the test set. For the former, we used a combination of manually curated topics and human ratings to prune the vocabulary into a visual set. For the latter, we considered only entities having at least 200 videos in the dataset.
The Knowledge Graph contains millions of topics. Each topic has one or more types, which are curated with high precision. For example, there is an exhaustive list of animals with type animal and an exhaustive list of foods with type food. To construct our initial vocabulary, we manually selected a whitelist of 25 entity types that we considered visual (e.g. sport, tourist_attraction, inventions), and also blacklisted types that we thought are non-visual (e.g. music artists, music compositions, album, software). We then obtained all entities that have at least one whitelisted type and no blacklisted types, which resulted in an initial vocabulary of ~50,000 entities. Following this, we used human raters to manually prune this set into a smaller set of entities that are considered visual with high confidence, and are also recognizable without very deep domain expertise. Raters were provided with instructions and examples. Each entity was rated by 3 raters and the ratings were averaged. Figure 4a shows the main rating question. The process resulted in a total of ~10,000 entities that are considered visually recognizable and are not too fine-grained (i.e. can be recognized by non-domain experts after studying some examples). These entities were further pruned: we only kept entities that have more than 200 popular videos, as explained in the next section. The final set of entities in the dataset is fairly balanced in terms of the specificity of the topic it describes, and spans both coarse-grained and fine-grained entities, as shown in Figure 4b.
# 3.2 Collecting Videos
Having established the initial target vocabulary, we followed these
[Figure 4a shows an example rating task: an entity name, URL, and description (e.g. Thunderstorm, with its encyclopedic definition), followed by the question "How difficult is it to identify this entity in images or videos (without audio, titles, comments, etc.)?" and the answer scale: 1. Any layperson could; 2. Any layperson after studying examples, wikipedia, etc. could; 3. Experts in some field can; 4. Not possible without non-visual knowledge; 5. Non-visual. Figure 4b is a histogram of the number of entities at each specificity level (coarse-, medium-, and fine-grained).]
(a) Screenshot of the question displayed to human raters.
(b) Distribution of vocabulary topics in terms of speciï¬city.
Figure 4: Rater guidelines to assess how specific and visually recognizable each entity is, on a discrete scale of 1 to 5, where 1 is most visual and easily recognizable by a layperson. Each entity was rated by 3 raters. We kept only entities with a maximum average score of 2.5, and categorized them by specificity, into coarse-grained, medium-grained, and fine-grained entities, using equally sized score range buckets.
steps to obtain the videos:
| Dataset | Train | Validate | Test | Total |
|---|---|---|---|---|
| YouTube-8M | 5,786,881 | 1,652,167 | 825,602 | 8,264,650 |
• Collected all videos corresponding to the 10,000 visual entities that have at least 1,000 views, using the YouTube video annotation system [2]. We excluded videos that are too short (< 120 secs) or too long (> 500 secs).

• Randomly sampled 10 million videos among them.

• Obtained all entities for the sampled 10 million videos using the YouTube video annotation system. This completes the annotations.

• Filtered out entities with less than 200 videos, and videos with no remaining entities. This reduced the size of our data to 8,264,650 videos.

• Split our videos into 3 partitions, Train : Validate : Test, with ratios 70% : 20% : 10%. We publish features for all splits, but only publish labels for the Train and Validate partitions (a small sketch of the filtering-and-splitting step follows this list).
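The last two steps reduce to a simple filter-and-split over (video, entities) pairs. Below is a minimal Python sketch of that logic, assuming a hypothetical dict mapping video IDs to entity lists; the 200-video threshold and the 70/20/10 ratios are the ones stated above, while all names and the seeding are illustrative:

```python
import random
from collections import Counter

def filter_and_split(video_entities, min_videos=200, seed=0):
    """video_entities: dict video_id -> list of entity ids (illustrative input)."""
    counts = Counter(e for ents in video_entities.values() for e in ents)
    kept_entities = {e for e, c in counts.items() if c >= min_videos}
    # Drop rare entities, then drop videos left with no entities.
    dataset = {v: [e for e in ents if e in kept_entities]
               for v, ents in video_entities.items()}
    dataset = {v: ents for v, ents in dataset.items() if ents}
    # 70% / 20% / 10% Train : Validate : Test split.
    vids = sorted(dataset)
    random.Random(seed).shuffle(vids)
    n = len(vids)
    train = vids[: int(0.7 * n)]
    validate = vids[int(0.7 * n): int(0.9 * n)]
    test = vids[int(0.9 * n):]
    return dataset, train, validate, test
```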
Table 2: Dataset partition sizes.
# 3.3 Features
The original size of the video dataset is hundreds of Terabytes, and covers over 500,000 hours of video. This is impractical to process by most research teams (using a real-time video processing engine, it would take over 50 years to go through the data). Therefore, we pre-process the videos and extract frame-level features using a state-of-the-art deep model: the publicly available Inception network [4] trained on ImageNet [14]. Concretely, we decode each video at 1 frame-per-second up to the first 360 seconds (6 minutes), feed the decoded frames into the Inception network, and fetch the ReLu activation of the last hidden layer, before the classification layer (layer name pool_3/_reshape). The feature vector is 2048-dimensional per second of video. While this removes motion information from the videos, recent work shows diminishing returns from motion features as the size and diversity of the video data increases [26, 35]. The static frame-level features provide an excellent baseline, and constructing compact and efficient motion features is beyond the scope of this paper. Nonetheless, we hope to extend the dataset with audio and motion features in the future. We cap processing of each video at the first 360 seconds for storage and computational reasons. For comparison, the average length of videos in UCF-101 is 10–15 seconds, in Sports-1M it is 336 seconds, and in this dataset it is 230 seconds.
Figure 5: Number of videos in log-scale versus entity rank in log scale. Entities were sorted by number of videos. We note that this somewhat follows the natural Zipf distribution.
Afterwards, we apply PCA (+ whitening) to reduce feature dimensions to 1024, followed by quantization (1 byte per coefficient). These two compression techniques reduce the size of the data by a factor of 8. The mean vector and covariance matrix for PCA were computed on all frames from the Train partition. We quantize each 32-bit float into 256 distinct values (8 bits) using optimally computed (non-uniform) quantization bin boundaries. We confirmed that the size reduction does not significantly hurt the evaluation metrics. In fact, training all baselines on the full-size data (8 times larger than what we publish) increases all evaluation metrics by less than 1%.
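A minimal numpy sketch of this compression step, assuming the PCA statistics are fit on Train-partition frames; the 1024-dimensional projection and 8-bit codes follow the description above, but the uniform bin placement here is a simplification of the optimally computed non-uniform boundaries:

```python
import numpy as np

def fit_pca(train_frames, out_dim=1024, eps=1e-8):
    # train_frames: (num_frames, 2048) ReLU activations from the Train partition.
    mean = train_frames.mean(axis=0)
    cov = np.cov(train_frames - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:out_dim]      # keep top-variance directions
    return mean, eigvecs[:, order], eigvals[order] + eps

def project_whiten(frames, mean, components, eigvals):
    # Decorrelate with PCA, then whiten to unit variance per dimension.
    return (frames - mean) @ components / np.sqrt(eigvals)

def quantize(feats, lo=-4.0, hi=4.0):
    # 1 byte per coefficient; uniform bins stand in for the optimal non-uniform ones.
    q = np.clip((feats - lo) / (hi - lo), 0.0, 1.0)
    return np.round(q * 255).astype(np.uint8)

def dequantize(codes, lo=-4.0, hi=4.0):
    return codes.astype(np.float32) / 255.0 * (hi - lo) + lo
```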
Note that while this dataset comes with standard frame-level features, it leaves a lot of room for investigating video representation learning approaches on top of the fixed frame-level features (see Section 4 for approaches we explored).
# 3.4 Dataset Statistics
The YouTube-8M dataset contains 4,800 classes and a total of
(a) Number of entities in each top-level category.
(b) Number of train videos in log-scale per top-level category.
Figure 6: Top-level category statistics of the YouTube-8M dataset.
8,264,650 videos. A video may be annotated with more than one class and the average number of classes per video is 1.8. Table 2 shows the number of videos for which we are releasing features, across the three splits.
We processed only the first six minutes of each video, at 1 frame-per-second. The average length of a video in the dataset is 229.6 seconds, which amounts to ~1.9 billion frames (and corresponding features) across the dataset.

We grouped the 4,800 entities into 24 top-level categories to measure statistics and illustrate diversity. Although we do not use these categories during training, we are releasing the entity-to-category mapping for completeness. Table 1 shows the top entities per category. Note that while some categories themselves may not seem visual, most of the entities within them are visual. For instance, Jobs & Education includes universities, classrooms, lectures, etc., and Law & Government includes police, emergency vehicles, and military-related entities, which are well represented and visual.
Figure 5 shows a log-log scale distribution of entities and videos. Figures 6a and 6b show the size of categories, respectively, in terms of the number of entities and the number of videos.
# 3.5 Human Rated Test Set

The annotations from the YouTube video annotation system can be noisy and incomplete, as they are automatically generated from metadata, anchor text, comments, and user engagement signals [2]. To quantify the noise, we uniformly sampled over 8000 videos from the Test partition, and used 3 human raters per video to exhaustively rate their labels. We measured the precision and recall of the ground truth labels to be 78.8% and 14.5%, respectively, with respect to the human raters. Note that typical inter-rater agreement on similar annotation tasks with human raters is also around 80%, so the precision of these ground truth labels is perhaps comparable to (non-expert) human-provided labels. The recall, however, is low, which makes this an excellent test bed for approaches that deal with missing data. We report the accuracy of our models primarily on the (noisy) Validate partition but also show some results on the much smaller human-rated set, showing that some of the metrics are surprisingly similar on the two datasets.

While the baselines in Section 4 show very promising results, we believe that they can be significantly improved (when evaluated on the human-based ground truth) if one explicitly models incorrect [29] (78.8% precision) or missing [40, 25] (14.5% recall) training labels. We believe this is an exciting area of research that this dataset will enable at scale.

# 4. BASELINE APPROACHES

# 4.1 Models from Frame Features

One of the challenges with this dataset is that we only have video-level ground-truth labels. We do not have any additional information that specifies how the labels are localized within the video, nor their relative prominence in the video, yet we want to infer their importance for the full video. In this section, we consider models trained to predict the main themes of the video using the input frame-level features. Frame-level models have shown competitive performance for video-level tasks in previous work [19, 26]. A video v is given by a sequence of frame-level features x^v_{1:F_v}, where x^v_j is the feature of the jth frame from video v.

# 4.1.1 Frame-Level Models and Average Pooling

Since we do not have frame-level ground-truth, we assign the video-level ground-truth to every frame within that video. More sophisticated formulations based on multiple-instance learning are left for future work. From each video, we sample 20 random frames and associate all frames to the video-level ground-truth. This results in about 120 million frames. For each entity e, we get 120M instances of (x_i, y^e_i) pairs, where x_i ∈ R^1024 is the Inception feature and y^e_i ∈ {0, 1} is the ground-truth associated with entity e for the ith example. We train 4800 independent one-vs-all classifiers, one for each entity e. We use the online training framework after parallelizing the work for each entity across multiple workers. During inference, we score every frame in the test video using the models for all classes. Since all our evaluations are based on video-level ground truths, we need to aggregate the frame-level scores (for each entity) to a single video-level score. The frame-level probabilities are aggregated to the video-level using a simple average. We choose average instead of max pooling since we want to reduce the effect of outlier detections and capture the prominence of each entity in the entire video. In other words, let p(e|x) be the probability of existence of e given the features x. We compute the probability
p_v(e | x^v_{1:F_v}) of the entity e associated with the video v as

p_v(e | x^v_{1:F_v}) = (1/F_v) Σ_{j=1}^{F_v} p(e | x^v_j).   (1)

Figure 7: The network architecture of the DBoF approach. Input frame features are first fed into an up-projection layer with shared parameters for all frames. This is followed by a pooling layer that converts the frame-level sparse codes into a video-level representation. A few hidden layers and a classification layer provide the final video-level predictions.
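Eq. 1 amounts to averaging per-frame sigmoid scores. A minimal sketch, assuming a trained per-entity weight vector w_e from the one-vs-all classifiers described above (all names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def video_score(frame_feats, w_e, b_e=0.0):
    """frame_feats: (F_v, 1024) frame features; returns p_v(e | x^v_{1:F_v})."""
    frame_probs = sigmoid(frame_feats @ w_e + b_e)  # p(e | x^v_j) per frame
    return frame_probs.mean()                        # Eq. 1: average pooling
```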
# 4.1.2 Deep Bag of Frame (DBoF) Pooling
Inspired by the success of various classic bag-of-words representations for video classification [23, 36], we next consider a Deep Bag-of-Frames (DBoF) approach. Figure 7 shows the overall architecture of our DBoF network for video classification. The N-dimensional input frame-level features from k randomly selected frames of a video are first fed into a fully connected layer of M units with RELU activations. Typically, with M > N, the input features are projected onto a higher dimensional space. Crucially, the parameters of the fully connected layer are shared across the k input frames. Along with the RELU activation, this leads to a sparse coding of the input features in the M-dimensional space.
The obtained sparse codes are fed into a pooling layer that aggregates the codes of the k frames into a single fixed-length video representation. We use max pooling to perform the aggregation. We use a batch normalization layer before pooling to improve stability and speed up convergence. The obtained fixed-length descriptor of the video can now be classified into the output classes using a logistic or softmax layer, with additional fully connected layers in between. The M dimensions of the projection layer can be thought of as M discriminative clusters which can be trained in a single network end to end using backpropagation.
The entire network is trained using Stochastic Gradient Descent (SGD) with logistic loss for a logistic layer and cross-entropy loss for a softmax layer. The backpropagated gradients from the top layer train the weight vectors of the projection layer in a discriminative fashion in order to provide a powerful representation of the input bag of features. A similar network was proposed in [26], where the convolutional layer outputs are pooled across all the frames of a video to obtain a fixed-length descriptor. However, the network in [26] does not use an intermediate projection layer, which we found to be a crucial difference when learning from input frame features. Note that the up-projection layer into sparse codes is similar to what Fisher Vector [27] and VLAD [15] approaches do, but the projection (i.e., clustering) is done discriminatively here. We
also experimented with Fisher Vectors and VLAD but were not able to obtain competitive results using comparable codebook sizes.
Hyperparameters: We considered values of {2048, 4096, 8192} for the number of units in the projection layer of the network and found that larger values lead to better results. We used 8192 for all datasets. We used a single hidden layer with 1024 units between the pooling layer and the final classification layer in all experiments. The network was trained using SGD with AdaGrad, a learning rate of 0.1, and a weight decay penalty of 0.0005.
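A minimal numpy sketch of the DBoF forward pass under these hyperparameters (8192-unit shared up-projection, max pooling over k sampled frames, one 1024-unit hidden layer, logistic output). Weights are random placeholders and batch normalization is omitted, so this is a shape-level illustration rather than the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, H, C = 1024, 8192, 1024, 4800      # input dim, projection, hidden, classes
W_up = rng.normal(0, 0.01, (N, M))       # shared across all k input frames
W_h = rng.normal(0, 0.01, (M, H))
W_o = rng.normal(0, 0.01, (H, C))

def relu(x):
    return np.maximum(x, 0.0)

def dbof_forward(frames):                # frames: (k, N) sampled from one video
    codes = relu(frames @ W_up)          # sparse up-projected codes, (k, M)
    video = codes.max(axis=0)            # max pooling -> fixed-length (M,)
    hidden = relu(video @ W_h)
    return 1.0 / (1.0 + np.exp(-(hidden @ W_o)))  # per-class probabilities
```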
# 4.1.3 Long Short-Term Memory (LSTM)
We take a similar approach to [26] to utilize LSTMs for video-level prediction. However, unlike that work, we do not have access to the raw video frames. This means that we can only train the LSTM and Softmax layers.
We experimented with the number of stacked LSTM layers and the number of hidden units. We empirically found that 2 layers with 1024 units provided the highest performance on the validation set. Similarly to [26], we also employ linearly increasing per-frame weights going from 1/N to 1 for the last frame.
During training, the LSTM was unrolled for 60 iterations, so the gradient horizon for the LSTM was 60 seconds. We experimented with a larger number of unroll iterations, but that slowed down the training process considerably. In the end, the best model was the one trained for the largest number of steps (rather than the one trained for the most wall-clock time).
In order to transfer the learned model to ActivityNet, we used a fully-connected model which takes as input the concatenation of the LSTM layers' outputs as computed at the last frame of the videos in each of these two benchmarks. Unlike traditional transfer learning methods, we do not fine-tune the LSTM layers. This approach is more robust to overfitting than traditional methods, which is crucial for obtaining competitive performance on ActivityNet due to its size. We did perform full fine-tuning experiments on Sports-1M, which is large enough to fine-tune the entire LSTM model after pre-training.
# 4.2 Video level representations
Instead of training classifiers directly on frame-level features, we also explore extracting a task-independent fixed-length video-level feature vector from the frame-level features x^v_{1:F_v} for each video v. There are several benefits of extracting fixed-length video features:
1. Standard classifiers can apply: Since the dimensionality of the representations is fixed across videos, we may train standard classifiers like logistic regression, SVMs, or mixtures of experts.

2. Compactness: We get a compact representation for the entire video, thereby reducing the training data size by a few orders of magnitude.

3. More suitable for domain adaptation: Since the video-level representations are unsupervised (extracted independently of the labels), these representations are far less specialized to the labels associated with the current dataset, and can generalize better to new tasks or video domains.
Formally, a video-level feature φ(x^v_{1:F_v}) is a fixed-length representation (at the video level). We explore a simple aggregation technique for getting these video-level representations. We also experimented with Fisher Vector (FV) [27] and VLAD [15] approaches for task-independent video-level representations but were not able to achieve competitive results for FV or VLAD representations of similar dimensionality. We leave it as future work to come up with compact FV or VLAD type representations that outperform the much simpler approach described below.
# 4.2.1 First, second order and ordinal statistics
Given the frame-level features x^v_j ∈ R^1024, we extract the mean µ^v ∈ R^1024 and the standard deviation σ^v ∈ R^1024. Additionally, we also extract the top 5 ordinal statistics for each dimension. Formally, Top_K(x^v(j)_{1:F_v}) returns a K-dimensional vector where the pth dimension contains the pth highest value of the feature vector's jth dimension over the entire video. We denote Top_K(x^v_{1:F_v}) to be a KD-dimensional vector obtained by concatenating the ordinal statistics for each dimension. Thus, the resulting feature vector φ(x^v_{1:F_v}) for the video becomes:

φ(x^v_{1:F_v}) = [ µ(x^v_{1:F_v}); σ(x^v_{1:F_v}); Top_K(x^v_{1:F_v}) ].   (2)
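A compact numpy sketch of Eq. 2, with K = 5 as in the text:

```python
import numpy as np

def video_level_features(frames, k=5):
    """frames: (F_v, D) frame features -> (D + D + k*D,) vector of Eq. 2."""
    mu = frames.mean(axis=0)
    sigma = frames.std(axis=0)
    # Top-k values per dimension, highest first (ordinal statistics).
    topk = -np.sort(-frames, axis=0)[:k]          # (k, D)
    return np.concatenate([mu, sigma, topk.reshape(-1)])
```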
# 4.2.2 Feature normalization
Standardization of features has been proven to help with online learning algorithms [14, 37], as it makes the updates using Stochastic Gradient Descent (SGD) based algorithms (like Adagrad) more robust to learning rates, and speeds up convergence.

Before training our one-vs-all classifiers on the video-level representation, we apply global normalization to the feature vectors φ(x^v_{1:F_v}) (defined in Equation 2). Similar to how we processed the frame features, we subtract the mean, then use PCA to decorrelate and whiten the features. The normalized video features are now approximately multivariate Gaussian with zero mean and identity covariance. This makes the gradient steps across the various dimensions independent, and the learning algorithm gets an unbiased view of each dimension (since the same learning rate is applied to each dimension). Finally, the resulting features are L2 normalized. We found that these normalization techniques make our models train faster.
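A small sketch of this global normalization, assuming the PCA statistics were fit on the training-set φ vectors (reusing the fit_pca-style quantities from the frame-feature sketch):

```python
import numpy as np

def normalize_video_feature(phi, mean, components, eigvals, eps=1e-8):
    # Subtract the mean, decorrelate and whiten with PCA, then L2-normalize.
    z = (phi - mean) @ components / np.sqrt(eigvals + eps)
    return z / (np.linalg.norm(z) + eps)
```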
# 4.3 Models from Video Features
Given the video-level representations, we train independent binary classifiers for each label using all the data. Exploiting the structure information between the various labels is left for future work. A key challenge is training these classifiers at the scale of this dataset. Even with a compact video-level representation for the 6M training videos, it is infeasible to train batch optimization classifiers, like SVMs. Instead, we use online learning algorithms, and use Adagrad to perform model updates on the weight vectors given a small mini-batch of examples (each example is associated with a binary ground-truth value).
# 4.3.1 Logistic Regression
Given D-dimensional video-level features, the parameters Θ of the logistic regression classifier are the entity-specific weights w_e. During scoring, given x ∈ R^{D+1} the video-level feature of the test example, the probability of the entity e is given as p(e|x) = σ(w_e^T x). The weights w_e are obtained by minimizing the total log-loss on the training data, given as:

min_{w_e}  λ ||w_e||² + Σ_{i=1}^{N} L(y_{i,e}, σ(w_e^T x_i)),   (3)

where σ(.) is the standard logistic, σ(z) = 1/(1 + exp(−z)).
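A minimal sketch of one online Adagrad step for a single one-vs-all logistic classifier minimizing Eq. 3; the learning rate and weight decay values here are illustrative:

```python
import numpy as np

def adagrad_logistic_step(w, hist, x, y, lr=1.0, l2=1e-4, eps=1e-8):
    """One update on a mini-batch x: (B, D), y: (B,) in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    grad = x.T @ (p - y) / len(y) + l2 * w   # gradient of Eq. 3 (batch-averaged)
    hist += grad ** 2                        # accumulated squared gradients
    w -= lr * grad / (np.sqrt(hist) + eps)   # per-dimension Adagrad step
    return w, hist
```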
# 4.3.2 Hinge Loss
Since training batch SVMs on such a large dataset is impossible, we use the online SVM approach. As in the conventional SVM framework, we use ±1 to represent negative and positive labels respectively. Given binary ground-truth labels y (0 or 1), and predicted labels ŷ (positive or negative scalars), the hinge loss is:

L(y, ŷ) = max(0, b − (2y − 1) ŷ),   (4)

where b is the hinge-loss parameter, which can be fine-tuned further or set to 1.0. Due to the presence of the max function, there is a discontinuity in the first derivative. This results in the subgradient being used in the updates, slowing convergence significantly.
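The corresponding subgradient step for Eq. 4 (with b = 1.0), in the same online mini-batch setup; names are illustrative:

```python
import numpy as np

def hinge_subgradient_step(w, x, y, lr=0.01, b=1.0):
    """x: (B, D); y: (B,) in {0, 1}, mapped to +/-1 as in the text."""
    s = 2.0 * y - 1.0
    margin_violated = (b - s * (x @ w)) > 0            # where max(0, .) is active
    grad = -(x * (s * margin_violated)[:, None]).mean(axis=0)
    return w - lr * grad
```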
# 4.3.3 Mixture of Experts (MoE)
Mixture of experts (MoE) was first proposed by Jacobs and Jordan [18]. The binary classifier for an entity e is composed of a set of hidden states, or experts, H_e. A softmax is typically used to model the probability of choosing each expert. Given an expert, we can use a sigmoid to model the existence of the entity. Thus, the final probability for entity e's existence is

p(e|x) = Σ_{h∈H_e} p(h|x) σ(u_h^T x),

where p(h|x) is a softmax over |H_e| + 1 states. In other words,

p(h|x) = exp(w_h^T x) / ( 1 + Σ_{h'∈H_e} exp(w_{h'}^T x) ).

The last, (|H_e| + 1)th, state is a dummy state that always results in the non-existence of the entity. Denote p_{y|x} = p(y = 1|x), p_{h|x} = p(h|x) and p_{y|h,x} = p(y = 1|x, h). Given a set of training examples (x_i, g_i)_{i=1...N} for a binary classifier, where x_i is the feature vector and g_i ∈ [0, 1] is the ground-truth, let L(p_i, g_i) be the log-loss between the predicted probability and the ground-truth:
L(p, g) = −g log p − (1 − g) log(1 − p).   (5)

We can directly write the derivative of L[p_{y|x}; g] with respect to the softmax weight w_h and the logistic weight u_h as

∂L[p_{y|x}; g] / ∂w_h = p_{h|x} (p_{y|h,x} − p_{y|x}) (p_{y|x} − g) / ( p_{y|x} (1 − p_{y|x}) ),   (6)

∂L[p_{y|x}; g] / ∂u_h = p_{h|x} p_{y|h,x} (1 − p_{y|h,x}) (p_{y|x} − g) / ( p_{y|x} (1 − p_{y|x}) ).   (7)
We use Adagrad with a learning rate of 1.0 and batch size of 32 to learn the weights. Since we are training independent classifiers for each label, the work is distributed across multiple machines.

For MoE models, we experimented with varying numbers of mixtures (1, 2, 4), and found that performance increases by 0.5%-1% on all metrics as we go from 1 to 2, and then to 4 mixtures, but the number of model parameters correspondingly increases by 2 or 4 times. We chose 2 mixtures as a good compromise and report numbers with the 2-mixture MoE model for all datasets.
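A numpy sketch of the MoE probability above; w holds the softmax (gating) weights and u the per-expert logistic weights, both illustrative placeholders:

```python
import numpy as np

def moe_probability(x, w, u):
    """x: (D,); w, u: (num_experts, D).
    Computes p(e|x) = sum_h p(h|x) * sigmoid(u_h^T x), where p(h|x) is a softmax
    over |H_e| + 1 states; the extra dummy state (logit 0) predicts non-existence."""
    logits = w @ x                                   # (num_experts,)
    m = max(float(logits.max()), 0.0)                # shift for numerical stability
    z = np.exp(logits - m)
    denom = z.sum() + np.exp(-m)                     # "+1" dummy state
    p_h = z / denom
    p_y_given_h = 1.0 / (1.0 + np.exp(-(u @ x)))
    return float(p_h @ p_y_given_h)
```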
# 5. EXPERIMENTS
In this section, we first provide benchmark baseline results for the above multi-label classification approaches on the YouTube-8M dataset. We then evaluate the usefulness of video representations learned on this dataset for other tasks, such as Sports-1M sports classification and ActivityNet activity classification.
# 5.1 Evaluation Metrics
Mean Average Precision (mAP): For each entity, we first round the annotation scores into buckets of 10⁻⁴ and sort all the non-zero annotations according to the model score. At a given threshold τ, the precision P(τ) and recall R(τ) are given by

P(τ) = Σ_{t∈T} I(y_t ≥ τ) g_t / Σ_{t∈T} I(y_t ≥ τ),   (8)

R(τ) = Σ_{t∈T} I(y_t ≥ τ) g_t / Σ_{t∈T} g_t,   (9)
| Modeling Approach | Input Features | mAP | Hit@1 | PERR |
|---|---|---|---|---|
| Logistic + Average (4.1.1) | Frame-level, {x^v_{1:F_v}} | 11.0 | 50.8 | 42.2 |
| Deep Bag of Frames (4.1.2) | Frame-level, {x^v_{1:F_v}} | 26.9 | 62.7 | 55.1 |
| LSTM (4.1.3) | Frame-level, {x^v_{1:F_v}} | 26.6 | 64.5 | 57.3 |
| Hinge loss (4.3) | Video-level, µ | 17.0 | 56.3 | 47.9 |
| Logistic Regression (4.3) | Video-level, µ | 28.1 | 60.5 | 53.0 |
| Mixture-of-2-Experts (4.3) | Video-level, µ | 29.6 | 62.3 | 54.9 |
| Mixture-of-2-Experts (4.3) | Video-level, [µ; σ; Top5] | 30.0 | 63.3 | 55.8 |
Table 3: Results of the various benchmark baselines on the YouTube-8M dataset. We find that binary classifiers on simple video-level representations perform substantially better than frame-level approaches. Deep learning methods such as DBoF and LSTMs do not provide a substantial boost over traditional dense feature aggregation methods because the underlying frame-level features are already very strong.
| Approach | Hit@1 | PERR | Hit@5 |
|---|---|---|---|
| Deep Bag of Frames (DBoF) (4.1.2) | 68.6 | 29.0 | 83.5 |
| LSTM (4.1.3) | 69.1 | 30.5 | 84.7 |
| Mixture-of-2-Experts ([µ; σ; Top5]) (4.3) | 70.1 | 29.1 | 84.8 |
Table 4: Results of the three best approaches on the human-rated test set of the YouTube-8M dataset. A comparison with the results on the validation set (Table 3) shows that the relative strengths of the different approaches are largely preserved on both sets.
where I(.) is the indicator function. The average precision, approximating the area under the precision-recall curve, can then be computed as

AP = Σ_{j=1}^{10000} P(τ_j) [ R(τ_j) − R(τ_{j+1}) ],   (10)

where τ_j = j / 10000. The mean average precision (mAP) is computed as the unweighted mean of all the per-class average precisions.
Hit@k: This is the fraction of test samples that contain at least one of the ground truth labels in the top k predictions. If rank_{v,e} is the rank of entity e on video v (with the best scoring entity having rank 1), and G_v is the set of ground-truth entities for v, then Hit@k can be written as:

Hit@k = (1/|V|) Σ_{v∈V} ∨_{e∈G_v} I(rank_{v,e} ≤ k),   (11)

where ∨ is logical OR.
Precision at equal recall rate (PERR): We measure the video- level annotation precision when we retrieve the same number of entities per video as there are in the ground-truth. With the same notation as for Hit@k, PERR can be written as:
PERR = (1 / |{v ∈ V : |G_v| > 0}|) Σ_{v∈V : |G_v|>0} [ (1/|G_v|) Σ_{e∈G_v} I(rank_{v,e} ≤ |G_v|) ].   (12)
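A small sketch of Hit@k (Eq. 11) and PERR (Eq. 12) over model scores; preds is a (num_videos, num_classes) score matrix and truths a list of ground-truth entity index sets:

```python
import numpy as np

def hit_at_k(preds, truths, k):
    hits = 0
    for scores, gt in zip(preds, truths):
        top = np.argsort(-scores)[:k]
        hits += bool(set(top) & gt)          # logical OR over ground-truth entities
    return hits / len(preds)                 # Eq. 11

def perr(preds, truths):
    vals = []
    for scores, gt in zip(preds, truths):
        if not gt:
            continue                          # only videos with |G_v| > 0
        top = np.argsort(-scores)[: len(gt)]  # retrieve |G_v| entities
        vals.append(len(set(top) & gt) / len(gt))
    return float(np.mean(vals))               # Eq. 12
```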
# 5.2 Results on YouTube-8M
Table 3 shows results for all approaches on the YouTube-8M dataset. Frame-level models (row 1), trained on the strong Inception features and logistic regression, followed by simple averaging of predictions across all frames, perform poorly on this dataset. This shows that the video-level prediction task cannot be reduced to simple frame-level classification.
Aggregating the frame-level features at the video-level using simple mean pooling of frame-level features, followed by a hinge loss or logistic regression model, provides a non-trivial improvement in video-level accuracies over naive averaging of the frame-level predictions. Further improvements are observed by using mixture-of-experts models and by adding other statistics, like the standard deviation and ordinal features, computed over the frame-level features. Note that the standard deviation and ordinal statistics are more meaningful in the original RELU activation space, so we reconstruct the RELU features from the PCA-ed and quantized features by inverting the quantization and the PCA using the provided PCA matrix, computing the collection statistics over the reconstructed frame-level RELU features, and then re-applying PCA, whitening, and L2 normalization as described in Section 4.2.2. This simple task-independent feature pooling and normalization strategy yields some of the most competitive results on this dataset.
Finally, we also evaluate two deep network architectures that have produced state-of-the-art results on previous benchmarks [26]. The DBoF architecture ignores sequence information and treats the input video as a bag of frames, whereas LSTMs use state information to preserve the video sequence. The DBoF approach with a logistic classification layer produces 2% (absolute) gains in Hit@1 and PERR metrics over using simple mean feature pooling and a single-layer logistic model, which shows the benefits of discriminatively training a projection layer to obtain a task-specific video-level representation. The mAP results for DBoF are slightly worse than the mean pooling + logistic model, which we attribute to slower training and convergence of DBoF on rare classes (mAP is strongly affected by results on rare classes and the joint class training of DBoF is a disadvantage for those classes).
The LSTM network generally performs best, except for mAP, where the 1-vs-all binary MoE classifiers perform better, likely for the same reasons of slower convergence on rare classes. LSTM does improve on Hit@1 and PERR metrics, as expected given its ability to learn long-term correlations in the time domain. Also, in [26], the authors used data augmentation by sampling multiple snippets of fixed length from a video and averaged the results, which could produce even better accuracies than our current results. We also considered Fisher vectors and VLAD given their recent success in aggregating CNN features at the video-level in [39]. However, for the same dimensionality as the video-level representations of the LSTM, DBoF, and mean features, they did not produce competitive results.
# 5.2.1 Human Rated Test Set
We also report results on the human-rated test set of over 8000 videos (see Section 3.5) in Table 4 for the top three approaches. We report PERR, Hit@1, and Hit@5, since the mAP is not reliable given the size of the test set. The Hit@1 numbers are uniformly higher for all approaches when compared to the incomplete validation set in Table 3, whereas the PERR numbers are uniformly lower. This is largely attributable to the missing labels in the validation set (recall of the Validation set labels is around 15% compared to exhaustive human ratings). However, the relative ordering of the various approaches is fairly consistent between the two sets, showing that the validation set results are still reliable enough to compare different approaches.
# 5.3 Results on Sports-1M
Next, we investigate generalization of the video-level features learned using the YouTube-8M dataset and perform transfer learning experiments on the Sports-1M dataset. The Sports-1M dataset [19] consists of 487 sports activities with 1.2 million YouTube videos and is one of the largest benchmarks available for sports/activity recognition. We use the first 360 seconds of a video sampled at 1 frame per second for all experiments.
To evaluate transfer learning on this dataset, in one experiment we simply use the aggregated video-level descriptors, based on the PCA matrix learned on the YouTube-8M dataset, and train MoE or
| Approach | mAP | Hit@1 | Hit@5 |
|---|---|---|---|
| Logistic Regression (µ) (4.3) | 58.0 | 60.1 | 79.6 |
| Mixture-of-2-Experts (µ) (4.3) | 59.1 | 61.5 | 80.4 |
| Mixture-of-2-Experts ([µ; σ; Top5]) (4.2.1) | 61.3 | 63.2 | 82.6 |
| LSTM (4.1.3) | 66.7 | 64.9 | 85.6 |
| +Pretrained on YT-8M (4.1.3) | 67.6 | 65.7 | 86.2 |
| Hierarchical 3D Convolutions [19] | - | 61.0 | 80.0 |
| Stacked 3D Convolutions [35] | - | 61.0 | 85.0 |
| LSTM with Optical Flow and Pixels [26] | - | 73.0 | 91.0 |
| Approach | mAP | Hit@1 | Hit@5 |
|---|---|---|---|
| Mixture-of-2-Experts (µ) (4.3) | 69.1 | 68.7 | 85.4 |
| +Pretrained PCA on YT-8M | 74.1 | 72.5 | 89.3 |
| Mixture-of-2-Experts ([µ; σ; Top5]) (4.2.1) | 74.2 | 72.3 | 89.6 |
| +Pretrained PCA on YT-8M | 77.6 | 74.9 | 91.6 |
| LSTM (4.1.3) | 57.9 | 63.4 | 81.0 |
| +Pretrained on YT-8M (4.1.3) | 75.6 | 74.2 | 92.4 |
| Ma, Bargal et al. [24] | 53.8 | - | - |
| Heilbron et al. [12] | 43.0 | - | - |
(a) Sports-1M: Our learned features are competitive on this dataset, beating all but the approach of [26], which learned directly from the video pixels. Both [26] and [35] included motion features.

(b) ActivityNet: Since the dataset is small, we see a substantial boost in performance by pre-training on YouTube-8M or using the transfer-learnt PCA versus the one learnt from scratch on ActivityNet.

Table 5: Results of transferring video representations learned on the YouTube-8M dataset to (a) Sports-1M and (b) ActivityNet.
logistic models on top using target domain training data.
For the LSTM networks, we have two scenarios: 1) we use the PCA-transformed features and learn an LSTM model from scratch using these features; or 2) we use the LSTM layers pre-trained on the YouTube-8M task, and fine-tune them on the Sports-1M dataset (along with a new softmax classifier).
Table 5a shows the evaluation metrics for the various video-level representations on the Sports-1M dataset. Our learned features are competitive on this dataset, with the best approach beating all but the approach of [26], which learned directly from the pixels of the videos in the Sports-1M dataset, including optical flow, and made use of data augmentation strategies and multiple inferences over several video segments. We also show that even on such a large dataset (1M videos), pre-training on YouTube-8M still helps, and improves the LSTM performance by ~1% on all metrics (vs. no pre-training).
# 5.4 Results on ActivityNet
Our final set of experiments demonstrates the generality of our learned features for the ActivityNet untrimmed video classification task. Similar to the Sports-1M experiments, we compare directly training on the ActivityNet dataset against pre-training on YouTube-8M for aggregation-based and LSTM approaches. As seen in Table 5b, all of the transferred features are much better in terms of all metrics than training on ActivityNet alone. Notably, without the use of motion information, our best feature is better by up to 80% than the HOG, HOF, MBH, FC-6, FC-7 features used in [12]. This result shows that features learned on YouTube-8M generalize very well to other datasets/tasks. We believe this is because of the diversity and scale of the videos present in YouTube-8M.
# 6. CONCLUSIONS

In this paper, we introduce YouTube-8M, a large-scale video benchmark for video classification and representation learning. With YouTube-8M, our goal is to advance the field of video understanding, similarly to what large-scale image datasets have done for image understanding. Specifically, we address the two main challenges with large-scale video understanding: (1) collecting a large labeled video dataset, with reasonable quality labels, and (2) removing computational barriers by pre-processing the dataset and providing state-of-the-art frame-level features to build from. We process over 50 years' worth of video, and provide features for nearly 2 billion frames from more than 8 million videos, which enables training a reasonable model at this scale within 1 day, using an open source framework on a single machine! We expect this dataset to level the playing field for academia researchers, bridge the gap with large-scale labeled video datasets, and significantly accelerate research on video understanding. We hope this dataset will prove to be a test bed for developing novel video representation learning algorithms, and especially approaches that deal effectively with noisy or incomplete labels.

As a side effect, we also provide one of the largest and most diverse public visual annotation vocabularies (consisting of 4800 visual Knowledge Graph entities), constructed from popularity signals on YouTube as well as manual curation, and organized into 24 top-level categories.

We provide extensive experiments comparing several strong baselines for video representation learning, including Deep Networks and LSTMs, on this dataset. We demonstrate the efficacy of using a fairly unexplored class of models (mixture-of-experts) and show that they can outperform popular classifiers like logistic regression and SVMs. This is particularly true for our large dataset, where many classes can be multi-modal. We explore various video-level representations using simple statistics extracted from the frame-level features and model the probability of an entity given the aggregated vector as an MoE. We show that this yields competitive performance compared to more complex approaches (that directly use frame-level information) such as LSTM and DBoF. This also demonstrates that if the underlying frame-level features are strong, the need for more sophisticated video-level modeling techniques is reduced.

Finally, we illustrate the usefulness of the dataset by performing transfer learning experiments on existing video benchmarks, Sports-1M and ActivityNet. Our experiments show that features learned on this dataset generalize well on these benchmarks, including setting a new state-of-the-art on ActivityNet.
# 7. REFERENCES

[1] Freebase: A community-curated database of well-known people, places, and things. https://www.freebase.com.

[2] Google I/O 2013 - Semantic video annotations in the YouTube Topics API: Theory and applications. https://www.youtube.com/watch?v=wf_77z1H-vQ.
[3] Knowledge Graph Search API. https://developers.google.com/knowledge-graph/.
[4] TensorFlow: Image recognition. https://www.tensorflow.org/tutorials/image_recognition.

[5] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. In Proceedings of the International Conference on Computer Vision (ICCV), 2005.

[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[7] M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge, 2009.
[8] L. Fei-fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28, 2006.
[9] R. Girshick. Fast R-CNN. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.
[10] G. Grifï¬n, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.

[12] F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 961–970, 2015.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8), Nov. 1997.
[14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning (ICML), pages 448–456, 2015.
[15] H. Jegou, F. Perronnin, M. Douze, J. Sanchez, P. Perez, and C. Schmid. Aggregating local image descriptors into compact codes. IEEE Trans. Pattern Anal. Mach. Intell., 34(9), Sept. 2012.
[16] Y. Jiang, J. Liu, A. Roshan Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar. THUMOS challenge: Action recognition with a large number of classes. http://crcv.ucf.edu/THUMOS14, 2014.
[17] Y.-G. Jiang, Z. Wu, J. Wang, X. Xue, and S.-F. Chang. Exploiting feature and class relationships in video categorization with regularized deep neural networks. arXiv preprint arXiv:1502.07209, 2015.
[18] M. I. Jordan. Hierarchical mixtures of experts and the em algorithm. Neural Computation, 6, 1994.
[19] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1725–1732, Columbus, Ohio, USA, 2014.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
[21] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. Hmdb: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV), 2011.
[22] I. Laptev and T. Lindeberg. Space-time interest points. In Proceedings of the International Conference on Computer Vision (ICCV), 2003.
[23] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[24] S. Ma, S. A. Bargal, J. Zhang, L. Sigal, and S. Sclaroff. Do less and achieve more: Training cnns for action recognition utilizing action images from the web. CoRR, abs/1512.07155, 2015.
[25] V. Mnih and G. Hinton. Learning to label aerial images from noisy data. In Proceedings of the 29th Annual International Conference on Machine Learning (ICML), June 2012.
[26] J. Y.-H. Ng, M. J. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4694–4702, 2015.
[27] F. Perronnin and C. Dance. Fisher kernels on visual vocabularies for image categorization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.

[28] A. Quattoni and A. Torralba. Recognizing indoor scenes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.

[29] S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. ArXiv e-prints, Dec. 2014.

[30] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.

[31] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In International Conference on Learning Representations (ICLR), 2014.

[32] J. Shotton, J. Winn, C. Rother, and A. Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), 2006.
[33] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. In CRCV-TR-12-01, 2012.
[34] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L. Li. The new data and new challenges in multimedia research. CoRR, abs/1503.01817, 2015.
[35] D. Tran, L. D. Bourdev, R. Fergus, L. Torresani, and M. Paluri. C3D: generic features for video analysis. CoRR, abs/1412.0767, 2014.
[36] H. Wang, M. M. Ullah, A. Kläser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features for action recognition. In Proc. BMVC, 2009.
[37] S. Wiesler, A. Richard, R. Schlüter, and H. Ney. Mean-normalized stochastic gradient for large-scale deep learning. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2014, Florence, Italy, May 4-9, 2014, pages 180–184. IEEE, 2014.
[38] J. Xiao, K. A. Ehinger, J. Hays, A. Torralba, A. Oliva, and J. Xiao. Sun database: Exploring a large collection of scene categories, 2013.
[39] Z. Xu, Y. Yang, and A. G. Hauptmann. A discriminative cnn video representation for event detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[40] H.-F. Yu, P. Jain, P. Kar, and I. Dhillon. Large-scale multi-label learning with missing labels. In Proceedings of The 31st International Conference on Machine Learning (ICML), pages 593–601, 2014.
[41] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013.
"id": "1502.07209"
} |
1609.07843 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 |
# Pointer Sentinel Mixture Models
Stephen Merity Caiming Xiong James Bradbury Richard Socher MetaMind - A Salesforce Company, Palo Alto, CA, USA
SMERITY@SALESFORCE.COM CXIONG@SALESFORCE.COM JAMES.BRADBURY@SALESFORCE.COM RSOCHER@SALESFORCE.COM
Abstract Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora we also introduce the freely available WikiText corpus.1
[Figure 1 diagram: given the context Fed Chair Janet Yellen ... raised rates. Ms. ???, the pointer component places probability p_ptr(Yellen) on words from the recent context (e.g. Bernanke, Rosenthal, Yellen) alongside a sentinel, while the RNN softmax gives p_vocab(Yellen) over the full vocabulary (aardvark, ..., zebra); they combine as p(Yellen) = g p_vocab(Yellen) + (1 − g) p_ptr(Yellen).]
Figure 1. Illustration of the pointer sentinel-RNN mixture model. g is the mixture gate which uses the sentinel to dictate how much probability mass to give to the vocabulary.
# 1. Introduction
A major difficulty in language modeling is learning when to predict specific words from the immediate context. For instance, imagine a new person is introduced and two paragraphs later the context would allow one to very accurately predict this person's name as the next word. For standard neural sequence models to predict this name, they would have to encode the name, store it for many time steps in their hidden state, and then decode it when appropriate. As the hidden state is limited in capacity and the optimization of such models suffers from the vanishing gradient problem, this is a lossy operation when performed over many timesteps. This is especially true for rare words.
Models with soft attention or memory components have been proposed to help deal with this challenge, aiming to allow for the retrieval and use of relevant previous hidden states, in effect increasing hidden state capacity and providing a path for gradients not tied to timesteps. Even with attention, the standard softmax classifier that is being used in these models often struggles to correctly predict rare or previously unknown words.
Pointer networks (Vinyals et al., 2015) provide one potential solution for rare and out of vocabulary (OoV) words, as a pointer network uses attention to select an element from the input as output. This allows it to produce previously unseen input tokens. While pointer networks improve performance on rare words and long-term dependencies, they are unable to select words that do not exist in the input.
We introduce a mixture model, illustrated in Fig. 1, that combines the advantages of standard softmax classifiers with those of a pointer component for effective and efficient language modeling. Rather than relying on the RNN hidden state to decide when to use the pointer, as in the recent work of Gülçehre et al. (2016), we allow the pointer component itself to decide when to use the softmax vocabulary through a sentinel. The model improves the state of the art perplexity on the Penn Treebank. Since this commonly used dataset is small and no other freely available alternative exists that allows for learning long range dependencies, we also introduce a new benchmark dataset for language modeling called WikiText.
1Available for download at the WikiText dataset site
[Figure 2 diagram: the RNN softmax distribution p_vocab(y_N | w_1, ..., w_{N−1}) and the pointer distribution p_ptr(y_N | w_1, ..., w_{N−1}), formed from a query against past hidden states and a sentinel, combine into the output distribution p(y_N | w_1, ..., w_{N−1}).]
Figure 2. Visualization of the pointer sentinel-RNN mixture model. The query, produced from applying an MLP to the last output of the RNN, is used by the pointer network to identify likely matching words from the past. The ⊙ nodes are inner products between the query and the RNN hidden states. If the pointer component is not confident, probability mass can be directed to the RNN by increasing the value of the mixture gate g via the sentinel, seen in grey. If g = 1 then only the RNN is used. If g = 0 then only the pointer is used.
# 2. The Pointer Sentinel for Language Modeling
Given a sequence of words w_1, ..., w_{N−1}, our task is to predict the next word w_N.
# 2.1. The softmax-RNN Component
# 2.2. The Pointer Network Component
In this section, we propose a modiï¬cation to pointer net- works for language modeling. To predict the next word in the sequence, a pointer network would select the member of the input sequence p(w1, . . . , wN 1) with the maximal attention score as the output.
Recurrent neural networks (RNNs) have seen widespread use for language modeling (Mikolov et al., 2010) due to their ability to, at least in theory, retain long term depen- dencies. RNNs employ the chain rule to factorize the joint probabilities over a sequence of tokens: p(wi,..., wn) = TI, p(w: ..,Wi-1). More precisely, at each time step 7, we compute the RNN hidden state h; according to the previous hidden state h;_; and the input x; such that hy = RNN(a;,hi-1). When all the N â 1 words have been processed by the RNN, the final state hy_ is fed into a softmax layer which computes the probability over a vocabulary of possible words: W1,-
The simplest way to compute an attention score for a spe- ciï¬c hidden state is an inner product with all the past hid- RH . However, if den states h, with each hidden state hi â we want to compute such a score for the most recent word (since this word may be repeated), we need to include the last hidden state itself in this inner product. Taking the in- ner product of a vector with itself results in the vectorâs magnitude squared, meaning the attention scores would be strongly biased towards the most recent word. Hence we project the current hidden state to a query vector q ï¬rst. To produce the query q we compute
pvocab(w) = softmax(U hN 1), (1)
â
H , H is the hidden size, and where pvocab à V the vocabulary size. RNNs can suffer from the vanishing gradient problem. The LSTM (Hochreiter & Schmidhuber, 1997) architecture has been proposed to deal with this by updating the hidden state according to a set of gates. Our work focuses on the LSTM but can be applied to any RNN architecture that ends in a vocabulary softmax.
q = tanh(W hN 1 + b), (2)
â RH . To generate the RH , and q where W pointer attention scores, we compute the match between the previous RNN output states hi and the query q by taking the inner product, followed by a softmax activation function to obtain a probability distribution:
zi = qT hi, a = softmax(z),
(3)
(4)
where z RL, a RL, and L is the total number of hidden
â
â
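To make Eqs. 2-4 concrete, the following is a minimal NumPy sketch of the pointer attention computation. It is our own illustration (the names hidden_states, W, b and the toy shapes are assumptions), not the authors' released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def pointer_attention(hidden_states, W, b):
    """hidden_states: (L, H) past RNN outputs, hidden_states[-1] = h_{N-1}."""
    # Eq. 2: project the last RNN output h_{N-1} to a query vector q.
    q = np.tanh(W @ hidden_states[-1] + b)
    # Eq. 3: z_i = q^T h_i for every hidden state in the window.
    z = hidden_states @ q
    # Eq. 4: normalize the scores into a probability distribution a.
    return softmax(z)

# Toy usage: a window of 5 past states with hidden size 4.
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 4))
a = pointer_attention(h, rng.normal(size=(4, 4)), np.zeros(4))
assert np.isclose(a.sum(), 1.0)
```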
The probability mass assigned to a given word is the sum of the probability mass given to all token positions where the given word appears:

p_ptr(w) = sum_{i ∈ I(w, x)} a_i,   (5)

where I(w, x) results in all positions of the word w in the input x and p_ptr ∈ R^V. This technique, referred to as pointer sum attention, has been used for question answering (Kadlec et al., 2016).

Given the length of the documents used in language modeling, it may not be feasible for the pointer network to evaluate an attention score for all the words back to the beginning of the dataset. Instead, we may elect to maintain only a window of the L most recent words for the pointer to match against. The length L of the window is a hyperparameter that can be tuned on a held out dataset or by empirically analyzing how frequently a word at position t appears within the last L words.

To illustrate the advantages of this approach, consider a long article featuring two sentences President Obama discussed the economy and President Obama then flew to Prague. If the query was Which President is the article about?, probability mass could be applied to Obama in either sentence. If the question was instead Who flew to Prague?, only the latter occurrence of Obama provides the proper context. The attention sum model ensures that, as long as the entire attention probability mass is distributed on the occurrences of Obama, the pointer network can achieve zero loss. This flexibility provides supervision without forcing the model to put mass on supervision signals that may be incorrect or lack proper context. This feature becomes an important component in the pointer sentinel mixture model.

# 2.3. The Pointer Sentinel Mixture Model

While pointer networks have proven to be effective, they cannot predict output words that are not present in the input, a common scenario in language modeling. We propose to resolve this by using a mixture model that combines a standard softmax with a pointer.

Our mixture model has two base distributions: the softmax vocabulary of the RNN output and the positional vocabulary of the pointer model. We refer to these as the RNN component and the pointer component respectively. To combine the two base distributions, we use a gating function g = p(z_i = k | x_i) where z_i is the latent variable stating which base distribution the data point belongs to. As we only have two base distributions, g can produce a scalar in the range [0, 1]. A value of 0 implies that only the pointer is used and 1 means only the softmax-RNN is used:

p(y_i | x_i) = g p_vocab(y_i | x_i) + (1 - g) p_ptr(y_i | x_i).   (6)

While the models could be entirely separate, we re-use many of the parameters for the softmax-RNN and pointer components. This sharing minimizes the total number of parameters in the model and capitalizes on the pointer network's supervision for the RNN component.

# 2.4. Details of the Gating Function

To compute the new pointer sentinel gate g, we modify the pointer component. In particular, we add an additional element to z, the vector of attention scores as defined in Eq. 3. This element is computed using an inner product between the query and the sentinel2 vector s ∈ R^H. This change can be summarized by changing Eq. 4 to:

a = softmax([z; q^T s]).   (7)

We define a ∈ R^{V+1} to be the attention distribution over both the words in the pointer window as well as the sentinel state. We interpret the last element of this vector to be the gate value: g = a[V + 1].

Any probability mass assigned to g is given to the standard softmax vocabulary of the RNN. The final updated, normalized pointer probability over the vocabulary in the window then becomes:

p_ptr(y_i | x_i) = (1 / (1 - g)) a[1 : V],   (8)

where we denoted [1 : V] to mean the first V elements of the vector. The final mixture model is the same as Eq. 6 but with the updated Eq. 8 for the pointer probability.

This setup encourages the model to have both components compete: use pointers whenever possible and back-off to the standard softmax otherwise. This competition, in particular, was crucial to obtain our best model. By integrating the gating function directly into the pointer computation, it is influenced by both the RNN hidden state and the pointer window's hidden states.

2A sentinel value is inserted at the end of a search space in order to ensure a search algorithm terminates if no matching item is found. Our sentinel value terminates the pointer search space and distributes the rest of the probability mass to the RNN vocabulary.
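The gating and mixing of Eqs. 6-8 can be sketched as follows. This is a rough NumPy rendering under our own naming (z, q, s, window_words are illustrative); note that because the joint attention of Eq. 7 already sums to one, the scattered pointer mass directly carries the (1 - g) p_ptr term of Eq. 6 without explicitly forming Eq. 8.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def pointer_sentinel_mixture(z, q, s, p_vocab, window_words):
    """z: (L,) pointer scores (Eq. 3); q, s: query and sentinel vectors;
    p_vocab: (V,) softmax-RNN distribution; window_words: (L,) vocab ids
    of the words in the pointer window."""
    # Eq. 7: append the sentinel score and normalize jointly.
    a = softmax(np.concatenate([z, [q @ s]]))
    g = a[-1]                        # the last element is the gate g
    # Pointer sum attention (Eq. 5): scatter mass onto the vocabulary.
    p_ptr_mass = np.zeros_like(p_vocab)
    np.add.at(p_ptr_mass, window_words, a[:-1])
    # Eqs. 6 and 8 combined: g * p_vocab plus the raw pointer mass.
    return g * p_vocab + p_ptr_mass
```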
# 2.5. Motivation for the Sentinel as Gating Function

To make the best decision possible regarding which component to use, the gating function must have as much context as possible. As we increase both the number of timesteps and the window of words for the pointer component to consider, the RNN hidden state by itself isn't guaranteed to accurately recall the identity or order of words it has recently seen (Adi et al., 2016). This is an obvious limitation of encoding a variable length sequence into a fixed dimensionality vector.

In our task, where we may want a pointer window where the length L is in the hundreds, accurately modeling all of this information within the RNN hidden state is impractical. The position of specific words is also a vital feature as relevant words eventually fall out of the pointer component's window. To correctly model this would require the RNN hidden state to store both the identity and position of each word in the pointer window. This is far beyond what the fixed dimensionality hidden state of an RNN is able to accurately capture.

For this reason, we integrate the gating function directly into the pointer network by use of the sentinel. The decision to back-off to the softmax vocabulary is then informed by both the query q, generated using the RNN hidden state h_{N-1}, and from the contents of the hidden states in the pointer window itself. This allows the model to accurately query what hidden states are contained in the pointer window and avoid having to maintain state for when a word may have fallen out of the pointer window.

# 2.6. Pointer Sentinel Loss Function

Following the pointer sum attention network, the aim is to place probability mass from the attention mechanism on the correct output y_hat_i if it exists in the input. In the case of our mixture model the pointer loss instead becomes:

-log(g + sum_{i ∈ I(y, x)} a_i),   (9)

where I(y, x) results in all positions of the correct output y in the input x. The gate g may be assigned all probability mass if, for instance, the correct output y_hat_i exists only in the softmax-RNN vocabulary. Furthermore, there is no penalty if the model places the entire probability mass on any of the instances of the correct word in the input window. If the pointer component places the entirety of the probability mass on the gate g, the pointer network incurs no penalty and the loss is entirely determined by the loss of the softmax-RNN component.

# 2.7. Parameters and Computation Time

The pointer sentinel-LSTM mixture model results in a relatively minor increase in parameters and computation time, especially when compared to the size of the models required to achieve similar performance using standard LSTM models.

The only two additional parameters required by the model are those required for computing q, specifically W ∈ R^{H×H} and b ∈ R^H, and the sentinel vector embedding, s ∈ R^H. This is independent of the depth of the RNN as the pointer component only interacts with the output of the final RNN layer. The additional H^2 + 2H parameters are minor compared to a single LSTM layer's 8H^2 + 4H parameters. Most state of the art models also require multiple LSTM layers.

In terms of additional computation, a pointer sentinel-LSTM of window size L only requires computing the query q (a linear layer with tanh activation), a total of L parallelizable inner product calculations, and the attention scores for the L resulting scalars via the softmax function.

The target y_hat_i is a one hot encoding of the correct output. During training, as y_hat_i is one hot, only a single mixed probability p(y_ij) must be computed for calculating the loss. This can result in a far more efficient GPU implementation. At prediction time, when we want all values for p(y_i | x_i), a maximum of L word probabilities must be mixed, as there is a maximum of L unique words in the pointer window of length L. This mixing can occur on the CPU where random access indexing is more efficient than the GPU.
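A rough sketch of how the quantities above might be computed, based on our reading of Eq. 9 and the single-mixed-probability observation in Sec. 2.7; how the two values are combined during training is not fully specified here, so this illustration simply returns both.

```python
import numpy as np

def pointer_sentinel_losses(a, window_words, target, p_vocab):
    """a: (L+1,) joint attention of Eq. 7 with gate g = a[-1];
    window_words: (L,) vocab ids in the window; target: id of the
    correct next word; p_vocab: (V,) softmax-RNN distribution."""
    g = a[-1]
    ptr_mass_on_target = a[:-1][window_words == target].sum()
    # Eq. 9: no penalty when all mass sits on the gate or on any
    # occurrence of the correct word in the window.
    ptr_loss = -np.log(g + ptr_mass_on_target)
    # Sec. 2.7: only one mixed probability is needed per example.
    mix_loss = -np.log(g * p_vocab[target] + ptr_mass_on_target)
    return ptr_loss, mix_loss
```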
# 3. Related Work
Considerable research has been dedicated to the task of language modeling, from traditional machine learning techniques such as n-grams to neural sequence models in deep learning.

Mixture models composed of various knowledge sources have been proposed in the past for language modeling. Rosenfeld (1996) uses a maximum entropy model to combine a variety of information sources to improve language modeling on news text and speech. These information sources include complex overlapping n-gram distributions and n-gram caches that aim to capture rare words. The n-gram cache could be considered similar in some ways to our model's pointer network, where rare or contextually relevant words are stored for later use.

Beyond n-grams, neural sequence models such as recurrent neural networks have been shown to achieve state of the art results (Mikolov et al., 2010). A variety of RNN regularization methods have been explored, including a number of dropout variations (Zaremba et al., 2014; Gal, 2015) which prevent overfitting of complex LSTM language models. Other work has improved language modeling performance by modifying the RNN architecture to better handle increased recurrence depth (Zilly et al., 2016).
In order to increase capacity and minimize the impact of vanishing gradients, some language and translation models have also added a soft attention or memory component (Bahdanau et al., 2015; Sukhbaatar et al., 2015; Cheng et al., 2016; Kumar et al., 2016; Xiong et al., 2016; Ahn et al., 2016). These mechanisms allow for the retrieval and use of relevant previous hidden states. Soft attention mechanisms need to first encode the relevant word into a state vector and then decode it again, even if the output word is identical to the input word used to compute that hidden state or memory. A drawback to soft attention is that if, for instance, January and March are both equally attended candidates, the attention mechanism may blend the two vectors, resulting in a context vector closest to February (Kadlec et al., 2016). Even with attention, the standard softmax classifier being used in these models often struggles to correctly predict rare or previously unknown words.

             Penn Treebank              WikiText-2                    WikiText-103
             Train    Valid   Test      Train      Valid    Test      Train        Valid    Test
Articles     -        -       -         600        60       60        28,475       60       60
Tokens       929,590  73,761  82,431    2,088,628  217,646  245,569   103,227,021  217,646  245,569
Vocab size   10,000                     33,278                        267,735
OoV rate     4.8%                       2.6%                          0.4%

Table 1. Statistics of the Penn Treebank, WikiText-2, and WikiText-103. The out of vocabulary (OoV) rate notes what percentage of tokens have been replaced by an (unk) token. The token count includes newlines which add to the structure of the WikiText datasets.
Attention-based pointer mechanisms were introduced in Vinyals et al. (2015) where the pointer network is able to select elements from the input as output. In the above example, only January or March would be available as options, as February does not appear in the input. The use of pointer networks has been shown to help with geometric problems (Vinyals et al., 2015), code generation (Ling et al., 2016), summarization (Gu et al., 2016; Gülçehre et al., 2016), and question answering (Kadlec et al., 2016). While pointer networks improve performance on rare words and long-term dependencies they are unable to select words that do not exist in the input.

Gülçehre et al. (2016) introduce a pointer softmax model that can generate output from either the vocabulary softmax of an RNN or the location softmax of the pointer network. Not only does this allow for producing OoV words which are not in the input, the pointer softmax model is able to better deal with rare and unknown words than a model only featuring an RNN softmax. Rather than constructing a mixture model as in our work, they use a switching network to decide which component to use. For neural machine translation, the switching network is conditioned on the representation of the context of the source text and the hidden state of the decoder. The pointer network is not used as a source of information for the switching network as in our model. The pointer and RNN softmax are scaled according to the switching network and the word or location with the highest final attention score is selected for output. Although this approach uses both a pointer and RNN component, it is not a mixture model and does not combine the probabilities for a word if it occurs in both the pointer location softmax and the RNN vocabulary softmax. In our model the word probability is a mix of both the RNN and pointer components, allowing for better predictions when the context may be ambiguous.

Extending this concept further, the latent predictor network (Ling et al., 2016) generates an output sequence conditioned on an arbitrary number of base models where each base model may have differing granularity. In their task of code generation, the output could be produced one character at a time using a standard softmax or instead copy entire words from referenced text fields using a pointer network. As opposed to Gülçehre et al. (2016), all states which produce the same output are merged by summing their probabilities. Their model however requires a more complex training process involving the forward-backward algorithm for Semi-Markov models to prevent an exponential explosion in potential paths.

# 4. WikiText - A Benchmark for Language Modeling

We first describe the most commonly used language modeling dataset and its pre-processing in order to then motivate the need for a new benchmark dataset.
# 4.1. Penn Treebank
In order to compare our model to the many recent neural language models, we conduct word-level prediction experiments on the Penn Treebank (PTB) dataset (Marcus et al., 1993), pre-processed by Mikolov et al. (2010). The dataset consists of 929k training words, 73k validation words, and 82k test words. As part of the pre-processing performed by Mikolov et al. (2010), words were lower-cased, numbers were replaced with N, newlines were replaced with (eos), and all other punctuation was removed. The vocabulary is the most frequent 10k words with the rest of the tokens being replaced by an (unk) token. For full statistics, refer to Table 1.
# 4.2. Reasons for a New Dataset

While the processed version of the PTB above has been frequently used for language modeling, it has many limitations. The tokens in PTB are all lower case, stripped of any punctuation, and limited to a vocabulary of only 10k words. These limitations mean that the PTB is unrealistic for real language use, especially when far larger vocabularies with many rare words are involved. Fig. 3 illustrates this using a Zipfian plot over the training partition of the PTB. The curve stops abruptly when hitting the 10k vocabulary. Given that accurately predicting rare words, such as named entities, is an important task for many applications, the lack of a long tail for the vocabulary is problematic.

Algorithm 1. Calculate truncated BPTT, where every k1 timesteps we run back propagation for k2 timesteps.

    for t = 1 to t = T do
        Run the RNN for one step, computing h_t and z_t
        if t divides k1 then
            Run BPTT from t down to t - k2
        end if
    end for
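A minimal runnable rendering of Algorithm 1 above (used for training in Section 5.1), with rnn_step and bptt as stand-in callables for whatever framework is used; their names are ours, not the authors'.

```python
def truncated_bptt(rnn_step, bptt, T, k1, k2, state):
    """Algorithm 1. rnn_step(t, state) -> state runs the RNN one step
    (computing h_t and z_t); bptt(t_from, t_to) backpropagates from
    t_from down to t_to. Both are framework placeholders."""
    for t in range(1, T + 1):
        state = rnn_step(t, state)
        if t % k1 == 0:                  # every k1 timesteps...
            bptt(t, max(t - k2, 0))      # ...run BPTT for k2 timesteps
    return state
```

Section 5.1 selects k1 = 1 and k2 = L, so every timestep receives backpropagation over the full window.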
Other larger scale language modeling datasets exist. Unfortunately, they either have restrictive licensing which prevents widespread use or have randomized sentence ordering (Chelba et al., 2013) which is unrealistic for most language use and prevents the effective learning and evaluation of longer term dependencies. Hence, we constructed a language modeling dataset using text extracted from Wikipedia and will make this available to the community.

# 4.3. Construction and Pre-processing
We selected articles only fitting the Good or Featured article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting the raw text from Wikipedia mark-up is nontrivial due to the large number of macros in use. These macros are used extensively and include metric conversion, abbreviations, language notation, and date handling.

Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and LaTeX code were replaced with placeholder tokens. Normalization and tokenization were performed using the Moses tokenizer (Koehn et al., 2007), slightly augmented to further split numbers (8,600 → 8 @,@ 600) and with some additional minor fixes. Following Chelba et al. (2013) a vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the (unk) token, also a part of the vocabulary.

To ensure the dataset is immediately usable by existing language modeling tools, we have provided the dataset in the same format and following the same conventions as that of the PTB dataset above.
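A minimal sketch of the vocabulary construction just described; the token spelling "<unk>" and the helper name build_vocab are illustrative assumptions, not the released pre-processing code.

```python
from collections import Counter

def build_vocab(tokens, min_count=3, unk="<unk>"):
    """Discard words with a count below min_count (3, following
    Chelba et al., 2013); map everything outside the vocabulary to
    the unk token, which is itself part of the vocabulary."""
    counts = Counter(tokens)
    vocab = {w for w, c in counts.items() if c >= min_count}
    vocab.add(unk)
    mapped = [w if w in vocab else unk for w in tokens]
    return mapped, vocab
```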
# 4.4. Statistics
The full WikiText dataset is over 103 million words in size, a hundred times larger than the PTB. It is also a tenth the size of the One Billion Word Benchmark (Chelba et al., 2013), one of the largest publicly available language modeling benchmarks, whilst consisting of articles that allow for the capture and usage of longer term dependencies as might be found in many real world tasks.

The dataset is available in two different sizes: WikiText-2 and WikiText-103. Both feature punctuation, original casing, a larger vocabulary, and numbers. WikiText-2 is two times the size of the Penn Treebank dataset. WikiText-103 features all extracted articles. Both datasets use the same articles for validation and testing with the only difference being the vocabularies. For full statistics, refer to Table 1.
# 5. Experiments
# 5.1. Training Details
As the pointer sentinel mixture model uses the outputs of the RNN from up to L timesteps back, this presents a challenge for training. If we do not regenerate the stale historical outputs of the RNN when we update the gradients, backpropagation through these stale outputs may result in incorrect gradient updates. If we do regenerate all stale outputs of the RNN, the training process is far slower. As we can make no theoretical guarantees on the impact of stale outputs on gradient updates, we opt to regenerate the window of RNN outputs used by the pointer component after each gradient update.

We also use truncated backpropagation through time (BPTT) in a different manner to many other RNN language models. Truncated BPTT allows for practical time-efficient training of RNN models but has fundamental trade-offs that are rarely discussed.
For running truncated BPTT, BPTT is run for k2 timesteps every k1 timesteps, as seen in Algorithm 1. For many RNN
Figure 3. Zipfian plot over the training partition in Penn Treebank and WikiText-2 datasets. Notice the severe drop on the Penn Treebank when the vocabulary hits 10^4. Two thirds of the vocabulary in WikiText-2 are past the vocabulary cut-off of the Penn Treebank.
language modeling training schemes, k1 = k2, meaning that every k timesteps truncated BPTT is performed for the k previous timesteps. This results in only a single RNN output receiving backpropagation for k timesteps, with the other extreme being that the first token receives backpropagation for 0 timesteps. This issue is compounded by the fact that most language modeling code splits the data temporally such that the boundaries are always the same. As such, most words in the training data will never experience a full backpropagation for k timesteps.

In our task, the pointer component always looks L timesteps into the past if L past timesteps are available. We select k1 = 1 and k2 = L such that for each timestep we perform backpropagation for L timesteps and advance one timestep at a time. Only the loss for the final predicted word is used for backpropagation through the window.

# 5.2. Model Details

Our experimental setup reflects that of Zaremba et al. (2014) and Gal (2015). We increased the number of timesteps used during training from 35 to 100, matching the length of the window L. Batch size was increased to 32 from 20. We also halve the learning rate when validation perplexity is worse than the previous iteration, stopping training when validation perplexity fails to improve for three epochs or when 64 epochs are reached. The gradients are rescaled if their global norm exceeds 1 (Pascanu et al., 2013b).3 We evaluate the medium model configuration which features a hidden size of H = 650 and a two layer LSTM. We compare against the large model configuration which features a hidden size of 1500 and a two layer LSTM.

We produce results for two model types, an LSTM model that uses dropout regularization and the pointer sentinel-LSTM model. The variants of dropout used were zoneout (Krueger et al., 2016) and variational inference based dropout (Gal, 2015). Zoneout, which stochastically forces some recurrent units to maintain their previous values, was used for the recurrent connections within the LSTM. Variational inference based dropout, where the dropout mask for a layer is locked across timesteps, was used on the input to each RNN layer and also on the output of the final RNN layer. We used a value of 0.5 for both dropout connections.

# 5.3. Comparison over Penn Treebank

Table 2 compares the pointer sentinel-LSTM to a variety of other models on the Penn Treebank dataset. The pointer sentinel-LSTM achieves the lowest perplexity, followed by the recent Recurrent Highway Networks (Zilly et al., 2016). The medium pointer sentinel-LSTM model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive Monte Carlo (MC) dropout averaging. Monte Carlo dropout averaging is a general improvement for any sequence model that uses dropout but comes at a greatly increased test time cost. In Gal (2015) it requires rerunning the test model with 1000 different dropout masks. The pointer sentinel-LSTM is able to achieve these results with far fewer parameters than other models with comparable performance, specifically with less than a third the parameters used in the large variational LSTM models.

3The highly aggressive clipping is likely due to the increased BPTT length. Even with such clipping early batches may experience excessively high perplexity, though this settles rapidly.

We also test a variational LSTM that uses zoneout, which
serves as the RNN component of our pointer sentinel-LSTM mixture. This variational LSTM model performs BPTT for the same length L as the pointer sentinel-LSTM, where L = 100 timesteps. The results for this model ablation are worse than that of Gal (2015)'s variational LSTM without Monte Carlo dropout averaging.
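A minimal sketch of zoneout as used above (our own rendering of Krueger et al. (2016), not their code); using the expected update at evaluation time, mirroring dropout's rescaling, is an assumption on our part.

```python
import numpy as np

def zoneout(h_prev, h_new, rate=0.5, training=True):
    """Zoneout: stochastically force some recurrent units to keep
    their previous value instead of taking the new update."""
    if training:
        keep_prev = np.random.rand(*h_new.shape) < rate
        return np.where(keep_prev, h_prev, h_new)
    # At evaluation, use the expected value of the stochastic update.
    return rate * h_prev + (1.0 - rate) * h_new
```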
# 5.4. Comparison over WikiText-2
As WikiText-2 is being introduced in this work, there are no existing baselines. We provide two baselines to compare the pointer sentinel-LSTM against: our variational LSTM using zoneout and the medium variational LSTM used in Gal (2015).4 Attempts to run the Gal (2015) large model variant, a two layer LSTM with hidden size 1500, resulted in out of memory errors on a 12GB K80 GPU, likely due to the increased vocabulary size. We chose the best hyperparameters from PTB experiments for all models.
Figure 4. Mean difference in log perplexity on PTB when using the pointer sentinel-LSTM compared to the LSTM model. Words were sorted by frequency and split into equal sized buckets.
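The bucketed comparison behind Figure 4 can be sketched as follows (a rough NumPy illustration; the inputs are per-token log probabilities from each model plus word frequency ranks, and all names are ours, not released analysis code):

```python
import numpy as np

def mean_gain_by_bucket(freq_rank, logp_lstm, logp_psmm, n_buckets=10):
    """freq_rank: (N,) frequency rank of each target token (1 = most
    frequent); logp_*: (N,) per-token log probabilities per model."""
    order = np.argsort(freq_rank)        # most frequent tokens first
    gain = logp_psmm - logp_lstm         # positive = pointer model better
    buckets = np.array_split(gain[order], n_buckets)
    return [float(b.mean()) for b in buckets]
```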
Table 3 shows a similar gain made by the pointer sentinel- LSTM over the variational LSTM models. The variational LSTM from Gal (2015) again beats out the variational LSTM used as a base for our experiments.
# 6. Analysis
# 6.1. Impact on Rare Words
A hypothesis as to why the pointer sentinel-LSTM can outperform an LSTM is that the pointer component allows the model to effectively reproduce rare words. An RNN may be able to better use the hidden state capacity by deferring to the pointer component. The pointer component may also allow for a sharper selection of a single word than may be possible using only the softmax.

Figure 4 shows the improvement of perplexity when comparing the LSTM to the pointer sentinel-LSTM with words split across buckets according to frequency. It shows that the pointer sentinel-LSTM has stronger improvements as words become rarer. Even on the Penn Treebank, where there is a relative absence of rare words due to only selecting the most frequent 10k words, we can see the pointer sentinel-LSTM mixture model provides a direct benefit.

While the improvements are largest on rare words, we can see that the pointer sentinel-LSTM is still helpful on relatively frequent words. This may be the pointer component directly selecting the word or through the pointer supervision signal improving the RNN by allowing gradients to flow directly to other occurrences of the word in that window.

# 6.2. Qualitative Analysis of Pointer Usage

In a qualitative analysis, we visualized the gate use and pointer attention for a variety of examples in the validation set, focusing on predictions where the gate primarily used the pointer component. These visualizations are available in the supplementary material.
4https://github.com/yaringal/BayesianRNN
As expected, the pointer component is heavily used for rare names such as Seidman (23 times in training), Iverson (7 times in training), and Rosenthal (3 times in training).
The pointer component was also heavily used when it came to other named entity names such as companies like Honeywell (8 times in training) and Integrated (41 times in training, though due to lowercasing of words this includes integrated circuits, fully integrated, and other generic usage).

Surprisingly, the pointer component was also used for many frequent tokens. For selecting the unit of measurement (tons, kilograms, ...) or the short scale of numbers (thousands, millions, billions, ...), the pointer would refer to previous recent usage. This is to be expected, especially when phrases are of the form increased from N tons to N tons. The model can even be found relying on a mixture of the softmax and the pointer for predicting certain frequent verbs such as said.
Finally, the pointer component can be seen pointing to words at the very end of the 100 word window (position 97), a far longer horizon than the 35 steps that most language models truncate their backpropagation training to. This illustrates why the gating function must be integrated into the pointer component. If the gating function could only use the RNN hidden state, it would need to be wary of words that were near the tail of the pointer, especially if it was not able to accurately track exactly how long it was since seeing a word. By integrating the gating function into the pointer component, we avoid the RNN hidden state having to maintain this intensive bookkeeping.

Model                                                    | Parameters | Validation  | Test
Mikolov & Zweig (2012) - KN-5                            | 2M‡        | -           | 141.2
Mikolov & Zweig (2012) - KN5 + cache                     | 2M‡        | -           | 125.7
Mikolov & Zweig (2012) - RNN                             | 6M‡        | -           | 124.7
Mikolov & Zweig (2012) - RNN-LDA                         | 7M‡        | -           | 113.7
Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache          | 9M‡        | -           | 92.0
Pascanu et al. (2013a) - Deep RNN                        | 6M         | -           | 107.5
Cheng et al. (2014) - Sum-Prod Net                       | 5M‡        | -           | 100.0
Zaremba et al. (2014) - LSTM (medium)                    | 20M        | 86.2        | 82.7
Zaremba et al. (2014) - LSTM (large)                     | 66M        | 82.2        | 78.4
Gal (2015) - Variational LSTM (medium, untied)           | 20M        | 81.9 ± 0.2  | 79.7 ± 0.1
Gal (2015) - Variational LSTM (medium, untied, MC)       | 20M        | -           | 78.6 ± 0.1
Gal (2015) - Variational LSTM (large, untied)            | 66M        | 77.9 ± 0.3  | 75.2 ± 0.2
Gal (2015) - Variational LSTM (large, untied, MC)        | 66M        | -           | 73.4 ± 0.0
Kim et al. (2016) - CharCNN                              | 19M        | -           | 78.9
Zilly et al. (2016) - Variational RHN                    | 32M        | 72.8        | 71.3
Zoneout + Variational LSTM (medium)                      | 20M        | 84.4        | 80.6
Pointer Sentinel-LSTM (medium)                           | 21M        | 72.4        | 70.9

Table 2. Single model perplexity on validation and test sets for the Penn Treebank language modeling task. For our models and the models of Zaremba et al. (2014) and Gal (2015), medium and large refer to a 650 and 1500 units two layer LSTM respectively. The medium pointer sentinel-LSTM model achieves lower perplexity than the large LSTM model of Gal (2015) while using a third of the parameters and without using the computationally expensive Monte Carlo (MC) dropout averaging at test time. Parameter numbers with ‡ are estimates based upon our understanding of the model and with reference to Kim et al. (2016).

Model                                                    | Parameters | Validation | Test
Variational LSTM implementation from Gal (2015)          | 20M        | 101.7      | 96.3
Zoneout + Variational LSTM                               | 20M        | 108.7      | 100.9
Pointer Sentinel-LSTM                                    | 21M        | 84.8       | 80.8

Table 3. Single model perplexity on validation and test sets for the WikiText-2 language modeling task. All compared models use a two layer LSTM with a hidden size of 650 and the same hyperparameters as the best performing Penn Treebank model.
# 7. Conclusion
We introduced the pointer sentinel mixture model and the WikiText language modeling dataset. This model achieves state of the art results in language modeling over the Penn Treebank while using few additional parameters and little additional computational complexity at prediction time.

We have also motivated the need to move from Penn Treebank to a new language modeling dataset for long range dependencies, providing WikiText-2 and WikiText-103 as potential options. We hope this new dataset can serve as a platform to improve handling of rare words and the usage of long term dependencies in language modeling.

# References

Adi, Yossi, Kermany, Einat, Belinkov, Yonatan, Lavi, Ofer, and Goldberg, Yoav. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. arXiv preprint arXiv:1608.04207, 2016.
Ahn, Sungjin, Choi, Heeyoul, Pärnamaa, Tanel, and Bengio, Yoshua. A Neural Knowledge Language Model. CoRR, abs/1608.00318, 2016.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR, 2015.
Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, and Robinson, Tony. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling. arXiv preprint arXiv:1312.3005, 2013.
Cheng, Jianpeng, Dong, Li, and Lapata, Mirella. Long Short-Term Memory-Networks for Machine Reading. CoRR, abs/1601.06733, 2016.
Cheng, Wei-Chen, Kok, Stanley, Pham, Hoai Vu, Chieu, Hai Leong, and Chai, Kian Ming Adam. Language Modeling with Sum-Product Networks. In INTERSPEECH, 2014.
Marcus, Mitchell P., Santorini, Beatrice, and Marcinkiewicz, Mary Ann. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19:313-330, 1993.
Mikolov, Tomas and Zweig, Geoffrey. Context dependent recurrent neural network language model. In SLT, 2012.
Gal, Yarin. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. arXiv preprint arXiv:1512.05287, 2015.
Mikolov, Tomas, Karafiát, Martin, Burget, Lukáš, Černocký, Jan, and Khudanpur, Sanjeev. Recurrent neural network based language model. In INTERSPEECH, 2010.
Gu, Jiatao, Lu, Zhengdong, Li, Hang, and Li, Victor O. K. Incorporating Copying Mechanism in Sequence- to-Sequence Learning. CoRR, abs/1603.06393, 2016.
Pascanu, Razvan, Gülçehre, Çağlar, Cho, Kyunghyun, and Bengio, Yoshua. How to Construct Deep Recurrent Neural Networks. CoRR, abs/1312.6026, 2013a.

Gülçehre, Çağlar, Ahn, Sungjin, Nallapati, Ramesh, Zhou, Bowen, and Bengio, Yoshua. Pointing the Unknown Words. arXiv preprint arXiv:1603.08148, 2016.

Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In ICML, 2013b.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, Nov 1997. ISSN 0899-7667.
Rosenfeld, Roni. A Maximum Entropy Approach to Adap- tive Statistical Language Modeling. 1996.
Kadlec, Rudolf, Schmid, Martin, Bajgar, Ondrej, and Kleindienst, Jan. Text Understanding with the Attention Sum Reader Network. arXiv preprint arXiv:1603.01547, 2016.
Kim, Yoon, Jernite, Yacine, Sontag, David, and Rush, Alexander M. Character-aware neural language models. CoRR, abs/1508.06615, 2016.
Koehn, Philipp, Hoang, Hieu, Birch, Alexandra, Callison-Burch, Chris, Federico, Marcello, Bertoldi, Nicola, Cowan, Brooke, Shen, Wade, Moran, Christine, Zens, Richard, Dyer, Chris, Bojar, Ondřej, Constantin, Alexandra, and Herbst, Evan. Moses: Open Source Toolkit for Statistical Machine Translation. In ACL, 2007.
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-To-End Memory Networks. In NIPS, 2015.
Vinyals, Oriol, Fortunato, Meire, and Jaitly, Navdeep. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692-2700, 2015.
Xiong, Caiming, Merity, Stephen, and Socher, Richard. Dynamic Memory Networks for Visual and Textual Question Answering. In ICML, 2016.
Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
Krueger, David, Maharaj, Tegan, Kramár, János, Pezeshki, Mohammad, Ballas, Nicolas, Ke, Nan Rosemary, Goyal, Anirudh, Bengio, Yoshua, Larochelle, Hugo, Courville, Aaron, et al. Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations. arXiv preprint arXiv:1606.01305, 2016.

Zilly, Julian Georg, Srivastava, Rupesh Kumar, Koutník, Jan, and Schmidhuber, Jürgen. Recurrent Highway Networks. arXiv preprint arXiv:1607.03474, 2016.
Kumar, Ankit, Irsoy, Ozan, Ondruska, Peter, Iyyer, Mohit, Bradbury, James, Gulrajani, Ishaan, Zhong, Victor, Paulus, Romain, and Socher, Richard. Ask me anything: Dynamic memory networks for natural language processing. In ICML, 2016.

Ling, Wang, Grefenstette, Edward, Hermann, Karl Moritz, Kočiský, Tomáš, Senior, Andrew, Wang, Fumin, and Blunsom, Phil. Latent Predictor Networks for Code Generation. CoRR, abs/1603.06744, 2016.
# Supplementary material
# Pointer usage on the Penn Treebank
For a qualitative analysis, we visualize how the pointer component is used within the pointer sentinel mixture model. The gate refers to the result of the gating function, with 1 indicating the RNN component is exclusively used whilst 0 indicates the pointer component is exclusively used. We begin with predictions that are using the RNN component primarily and move to ones that use the pointer component primarily.
Figure 5. In predicting the fall season has been a good one especially for those retailers, the pointer component suggests many words from the historical window that would fit - retailers, investments, chains, and institutions. The gate is still primarily weighted towards the RNN component however.

Figure 6. In predicting the national cancer institute also projected that overall u.s. mortality, the pointer component is focused on mortality and rates, both of which would fit. The gate is still primarily weighted towards the RNN component.

Figure 7. In predicting people do n't seem to be unhappy with it he said, the pointer component correctly selects said and is almost equally weighted with the RNN component. This is surprising given how frequent the word said is used within the Penn Treebank.
Predicting billion using 100 words of history (gate = 0.44)
Figure 8. For predicting the federal government has had to pump in $ N billion, the pointer component focuses on the recent usage of billion with highly similar context. The pointer component is also relied upon more heavily than the RNN component - surprising given the frequency of billion within the Penn Treebank and that the usage was quite recent.
Predicting noriega using 100 words of history (gate = 0.12)
Figure 9. For predicting (unk) 's ghost sometimes runs through the e ring dressed like gen. noriega, the pointer component reaches 97 timesteps back to retrieve gen. douglas. Unfortunately this prediction is incorrect but without additional context a human would have guessed the same word. This additionally illustrates why the gating function must be integrated into the pointer component. The named entity gen. douglas would have fallen out of the window in only four more timesteps, a fact that the RNN hidden state would not be able to accurately retain for almost 100 timesteps.
Predicting iverson using 100 words of history (gate = 0.03)
Figure 10. For predicting mr. iverson, the pointer component has learned the ability to point to the last name of the most recent named entity. The named entity also occurs 45 timesteps ago, which is longer than the 35 steps that most language models truncate their backpropagation to.
Predicting rosenthal using 100 words of history (gate = 0.00)
Figure 11. For predicting mr. rosenthal, the pointer is almost exclusively used and reaches back 65 timesteps to identify bruce rosenthal as the person speaking, correctly only selecting the last name.
Predicting integrated using 100 words of history (gate = 0.00)
Figure 12. For predicting in composite trading on the new york stock exchange yesterday integrated, the company Integrated and the (unk) token are primarily attended to by the pointer component, with nearly the full prediction being determined by the pointer component.
# Zipfian plot over WikiText-103
Figure 13. Zipfian plot over the training partition in the WikiText-103 dataset. With the dataset containing over 100 million tokens, there is reasonable coverage of the long tail of the vocabulary.
"id": "1607.03474"
} |
1609.08144 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | 6 1 0 2
t c O 8 ] L C . s c [
2 v 4 4 1 8 0 . 9 0 6 1 : v i X r a
# Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi
yonghui,schuster,zhifengc,qvl,mnorouzi@google.com

Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean
# Abstract
Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference (sometimes prohibitively so in the case of very large data sets and large models). Several authors have also charged that NMT systems lack robustness, particularly when input sentences contain rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using residual connections as well as attention connections from the decoder network to the encoder. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units ("wordpieces") for both input and output. This method provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. To directly optimize the translation BLEU scores, we consider refining the models by using reinforcement learning, but we found that the improvement in the BLEU scores did not reflect in the human evaluation. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system.
# 1 Introduction
Neural Machine Translation (NMT) [41, 2] has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashion, the mapping from input text to associated output text. Its architecture typically consists of two recurrent neural networks (RNNs), one to consume the input text sequence and one to generate translated output text. NMT is often accompanied by an attention mechanism [2] which helps it cope effectively with long input sequences.
An advantage of Neural Machine Translation is that it sidesteps many brittle design choices in traditional phrase-based machine translation [26]. In practice, however, NMT systems used to be worse in accuracy than phrase-based translation systems, especially when training on very large-scale datasets as used for the very best publicly available translation systems. Three inherent weaknesses of Neural Machine Translation are
responsible for this gap: its slower training and inference speed, ineffectiveness in dealing with rare words, and sometimes failure to translate all words in the source sentence. Firstly, it generally takes a considerable amount of time and computational resources to train an NMT system on a large-scale translation dataset, thus slowing the rate of experimental turnaround time and innovation. For inference they are generally much slower than phrase-based systems due to the large number of parameters used. Secondly, NMT lacks robustness in translating rare words. Though this can be addressed in principle by training a "copy model" to mimic a traditional alignment model [31], or by using the attention mechanism to copy rare words [37], these approaches are both unreliable at scale, since the quality of the alignments varies across languages, and the latent alignments produced by the attention mechanism are unstable when the network is deep. Also, simple copying may not always be the best strategy to cope with rare words, for example when a transliteration is more appropriate. Finally, NMT systems sometimes produce output sentences that do not translate all parts of the input sentence; in other words, they fail to completely "cover" the input, which can result in surprising translations.

This work presents the design and implementation of GNMT, a production NMT system at Google, that aims to provide solutions to the above problems. In our implementation, the recurrent networks are Long Short-Term Memory (LSTM) RNNs [23, 17]. Our LSTM RNNs have 8 layers, with residual connections between layers to encourage gradient flow [21]. For parallelism, we connect the attention from the bottom layer of the decoder network to the top layer of the encoder network. To improve inference time, we employ low-precision arithmetic for inference, which is further accelerated by special hardware (Google's Tensor Processing Unit, or TPU). To effectively deal with rare words, we use sub-word units (also known as "wordpieces") [35] for inputs and outputs in our system. Using wordpieces gives a good balance between the flexibility of single characters and the efficiency of full words for decoding, and also sidesteps the need for special treatment of unknown words. Our beam search technique includes a length normalization procedure to deal efficiently with the problem of comparing hypotheses of different lengths during decoding, and a coverage penalty to encourage the model to translate all of the provided input.

Our implementation is robust, and performs well on a range of datasets across many pairs of languages without the need for language-specific adjustments. Using the same implementation, we are able to achieve results comparable to or better than previous state-of-the-art systems on standard benchmarks, while delivering great improvements over Google's phrase-based production translation system. Specifically, on WMT'14 English-to-French, our single model scores 38.95 BLEU, an improvement of 7.5 BLEU from a single model without an external alignment model reported in [31] and an improvement of 1.2 BLEU from a single model without an external alignment model reported in [45]. Our single model is also comparable to a single model in [45], while not making use of any alignment model as being used in [45]. Likewise on WMT'14 English-to-German, our single model scores 24.17 BLEU, which is 3.4 BLEU better than a previous competitive baseline [6]. On production data, our implementation is even more effective. Human evaluations show that GNMT has reduced translation errors by 60% compared to our previous phrase-based system on many pairs of languages: English ↔ French, English ↔ Spanish, and English ↔ Chinese. Additional experiments suggest the quality of the resulting translation system gets closer to that of average human translators.
# 2 Related Work
Statistical Machine Translation (SMT) has been the dominant translation paradigm for decades [3, 4, 5]. Practical implementations of SMT are generally phrase-based systems (PBMT) which translate sequences of words or phrases where the lengths may differ [26].

Even prior to the advent of direct Neural Machine Translation, neural networks have been used as a component within SMT systems with some success. Perhaps one of the most notable attempts involved the use of a joint language model to learn phrase representations [13] which yielded an impressive improvement when combined with phrase-based translation. This approach, however, still makes use of phrase-based translation systems at its core, and therefore inherits their shortcomings. Other proposed approaches for learning phrase representations [7] or learning end-to-end translation with neural networks [24] offered encouraging hints, but ultimately delivered worse overall accuracy compared to standard phrase-based systems.
The concept of end-to-end learning for machine translation has been attempted in the past (e.g., [8]) with
limited success. Following seminal papers in the area [41, 2], NMT translation quality has crept closer to the level of phrase-based translation systems for common research benchmarks. Perhaps the first successful attempt at surpassing phrase-based translation was described in [31]. On WMT'14 English-to-French, this system achieved a 0.5 BLEU improvement compared to a state-of-the-art phrase-based system.

Since then, many novel techniques have been proposed to further improve NMT: using an attention mechanism to deal with rare words [37], a mechanism to model translation coverage [42], multi-task and semi-supervised training to incorporate more data [14, 29], a character decoder [9], a character encoder [11], subword units [38] also to deal with rare word outputs, different kinds of attention mechanisms [30], and sentence-level loss minimization [39, 34]. While the translation accuracy of these systems has been encouraging, systematic comparison with large scale, production quality phrase-based translation systems has been lacking.
# 3 Model Architecture
Our model (see Figure 1) follows the common sequence-to-sequence learning framework [41] with attention [2]. It has three components: an encoder network, a decoder network, and an attention network. The encoder transforms a source sentence into a list of vectors, one vector per input symbol. Given this list of vectors, the decoder produces one symbol at a time, until the special end-of-sentence symbol (EOS) is produced. The encoder and decoder are connected through an attention module which allows the decoder to focus on different regions of the source sentence during the course of decoding.
For notation, we use bold lower case to denote vectors (e.g., v, oi), bold upper case to represent matrices (e.g., U, W), cursive upper case to represent sets (e.g., V , T ), capital letters to represent sequences (e.g. X, Y ), and lower case to represent individual symbols in a sequence, (e.g., x1, x2).
Let (X, Y ) be a source and target sentence pair. Let X = x1, x2, x3, ..., xM be the sequence of M symbols in the source sentence and let Y = y1, y2, y3, ..., yN be the sequence of N symbols in the target sentence. The encoder is simply a function of the following form:
x_1, x_2, ..., x_M = EncoderRNN(x_1, x_2, x_3, ..., x_M)   (1)

In this equation, x_1, x_2, ..., x_M is a list of fixed size vectors. The number of members in the list is the same as the number of symbols in the source sentence (M in this example). Using the chain rule the conditional probability of the sequence P(Y | X) can be decomposed as:

P(Y | X) = P(Y | x_1, x_2, x_3, ..., x_M) = prod_{i=1}^{N} P(y_i | y_0, y_1, y_2, ..., y_{i-1}; x_1, x_2, x_3, ..., x_M)   (2)

where y_0 is a special "beginning of sentence" symbol that is prepended to every target sentence. During inference we calculate the probability of the next symbol given the source sentence encoding and the decoded target sequence so far:

P(y_i | y_0, y_1, y_2, y_3, ..., y_{i-1}; x_1, x_2, x_3, ..., x_M)   (3)

Our decoder is implemented as a combination of an RNN network and a softmax layer. The decoder RNN network produces a hidden state y_i for the next symbol to be predicted, which then goes through the softmax layer to generate a probability distribution over candidate output symbols.

In our experiments we found that for NMT systems to achieve good accuracy, both the encoder and decoder RNNs have to be deep enough to capture subtle irregularities in the source and target languages. This observation is similar to previous observations that deep LSTMs significantly outperform shallow LSTMs [41]. In that work, each additional layer reduced perplexity by nearly 10%. Similar to [31], we use a deep stacked Long Short Term Memory (LSTM) [23] network for both the encoder RNN and the decoder RNN.

Our attention module is similar to [2]. More specifically, let y_{i-1} be the decoder-RNN output from the past decoding time step (in our implementation, we use the output from the bottom decoder layer). Attention
Figure 1: The model architecture of GNMT, Google's Neural Machine Translation system. On the left is the encoder network, on the right is the decoder network, in the middle is the attention module. The bottom encoder layer is bi-directional: the pink nodes gather information from left to right while the green nodes gather information from right to left. The other layers of the encoder are uni-directional. Residual connections start from the layer third from the bottom in the encoder and decoder. The model is partitioned into multiple GPUs to speed up training. In our setup, we have 8 encoder LSTM layers (1 bi-directional layer and 7 uni-directional layers), and 8 decoder layers. With this setting, one model replica is partitioned 8-ways and is placed on 8 different GPUs typically belonging to one host machine. During training, the bottom bi-directional encoder layers compute in parallel first. Once both finish, the uni-directional encoder layers can start computing, each on a separate GPU. To retain as much parallelism as possible during running the decoder layers, we use the bottom decoder layer output only for obtaining recurrent attention context, which is sent directly to all the remaining decoder layers. The softmax layer is also partitioned and placed on multiple GPUs. Depending on the output vocabulary size we either have them run on the same GPUs as the encoder and decoder networks, or have them run on a separate set of dedicated GPUs.
context ai for the current time step is computed according to the following formulas:
$$s_t = \mathrm{AttentionFunction}(\mathbf{y}_{i-1}, \mathbf{x}_t) \quad \forall t,\ 1 \le t \le M$$
$$p_t = \exp(s_t) \Big/ \sum_{t=1}^{M} \exp(s_t) \quad \forall t,\ 1 \le t \le M$$
$$\mathbf{a}_i = \sum_{t=1}^{M} p_t \cdot \mathbf{x}_t \qquad (4)$$
where $\mathrm{AttentionFunction}$ in our implementation is a feedforward network with one hidden layer.
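A minimal sketch of this attention computation (equation 4) follows, assuming additive scoring through one hidden layer; `W1`, `W2`, and `v` are hypothetical parameter names, not the actual GNMT variables.

```python
import numpy as np

def attention_context(y_prev, encoder_outputs, W1, W2, v):
    """Compute the attention context a_i of equation 4: score every source
    position with a one-hidden-layer feedforward net, softmax-normalize the
    scores, and return the weighted sum of encoder outputs."""
    scores = np.array([v @ np.tanh(W1 @ y_prev + W2 @ x_t)   # s_t
                       for x_t in encoder_outputs])
    p = np.exp(scores - scores.max())
    p /= p.sum()                                             # p_t
    return sum(p_t * x_t for p_t, x_t in zip(p, encoder_outputs))  # a_i
```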
# 3.1 Residual Connections
As mentioned above, deep stacked LSTMs often give better accuracy than shallower models. However, simply stacking more layers of LSTM works only up to a certain number of layers, beyond which the network becomes too slow and difficult to train, likely due to exploding and vanishing gradient problems [33, 22]. In our experience with large-scale translation tasks, simple stacked LSTM layers work well up to 4 layers, barely with 6 layers, and very poorly beyond 8 layers.
Figure 2: The difference between normal stacked LSTM and our stacked LSTM with residual connections. On the left: simple stacked LSTM layers [41]. On the right: our implementation of stacked LSTM layers with residual connections. With residual connections, input to the bottom LSTM layer (the $x_i^0$'s to LSTM$_1$) is element-wise added to the output from the bottom layer (the $x_i^1$'s). This sum is then fed to the top LSTM layer (LSTM$_2$) as the new input.
Motivated by the idea of modeling differences between an intermediate layer's output and the targets, which has been shown to work well for many projects in the past [16, 21, 40], we introduce residual connections among the LSTM layers in a stack (see Figure 2). More concretely, let LSTM$_i$ and LSTM$_{i+1}$ be the $i$-th and $(i+1)$-th LSTM layers in a stack, whose parameters are $W^i$ and $W^{i+1}$ respectively. At the $t$-th time step, for the stacked LSTM without residual connections, we have:
$$c_t^i,\ m_t^i = \mathrm{LSTM}_i(c_{t-1}^i, m_{t-1}^i, x_t^{i-1}; W^i)$$
$$x_t^i = m_t^i$$
$$c_t^{i+1},\ m_t^{i+1} = \mathrm{LSTM}_{i+1}(c_{t-1}^{i+1}, m_{t-1}^{i+1}, x_t^i; W^{i+1}) \qquad (5)$$
where $x_t^i$ is the input to LSTM$_i$ at time step $t$, and $m_t^i$ and $c_t^i$ are the hidden states and memory states of LSTM$_i$ at time step $t$, respectively.
With residual connections between LSTM$_i$ and LSTM$_{i+1}$, the above equations become:
$$c_t^i,\ m_t^i = \mathrm{LSTM}_i(c_{t-1}^i, m_{t-1}^i, x_t^{i-1}; W^i)$$
$$x_t^i = m_t^i + x_t^{i-1}$$
$$c_t^{i+1},\ m_t^{i+1} = \mathrm{LSTM}_{i+1}(c_{t-1}^{i+1}, m_{t-1}^{i+1}, x_t^i; W^{i+1}) \qquad (6)$$
Residual connections greatly improve the gradient flow in the backward pass, which allows us to train very deep encoder and decoder networks. In most of our experiments, we use 8 LSTM layers for the encoder and decoder, though residual connections can allow us to train substantially deeper networks (similar to what was observed in [45]).
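The sketch below illustrates equation 6 for one time step through a stack, assuming a hypothetical cell interface `cell(state, x) -> (new_state, m)`; it adds a residual connection at every layer above the bottom, whereas Figure 1 starts them at the third layer.

```python
def stacked_lstm_step(cells, states, x_bottom):
    """One time step through a residual LSTM stack (equation 6). Each
    cell maps (state, input) to (new_state, m); the layer input is added
    to the layer output before it is fed to the next layer."""
    x, new_states = x_bottom, []
    for i, cell in enumerate(cells):
        state, m = cell(states[i], x)
        new_states.append(state)
        if i > 0:            # residual connection: x_t^i = m_t^i + x_t^{i-1}
            m = m + x
        x = m
    return new_states, x
```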
# 3.2 Bi-directional Encoder for First Layer
For translation systems, the information required to translate certain words on the output side can appear anywhere on the source side. Often the source side information is approximately left-to-right, similar to the target side, but depending on the language pair the information for a particular output word can be distributed and even be split up in certain regions of the input side.
To have the best possible context at each point in the encoder network it makes sense to use a bi-directional RNN [36] for the encoder, which was also used in [2]. To allow for maximum possible parallelization during computation (to be discussed in more detail in section 3.3), bi-directional connections are only used for the bottom encoder layer; all other encoder layers are uni-directional. Figure 3 illustrates our use of bi-directional LSTMs at the bottom encoder layer. The layer LSTM$_f$ processes the source sentence from left to right, while the layer LSTM$_b$ processes the source sentence from right to left. Outputs from LSTM$_f$ ($x_t^f$) and LSTM$_b$ ($x_t^b$) are first concatenated and then fed to the next layer.
Figure 3: The structure of bi-directional connections in the first layer of the encoder. LSTM layer LSTM$_f$ processes information from left to right, while LSTM layer LSTM$_b$ processes information from right to left. Output from LSTM$_f$ and LSTM$_b$ are first concatenated and then fed to the next LSTM layer LSTM$_1$.
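A sketch of the bottom bi-directional layer follows, assuming a hypothetical `run_lstm(cell, inputs)` helper that returns one output vector per input position.

```python
import numpy as np

def bidirectional_bottom_layer(run_lstm, lstm_f, lstm_b, source_inputs):
    """Bottom encoder layer as in Figure 3: lstm_f reads the source left to
    right, lstm_b right to left, and the per-position outputs are
    concatenated before the next (uni-directional) layer."""
    fwd = run_lstm(lstm_f, source_inputs)              # x_t^f
    bwd = run_lstm(lstm_b, source_inputs[::-1])[::-1]  # x_t^b, re-reversed
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```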
# 3.3 Model Parallelism
Due to the complexity of our model, we make use of both model parallelism and data parallelism to speed up training. Data parallelism is straightforward: we train n model replicas concurrently using a Downpour SGD algorithm [12]. The n replicas all share one copy of model parameters, with each replica asynchronously updating the parameters using a combination of Adam [25] and SGD algorithms. In our experiments, n is often around 10. Each replica works on a mini-batch of m sentence pairs at a time, which is often 128 in our experiments.
In addition to data parallelism, model parallelism is used to improve the speed of the gradient computation on each replica. The encoder and decoder networks are partitioned along the depth dimension and are placed on multiple GPUs, effectively running each layer on a different GPU. Since all but the first encoder layer are uni-directional, layer $i+1$ can start its computation before layer $i$ is fully finished, which improves training speed. The softmax layer is also partitioned, with each partition responsible for a subset of symbols in the output vocabulary. Figure 1 shows more details of how partitioning is done.
Model parallelism places certain constraints on the model architectures we can use. For example, we cannot afford to have bi-directional LSTM layers for all the encoder layers, since doing so would reduce parallelism among subsequent layers, as each layer would have to wait until both forward and backward directions of the previous layer have finished. This would effectively constrain us to make use of only 2 GPUs in parallel (one for the forward direction and one for the backward direction). For the attention portion of the model, we chose to align the bottom decoder output to the top encoder output to maximize parallelism when running the decoder network. Had we aligned the top decoder layer to the top encoder layer, we would have removed all parallelism in the decoder network and would not benefit from using more than one GPU for decoding.
# 4 Segmentation Approaches
Neural Machine Translation models often operate with fixed word vocabularies even though translation is fundamentally an open vocabulary problem (names, numbers, dates etc.). There are two broad categories of approaches to address the translation of out-of-vocabulary (OOV) words. One approach is to simply copy rare words from source to target (as most rare words are names or numbers where the correct translation is just a copy), either based on the attention model [37], using an external alignment model [31], or even using a more complicated special purpose pointing network [18]. Another broad category of approaches is to use sub-word units, e.g., characters [10], mixed word/characters [28], or more intelligent sub-words [38].
# 4.1 Wordpiece Model
Our most successful approach falls into the second category (sub-word units), and we adopt the wordpiece model (WPM) implementation initially developed to solve a Japanese/Korean segmentation problem for the Google speech recognition system [35]. This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters. It is similar to the method used in [38] to deal with rare words in Neural Machine Translation.
For processing arbitrary words, we first break words into wordpieces given a trained wordpiece model. Special word boundary symbols are added before training of the model such that the original word sequence can be recovered from the wordpiece sequence without ambiguity. At decoding time, the model first produces a wordpiece sequence, which is then converted into the corresponding word sequence. Here is an example of a word sequence and the corresponding wordpiece sequence:
⢠Word: Jet makers feud over seat width with big orders at stake
⢠wordpieces: _J et _makers _fe ud _over _seat _width _with _big _orders _at _stake
In the above example, the word "Jet" is broken into two wordpieces "_J" and "et", and the word "feud" is broken into two wordpieces "_fe" and "ud". The other words remain as single wordpieces. "_" is a special character added to mark the beginning of a word.
The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. Given a training corpus and a number of desired tokens D, the optimization problem is to select D wordpieces such that the resulting corpus is minimal in the number of wordpieces when segmented according to the chosen wordpiece model. Our greedy algorithm for this optimization problem is similar to [38] and is described in more detail in [35]. Compared to the original implementation used in [35], we use a special symbol only at the beginning of the words and not at both ends. We also cut the number of basic characters to a manageable number depending on the data (roughly 500 for Western languages, more for Asian languages) and map the rest to a special unknown character to avoid polluting the given wordpiece vocabulary with very rare characters. We find that using a total vocabulary of between 8k and 32k wordpieces achieves both good accuracy (BLEU scores) and fast decoding speed across all the language pairs we have tried.
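The learning algorithm itself is described in [35]; the sketch below only illustrates the deterministic segmentation step, using a greedy longest-match rule over a toy vocabulary, which is one plausible reading of the scheme rather than the exact production algorithm.

```python
def segment(word, vocab):
    """Greedily split one word into known wordpieces, longest piece first.
    '_' marks the beginning of a word, as in the example above."""
    pieces, text = [], "_" + word
    while text:
        for end in range(len(text), 0, -1):
            if text[:end] in vocab:
                pieces.append(text[:end])
                text = text[end:]
                break
        else:
            return ["<unk>"]   # no piece matches the remaining characters
    return pieces

vocab = {"_J", "et", "_fe", "ud"}  # toy vocabulary
print(segment("Jet", vocab))   # ['_J', 'et']
print(segment("feud", vocab))  # ['_fe', 'ud']
```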
As mentioned above, in translation it often makes sense to copy rare entity names or numbers directly from the source to the target. To facilitate this type of direct copying, we always use a shared wordpiece model for both the source language and target language. Using this approach, it is guaranteed that the same string in source and target sentence will be segmented in exactly the same way, making it easier for the system to learn to copy these tokens.
Wordpieces achieve a balance between the flexibility of characters and the efficiency of words. We also find that our models get better overall BLEU scores when using wordpieces, possibly due to the fact that our models now deal efficiently with an essentially infinite vocabulary without resorting to characters only. The latter would make the average lengths of the input and output sequences much longer, and therefore would require more computation.
# 4.2 Mixed Word/Character Model
A second approach we use is the mixed word/character model. As in a word model, we keep a fixed-size word vocabulary. However, unlike in a conventional word model where OOV words are collapsed into a single UNK symbol, we convert OOV words into sequences of their constituent characters. Special prefixes are prepended to the characters, to 1) show the location of the characters in a word, and 2) distinguish them from normal in-vocabulary characters. There are three prefixes: <B>, <M>, and <E>, indicating beginning of the word, middle of the word and end of the word, respectively. For example, let's assume the word Miki is not in the vocabulary. It will be preprocessed into a sequence of special tokens: <B>M <M>i <M>k <E>i. The process is done on both the source and the target sentences. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step.
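A minimal sketch of this tokenization follows; the prefix convention matches the description above, while the function name and toy vocabulary are illustrative.

```python
def tokenize_mixed(words, vocab):
    """In-vocabulary words pass through; OOV words become character
    sequences with the <B>/<M>/<E> position prefixes described above."""
    out = []
    for w in words:
        if w in vocab:
            out.append(w)
        else:
            for i, ch in enumerate(w):
                prefix = "<B>" if i == 0 else "<E>" if i == len(w) - 1 else "<M>"
                out.append(prefix + ch)
    return out

print(tokenize_mixed(["call", "Miki"], {"call"}))
# ['call', '<B>M', '<M>i', '<M>k', '<E>i']
```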
# 5 Training Criteria
Given a dataset of parallel text containing $N$ input-output sequence pairs, denoted $\mathcal{D} = \{(X^{(i)}, Y^{*(i)})\}_{i=1}^{N}$, standard maximum-likelihood training aims at maximizing the sum of log probabilities of the ground-truth outputs given the corresponding inputs, i.e.
$$\mathcal{O}_{\mathrm{ML}}(\theta) = \sum_{i=1}^{N} \log P_\theta(Y^{*(i)} \mid X^{(i)}). \qquad (7)$$
The main problem with this objective is that it does not reflect the task reward function as measured by the BLEU score in translation. Further, this objective does not explicitly encourage a ranking among incorrect output sequences (where outputs with higher BLEU scores should still obtain higher probabilities under the model), since incorrect outputs are never observed during training. In other words, using maximum-likelihood training only, the model will not learn to be robust to errors made during decoding since they are never observed, which is quite a mismatch between the training and testing procedure.

Several recent papers [34, 39, 32] have considered different ways of incorporating the task reward into the optimization of neural sequence-to-sequence models. In this work, we also attempt to refine a model pre-trained on the maximum likelihood objective to directly optimize for the task reward. We show that, even on large datasets, refinement of state-of-the-art maximum-likelihood models using task reward improves the results considerably.

We consider model refinement using the expected reward objective (also used in [34]), which can be expressed as
$$\mathcal{O}_{\mathrm{RL}}(\theta) = \sum_{i=1}^{N} \sum_{Y \in \mathcal{Y}} P_\theta(Y \mid X^{(i)}) \, r(Y, Y^{*(i)}). \qquad (8)$$
Here, $r(Y, Y^{*(i)})$ denotes the per-sentence score, and we are computing an expectation over all of the output sentences $Y$, up to a certain length.
The BLEU score has some undesirable properties when used for single sentences, as it was designed to be a corpus measure. We therefore use a slightly different score for our RL experiments which we call the "GLEU score". For the GLEU score, we record all sub-sequences of 1, 2, 3 or 4 tokens in output and target sequence (n-grams). We then compute a recall, which is the ratio of the number of matching n-grams to the number of total n-grams in the target (ground truth) sequence, and a precision, which is the ratio of the number of matching n-grams to the number of total n-grams in the generated output sequence. The GLEU score is then simply the minimum of recall and precision. This GLEU score's range is always between 0 (no matches) and 1 (all match) and it is symmetrical when switching output and target. According to our experiments, GLEU score correlates quite well with the BLEU metric on a corpus level but does not have its drawbacks for our per sentence reward objective.
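A small self-contained implementation of GLEU as described (n-gram matches clipped by multiset intersection for n = 1..4, then the minimum of precision and recall) might look like this:

```python
from collections import Counter

def gleu(output, target, max_n=4):
    """Per-sentence GLEU: the minimum of n-gram precision and recall,
    with matches clipped by multiset intersection, over 1- to 4-grams."""
    def ngrams(tokens):
        c = Counter()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                c[tuple(tokens[i:i + n])] += 1
        return c
    out, tgt = ngrams(output), ngrams(target)
    matches = sum((out & tgt).values())
    precision = matches / max(sum(out.values()), 1)
    recall = matches / max(sum(tgt.values()), 1)
    return min(precision, recall)

print(gleu("the cat sat".split(), "the cat sat".split()))  # 1.0
```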
As is common practice in reinforcement learning, we subtract the mean reward from $r(Y, Y^{*(i)})$ in equation 8. The mean is estimated to be the sample mean of $m$ sequences drawn independently from distribution $P_\theta(Y \mid X^{(i)})$. In our implementation, $m$ is set to be 15. To further stabilize training, we optimize a linear combination of ML (equation 7) and RL (equation 8) objectives as follows:
$$\mathcal{O}_{\mathrm{Mixed}}(\theta) = \alpha \cdot \mathcal{O}_{\mathrm{ML}}(\theta) + \mathcal{O}_{\mathrm{RL}}(\theta) \qquad (9)$$
α in our implementation is typically set to be 0.017.
In our setup, we first train a model using the maximum likelihood objective (equation 7) until convergence. We then refine this model using a mixed maximum likelihood and expected reward objective (equation 9), until the BLEU score on a development set is no longer improving. The second step is optional.
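Putting equations 8 and 9 together, a per-batch sketch of the mixed loss could look as follows; it uses a REINFORCE-style estimate of the expected-reward term with the sample-mean baseline described above, and the function shape is illustrative rather than the production training code.

```python
def mixed_loss(ml_loss, sample_log_probs, sample_rewards, alpha=0.017):
    """Sketch of the mixed ML/RL objective (equation 9) as a loss to
    minimize. sample_log_probs and sample_rewards come from m sequences
    sampled from the model for one source sentence; the mean sampled
    reward serves as the baseline."""
    baseline = sum(sample_rewards) / len(sample_rewards)
    # REINFORCE-style surrogate for the expected-reward term (equation 8)
    rl_loss = -sum(lp * (r - baseline)
                   for lp, r in zip(sample_log_probs, sample_rewards))
    return alpha * ml_loss + rl_loss
```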
# 6 Quantizable Model and Quantized Inference
One of the main challenges in deploying our Neural Machine Translation model to our interactive production translation service is that it is computationally intensive at inference, making low latency translation difficult, and high volume deployment computationally expensive. Quantized inference using reduced precision arithmetic is one technique that can significantly reduce the cost of inference for these models, often providing efficiency improvements on the same computational devices. For example, in [43], it is demonstrated that a convolutional neural network model can be sped up by a factor of 4-6 with minimal loss on classification accuracy on the ILSVRC-12 benchmark. In [27], it is demonstrated that neural network model weights can be quantized to only three states, -1, 0, and +1.
Many of those previous studies [19, 20, 43, 27], however, mostly focus on CNN models with relatively few layers. Deep LSTMs with long sequences pose a novel challenge in that quantization errors can be significantly amplified after many unrolled steps or after going through a deep LSTM stack.
In this section, we present our approach to speed up inference with quantized arithmetic. Our solution is tailored towards the hardware options available at Google. To reduce quantization errors, additional constraints are added to our model during training so that it is quantizable with minimal impact on the output of the model. That is, once a model is trained with these additional constraints, it can be subsequently quantized without loss of translation quality. Our experimental results suggest that those additional constraints hurt neither model convergence nor the quality of a model once it has converged.
Recall from equation 6 that in an LSTM stack with residual connections there are two accumulators: $c_t^i$ along the time axis and $x_t^i$ along the depth axis. In theory, both of the accumulators are unbounded, but in practice, we noticed their values remain quite small. For quantized inference, we explicitly constrain the values of these accumulators to be within $[-\delta, \delta]$ to guarantee a certain range that can be used for quantization later. The forward computation of an LSTM stack with residual connections is modified to the following:
$$c_t'^i,\ m_t^i = \mathrm{LSTM}_i(c_{t-1}^i, m_{t-1}^i, x_t^{i-1}; W^i)$$
$$c_t^i = \max(-\delta, \min(\delta, c_t'^i))$$
$$x_t'^i = m_t^i + x_t^{i-1}$$
$$x_t^i = \max(-\delta, \min(\delta, x_t'^i))$$
$$c_t'^{i+1},\ m_t^{i+1} = \mathrm{LSTM}_{i+1}(c_{t-1}^{i+1}, m_{t-1}^{i+1}, x_t^i; W^{i+1})$$
$$c_t^{i+1} = \max(-\delta, \min(\delta, c_t'^{i+1})) \qquad (10)$$
Let us expand LSTM$_i$ in equation 10 to include the internal gating logic. For brevity, we drop all the superscripts $i$.
$$W = [W_1, W_2, W_3, W_4, W_5, W_6, W_7, W_8]$$
$$i_t = \mathrm{sigmoid}(W_1 x_t + W_2 m_{t-1})$$
$$i_t' = \tanh(W_3 x_t + W_4 m_{t-1})$$
$$f_t = \mathrm{sigmoid}(W_5 x_t + W_6 m_{t-1}) \qquad (11)$$
$$o_t = \mathrm{sigmoid}(W_7 x_t + W_8 m_{t-1})$$
$$c_t = c_{t-1} \odot f_t + i_t' \odot i_t$$
$$m_t = c_t \odot o_t$$
When doing quantized inference, we replace all the floating-point operations in equations 10 and 11 with fixed-point integer operations with either 8-bit or 16-bit resolution. The weight matrix $W$ above is represented using an 8-bit integer matrix $\mathbf{WQ}$ and a float vector $\mathbf{s}$, as shown below:
$$s_i = \max(\mathrm{abs}(W[i, :]))$$
$$\mathbf{WQ}[i, j] = \mathrm{round}(W[i, j] / s_i \times 127.0) \qquad (12)$$
All accumulator values ($c_t^i$ and $x_t^i$) are represented using 16-bit integers representing the range $[-\delta, \delta]$. All matrix multiplications (e.g., $W_1 x_t$, $W_2 m_{t-1}$, etc.) in equation 11 are done using 8-bit integer multiplication accumulated into larger accumulators. All other operations, including all the activations (sigmoid, tanh) and elementwise operations ($\odot$, $+$) are done using 16-bit integer operations.
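A sketch of the weight quantization in equation 12, assuming each row has at least one nonzero entry:

```python
import numpy as np

def quantize_weights(W):
    """Equation 12: scale each row by its max absolute value and round to
    8-bit integers in [-127, 127]."""
    s = np.abs(W).max(axis=1)                              # s_i
    WQ = np.round(W / s[:, None] * 127.0).astype(np.int8)  # WQ[i, j]
    return WQ, s

W = np.random.randn(4, 8).astype(np.float32)
WQ, s = quantize_weights(W)
approx = WQ.astype(np.float32) * s[:, None] / 127.0
print(np.abs(W - approx).max())  # quantization error stays small
```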
We now turn our attention to the log-linear softmax layer. During training, given the decoder RNN network output $\mathbf{y}_t$, we compute the probability vector $p_t$ over all candidate output symbols as follows:
$$v_t = W_s \cdot \mathbf{y}_t$$
$$v_t' = \max(-\gamma, \min(\gamma, v_t))$$
$$p_t = \mathrm{softmax}(v_t') \qquad (13)$$
In equation 13, $W_s$ is the weight matrix for the linear layer, which has the same number of rows as the number of symbols in the target vocabulary, with each row corresponding to one unique target symbol. $v_t$ represents the raw logits, which are first clipped to be between $-\gamma$ and $\gamma$ and then normalized into a probability vector $p_t$. Input $\mathbf{y}_t$ is guaranteed to be between $-\delta$ and $\delta$ due to the quantization scheme we applied to the decoder RNN. The clipping range $\gamma$ for the logits is determined empirically, and in our case, it is set to 25. In quantized inference, the weight matrix $W_s$ is quantized into 8 bits as in equation 12, and the matrix multiplication is done using 8-bit arithmetic. The calculations within the softmax function and the attention model are not quantized during inference.
It is worth emphasizing that during training of the model we use full-precision floating-point numbers. The only constraints we add to the model during training are the clipping of the RNN accumulator values into $[-\delta, \delta]$ and softmax logits into $[-\gamma, \gamma]$. $\gamma$ is fixed at 25.0, while the value for $\delta$ is gradually annealed from a generous bound of $\delta = 8.0$ at the beginning of training, to a rather stringent bound of $\delta = 1.0$ towards the end of training. At inference time, $\delta$ is fixed at 1.0. Those additional constraints degrade neither model convergence nor the decoding quality of the model when it has converged. In Figure 4, we compare the loss vs. steps for an unconstrained model (the blue curve) and a constrained model (the red curve) on WMT'14 English-to-French. We can see that the loss for the constrained model is slightly better, possibly due to the regularization role those constraints play.
Our solution strikes a good balance between efficiency and accuracy. Since the computationally expensive operations (the matrix multiplications) are done using 8-bit integer operations, our quantized inference is quite efficient. Also, since error-sensitive accumulator values are stored using 16-bit integers, our solution is very accurate and is robust to quantization errors.
In Table 1 we compare the inference speed and quality when decoding the WMT'14 English-to-French development set (a concatenation of newstest2012 and newstest2013 test sets for a total of 6003 sentences)
Figure 4: Log perplexity vs. steps for normal (non-quantized) training and quantization-aware training on WMT'14 English to French during maximum likelihood training. Notice the training losses are similar, with the quantization-aware loss being slightly better. Our conjecture for quantization-aware training being slightly better is that the clipping constraints act as additional regularization which improves the model quality.
on CPU, GPU and Google's Tensor Processing Unit (TPU), respectively.1 The model used here for comparison is trained with quantization constraints on the ML objective only (i.e., without reinforcement learning based model refinement). When the model is decoded on CPU and GPU, it is not quantized and all operations are done using full-precision floats. When it is decoded on TPU, certain operations, such as embedding lookup and the attention module, remain on the CPU, and all other quantized operations are off-loaded to the TPU. In all cases, decoding is done on a single machine with two Intel Haswell CPUs, which together provide 88 CPU cores (hyperthreads). The machine is equipped with an NVIDIA GPU (Tesla K80) for the experiment with GPU or a single Google TPU for the experiment with TPU.
Table 1 shows that decoding using reduced precision arithmetic on the TPU suffers a very minimal loss of 0.0072 on log perplexity, and no loss on BLEU at all. This result matches previous work reporting that quantizing convolutional neural network models can retain most of the model quality.
Table 1 also shows that decoding our model on CPU is actually 2.3 times faster than on GPU. Firstly, our dual-CPU host machine offers a theoretical peak FLOP performance which is more than two thirds that of the GPU. Secondly, the beam search algorithm forces the decoder to incur a non-trivial amount of data transfer between the host and the GPU at every decoding step.
1https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html
Hence, our current decoder implementation is not fully utilizing the computation capacity that a GPU can theoretically offer during inference. Finally, Table 1 shows that decoding on TPUs is 3.4 times faster than decoding on CPUs, demonstrating that quantized arithmetic is much faster on TPUs than on both CPUs and GPUs.
Table 1: Model inference on CPU, GPU and TPU. The model used here for comparison is trained with the ML objective only, with quantization constraints. Results are obtained by decoding the WMT En→Fr development set on CPU, GPU and TPU respectively.
       BLEU    Log Perplexity   Decoding time (s)
CPU    31.20   1.4553           1322
GPU    31.20   1.4553           3028
TPU    31.21   1.4626           384
Unless otherwise noted, we always train and evaluate quantized models in our experiments. Because there is little difference from a quality perspective between a model decoded on CPUs and one decoded on TPUs, we use CPUs to decode for model evaluation during training and experimentation and use TPUs to serve production traffic.
# 7 Decoder
We use beam search during decoding to find the sequence $Y$ that maximizes a score function $s(Y, X)$ given a trained model. We introduce two important refinements to the pure max-probability based beam search algorithm: a coverage penalty [42] and length normalization. With length normalization, we aim to account for the fact that we have to compare hypotheses of different length. Without some form of length normalization, regular beam search will favor shorter results over longer ones on average, since a negative log-probability is added at each step, yielding lower (more negative) scores for longer sentences. We first tried to simply divide by the length to normalize. We then improved on that original heuristic by dividing by $\mathrm{length}^\alpha$, with $0 < \alpha < 1$, where $\alpha$ is optimized on a development set ($\alpha \in [0.6, 0.7]$ was usually found to be best). Eventually we designed the empirically better scoring function below, which also includes a coverage penalty to favor translations that fully cover the source sentence according to the attention module.
More concretely, the scoring function $s(Y, X)$ that we employ to rank candidate translations is defined as follows:
$$s(Y, X) = \log(P(Y \mid X)) / lp(Y) + cp(X; Y)$$
$$lp(Y) = \frac{(5 + |Y|)^\alpha}{(5 + 1)^\alpha}$$
$$cp(X; Y) = \beta \cdot \sum_{i=1}^{|X|} \log\Big(\min\Big(\sum_{j=1}^{|Y|} p_{i,j},\ 1.0\Big)\Big), \qquad (14)$$
where $p_{i,j}$ is the attention probability of the $j$-th target word $y_j$ on the $i$-th source word $x_i$. By construction (equation 4), $\sum_{i=1}^{|X|} p_{i,j}$ is equal to 1. Parameters $\alpha$ and $\beta$ control the strength of the length normalization and the coverage penalty. When $\alpha = 0$ and $\beta = 0$, our decoder falls back to pure beam search by probability. During beam search, we typically keep 8-12 hypotheses but we find that using fewer (4 or 2) has only slight negative effects on BLEU scores. Besides pruning the number of considered hypotheses, two other forms of pruning are used. Firstly, at each step, we only consider tokens that have local scores that are not more than beamsize below the best token for this step. Secondly, after a normalized best score has been found according to equation 14, we prune all hypotheses that are more than beamsize below the best normalized score so far. The latter type of pruning only applies to full hypotheses because it compares scores in the normalized space, which is only available when a hypothesis ends. This latter form of pruning also has the effect that very quickly no more hypotheses will be generated once a sufficiently good hypothesis has been found, so the search will end quickly. The pruning speeds up search by 30%-40% when run on CPUs
compared to not pruning (where we simply stop decoding after a predetermined maximum output length of twice the source length). Typically we use beamsize = 3.0, unless otherwise noted.
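For reference, equation 14 can be computed directly; the sketch below assumes `attention_probs[i][j]` holds $p_{i,j}$ and defaults to the $\alpha = 0.2$, $\beta = 0.2$ values used in our experiments.

```python
import math

def beam_score(log_prob, hyp_len, attention_probs, alpha=0.2, beta=0.2):
    """Scoring function s(Y, X) from equation 14 for a finished hypothesis
    of length hyp_len with total log-probability log_prob."""
    lp = (5 + hyp_len) ** alpha / (5 + 1) ** alpha          # length normalization
    cp = beta * sum(math.log(min(sum(row), 1.0))            # coverage penalty
                    for row in attention_probs)
    return log_prob / lp + cp
```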
To improve throughput during decoding we can put many sentences (typically up to 35) of similar length into a batch and decode all of those in parallel to make use of available hardware optimized for parallel computations. In this case the beam search only finishes if all hypotheses for all sentences in the batch are out of beam, which is slightly less efficient theoretically, but in practice is of negligible additional computational cost.
        α=0.0   α=0.2   α=0.4   α=0.6   α=0.8   α=1.0
β=0.0   30.3    31.4    31.4    31.4    31.4    31.4
β=0.2   30.7    31.4    31.4    31.4    31.4    31.3
β=0.4   30.9    31.4    31.4    31.3    31.2    31.2
β=0.6   31.1    31.3    31.1    30.9    30.8    30.6
β=0.8   31.2    30.8    30.5    30.1    29.8    29.4
β=1.0   31.1    30.3    29.6    28.9    28.1    27.2
Table 2: WMT'14 En→Fr BLEU score with respect to different values of α and β. The model in this experiment was trained using ML without RL refinement. A single WMT En→Fr model achieves a BLEU score of 30.3 on the development set when the beam search scoring function is purely based on the sequence probability (i.e., both α and β are 0). Slightly larger α and β values improve the BLEU score by up to +1.1 (α = 0.2, β = 0.2), with a wide range of α and β values giving results very close to the best BLEU scores.
Table 2 shows the impact of α and β on the BLEU score when decoding the WMT'14 English-to-French development set. The model used here for experiments is trained using the ML objective only (without RL refinement). As can be seen from the results, having some length normalization and coverage penalty improves the BLEU score considerably (from 30.3 to 31.4).
We find that length normalization (α) and coverage penalty (β) are less effective for models with RL refinement. Table 3 summarizes our results. This is understandable, as during RL refinement, the models already learn to pay attention to the full source sentence so as not to under-translate or over-translate, which would result in a penalty on the BLEU (or GLEU) scores.
        α=0.0   α=0.2   α=0.4   α=0.6   α=0.8   α=1.0
β=0.0   0.320   0.322   0.322   0.322   0.322   0.322
β=0.2   0.321   0.322   0.322   0.322   0.322   0.321
β=0.4   0.322   0.322   0.322   0.321   0.321   0.321
β=0.6   0.322   0.322   0.321   0.321   0.321   0.320
β=0.8   0.322   0.321   0.321   0.319   0.316   0.313
β=1.0   0.322   0.321   0.316   0.309   0.302   0.295
Table 3: WMT En→Fr BLEU score with respect to different values of α and β. The model used here is trained using ML, then refined with RL. Compared to the results in Table 2, coverage penalty and length normalization appear to be less effective for models after RL-based model refinement. Results are obtained on the development set.
We found that the optimal α and β vary slightly for different models. Based on tuning results using internal Google datasets, we use α = 0.2 and β = 0.2 in our experiments, unless noted otherwise.
# 8 Experiments and Results
In this section, we present our experimental results on two publicly available corpora used extensively as benchmarks for Neural Machine Translation systems: WMT'14 English-to-French (WMT En→Fr) and English-to-German (WMT En→De). On these two datasets, we benchmark GNMT models with word-based, character-based, and wordpiece-based vocabularies. We also present the improved accuracy of our models after fine-tuning with RL and model ensembling. Our main objective with these datasets is to show the contributions of various components in our implementation, in particular the wordpiece model, RL model refinement, and model ensembling.
In addition to testing on publicly available corpora, we also test GNMT on Google's translation production corpora, which are two to three decimal orders of magnitude bigger than the WMT corpora for a given language pair. We compare the accuracy of our model against human accuracy and the best Phrase-Based Machine Translation (PBMT) production system for Google Translate.
In all experiments, our models consist of 8 encoder layers and 8 decoder layers. (Since the bottom encoder layer is actually bi-directional, in total there are 9 logically distinct LSTM passes in the encoder.) The attention network is a simple feedforward network with one hidden layer with 1024 nodes. All of the models use 1024 LSTM nodes per encoder and decoder layer.
# 8.1 Datasets
We evaluate our model on the WMT En→Fr dataset, the WMT En→De dataset, as well as many Google-internal production datasets. On WMT En→Fr, the training set contains 36M sentence pairs. On WMT En→De, the training set contains 5M sentence pairs. In both cases, we use newstest2014 as the test set to compare against previous work [31, 37, 45]. The combination of newstest2012 and newstest2013 is used as the development set.
In addition to WMT, we also evaluate our model on some Google-internal datasets representing a wider spectrum of languages with distinct linguistic properties: English ↔ French, English ↔ Spanish and English ↔ Chinese.
# 8.2 Evaluation Metrics
We evaluate our models using the standard BLEU score metric. To be comparable to previous work [41, 31, 45], we report tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which is also used in [31].
As is well-known, the BLEU score does not fully capture the quality of a translation. For that reason we also carry out side-by-side (SxS) evaluations where we have human raters evaluate and compare the quality of two translations presented side by side for a given source sentence. Side-by-side scores range from 0 to 6, with a score of 0 meaning "completely nonsense translation", and a score of 6 meaning "perfect translation: the meaning of the translation is completely consistent with the source, and the grammar is correct". A translation is given a score of 4 if "the sentence retains most of the meaning of the source sentence, but may have some grammar mistakes", and a translation is given a score of 2 if "the sentence preserves some of the meaning of the source sentence but misses significant parts". These scores are generated by human raters who are fluent in both languages and hence often capture translation quality better than BLEU scores.
# 8.3 Training Procedure
The models are trained by a system we implemented using TensorFlow [1]. The training setup follows the classic data parallelism paradigm. There are 12 replicas running concurrently on separate machines. Every replica updates the shared parameters asynchronously.
We initialize all trainable parameters uniformly between [-0.04, 0.04]. As is common wisdom in training RNN models, we apply gradient clipping (similar to [41]): all gradients are uniformly scaled down such that the norm of the modified gradients is no larger than a fixed constant, which is 5.0 in our case. If the norm of the original gradients is already smaller than or equal to the given threshold, then gradients are not changed. For the first stage of maximum likelihood training (that is, to optimize for objective function 7), we use a combination of Adam [25] and simple SGD learning algorithms provided by the TensorFlow runtime system. We run Adam for the first 60k steps, after which we switch to simple SGD. Each step in training is a mini-batch of 128 examples.
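A sketch of this global-norm clipping rule, with the 5.0 threshold used here:

```python
import numpy as np

def clip_gradients(grads, max_norm=5.0):
    """Uniformly scale all gradients down if their joint norm exceeds
    max_norm; otherwise leave them unchanged."""
    norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if norm <= max_norm:
        return grads
    return [g * (max_norm / norm) for g in grads]
```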
We find that Adam accelerates training at the beginning, but Adam alone converges to a worse point than a combination of Adam first, followed by SGD (Figure 5).
Figure 5: Log perplexity vs. steps for Adam, SGD and Adam-then-SGD on WMT En→Fr during maximum likelihood training. Adam converges much faster than SGD at the beginning. Towards the end, however, Adam-then-SGD is gradually better. Notice the bump in the red curve (Adam-then-SGD) at around 60k steps where we switch from Adam to SGD. We suspect that this bump occurs due to different optimization trajectories of Adam vs. SGD. When we switch from Adam to SGD, the model first suffers a little, but is able to quickly recover afterwards.
For the Adam part, we use a learning rate of 0.0002, and for the SGD part, we use a learning rate of 0.5. We find that it is important to also anneal the learning rate after a certain number of total steps. For the WMT En→Fr dataset, we begin to anneal the learning rate after 1.2M steps, after which we halve the learning rate every 200k steps for an additional 800k steps. On WMT En→Fr, it takes around 6 days to train a basic model using 96 NVIDIA K80 GPUs.
Once a model is fully converged using the ML objective, we switch to RL based model refinement, i.e., we further optimize the objective function as in equation 9. We refine a model until the BLEU score does not change much on the development set. For this model refinement phase, we simply run the SGD optimization algorithm. The number of steps needed to refine a model varies from dataset to dataset. For WMT En→Fr, it takes around 3 days to complete 400k steps.
To prevent overfitting, we apply dropout during training with a scheme similar to [44]. For the WMT En→Fr and En→De datasets, we set the dropout probability to be 0.2 and 0.3 respectively. Due to various technical reasons, dropout is only applied during the ML training phase, not during the RL refinement phase. The exact hyper-parameters vary from dataset to dataset and from model to model. For the WMT En→De dataset, since it is significantly smaller than the WMT En→Fr dataset, we use a higher dropout
probability, and also train smaller models for fewer steps overall. On the production data sets, we typically do not use dropout, and we train the models for more steps.
# 8.4 Evaluation after Maximum Likelihood Training
The models in our experiments are word-based, character-based, mixed word-character-based or several wordpiece models with varying vocabulary sizes.
For the word model, we selected the most frequent 212K source words as the source vocabulary and the most frequent 80k target words as the target vocabulary. Words not in the source vocabulary or the target vocabulary (unknown words) are converted into special <first_char>_UNK_<last_char> symbols. Note that in this case there is more than one UNK (e.g., our production word models have roughly 5000 different UNKs). We then use the attention mechanism to copy a corresponding word from the source to replace these unknown words during decoding [37].
The mixed word-character model is similar to the word model, except that out-of-vocabulary (OOV) words are converted into sequences of characters with special delimiters around them, as described in section 4.2 in more detail. In our experiments, the vocabulary size for the mixed word-character model is 32K. For the pure character model, we simply split all words into constituent characters, resulting typically in a few hundred basic characters (including special symbols appearing in the data). For the wordpiece models, we train 3 different models with vocabulary sizes of 8K, 16K, and 32K.
Table 4 summarizes our results on the WMT En→Fr dataset. In this table, we also compare against other strong baselines without model ensembling. As can be seen from the table, "WPM-32K", a wordpiece model with a shared source and target vocabulary of 32K wordpieces, performs well on this dataset and achieves the best quality as well as the fastest inference speed.
The pure character model (char input, char output) works surprisingly well on this task, not much worse than the best wordpiece models in BLEU score. However, these models are rather slow to train and slow to use as the sequences are much longer.
Our best model, WPM-32K, achieves a BLEU score of 38.95. Note that this BLEU score represents the averaged score of 8 models we trained. The maximum BLEU score of the 8 models is higher at 39.37. We point out that our models are completely self-contained, as opposed to previous models reported in [45], which depend on some external alignment models to achieve their best results. Also note that all our test set numbers were achieved by picking an optimal model on the development set which was then used to decode the test set.
Note that the timing numbers for this section are obtained on CPUs, not TPUs. We use here the same CPU machine as described above, and run the decoder with a batchsize of 16 sentences in parallel and a maximum of 4 concurrent hypotheses at any time per sentence. The time per sentence is the total decoding time divided by the number of respective sentences in the test set.
Table 4: Single model results on WMT En→Fr (newstest2014)

Model                            BLEU    CPU decoding time per sentence (s)
Word                             37.90   0.2226
Character                        38.01   1.0530
WPM-8K                           38.27   0.1919
WPM-16K                          37.60   0.1874
WPM-32K                          38.95   0.2118
Mixed Word/Character             38.39   0.2774
PBMT [15]                        37.0
LSTM (6 layers) [31]             31.5
LSTM (6 layers + PosUnk) [31]    33.1
Deep-Att [45]                    37.7
Deep-Att + PosUnk [45]           39.2
Similarly, the results on WMT En→De are presented in Table 5. Again, we find that wordpiece models achieve the best BLEU scores.
Table 5: Single model results on WMT En→De (newstest2014)
Model                    BLEU    CPU decoding time per sentence (s)
Word                     23.12   0.2972
Character (512 nodes)    22.62   0.8011
WPM-8K                   23.50   0.2079
WPM-16K                  24.36   0.1931
WPM-32K                  24.61   0.1882
Mixed Word/Character     24.17   0.3268
PBMT [6]                 20.7
RNNSearch [37]           16.5
RNNSearch-LV [37]        16.9
RNNSearch-LV [37]        16.9
Deep-Att [45]            20.6
WMT En→De is considered a more difficult task than WMT En→Fr as it has much less training data, and German, as a more morphologically rich language, needs a huge vocabulary for word models. Thus it is more advantageous to use wordpiece or mixed word/character models, which provide a gain of more than 2 BLEU points on top of the word model and about 4 BLEU points on top of previously reported results in [6, 45]. Our best model, WPM-32K, achieves a BLEU score of 24.61, which is averaged over 8 runs. Consistently, on the production corpora, wordpiece models tend to be better than other models both in terms of speed and accuracy.
# 8.5 Evaluation of RL-refined Models
The models trained in the previous section are optimized for log-likelihood of the next step prediction, which may not correlate well with translation quality, as discussed in section 5. We use RL training to fine-tune sentence BLEU scores after normal maximum-likelihood training.
The results of RL fine-tuning on the best En→Fr and En→De models are presented in Table 6, which show that fine-tuning the models with RL can improve BLEU scores. On WMT En→Fr, model refinement improves the BLEU score by close to 1 point. On En→De, RL-refinement slightly hurts the test performance even though we observe about 0.4 BLEU points improvement on the development set. The results presented in Table 6 are the average of 8 independent models. We also note that there is an overlap between the wins from the RL refinement and the decoder fine-tuning (i.e., the introduction of length normalization and coverage penalty). On a less fine-tuned decoder (e.g., if the decoder does beam search by log-probability only), the win from RL would have been bigger (as is evident from comparing results in Table 2 and Table 3).
Table 6: Single model test BLEU scores, averaged over 8 runs, on WMT En→Fr and En→De

Dataset   Trained with log-likelihood   Refined with RL
En→Fr     38.95                         39.92
En→De     24.67                         24.60
# 8.6 Model Ensemble and Human Evaluation
We ensemble 8 RL-refined models to obtain a state-of-the-art result of 41.16 BLEU points on the WMT En→Fr dataset. Our results are reported in Table 7.

We ensemble 8 RL-refined models to obtain a state-of-the-art result of 26.30 BLEU points on the WMT En→De dataset. Our results are reported in Table 8.
Finally, to better understand the quality of our models and the effect of RL refinement, we carried out a four-way side-by-side human evaluation to compare our NMT translations against the reference translations and the best phrase-based statistical machine translations.
Table 7: Model ensemble results on WMT En→Fr (newstest2014)
Model                                BLEU
WPM-32K (8 models)                   40.35
RL-refined WPM-32K (8 models)        41.16
LSTM (6 layers) [31]                 35.6
LSTM (6 layers + PosUnk) [31]        37.5
Deep-Att + PosUnk (8 models) [45]    40.4
Table 8: Model ensemble results on WMT En→De (newstest2014). See Table 5 for a comparison against non-ensemble models.
Model                            BLEU
WPM-32K (8 models)               26.20
RL-refined WPM-32K (8 models)    26.30
During the side-by-side comparison, humans are asked to rate four translations given a source sentence. The four translations are: 1) the best phrase-based translations as downloaded from http://matrix.statmt.org/systems/show/2065, 2) an ensemble of 8 ML-trained models, 3) an ensemble of 8 ML-trained and then RL-refined models, and 4) reference human translations as taken directly from newstest2014. Our results are presented in Table 9.
Table 9: Human side-by-side evaluation scores of WMT En→Fr models.
Model           BLEU    Side-by-side averaged score
PBMT [15]       37.0    3.87
NMT before RL   40.35   4.46
NMT after RL    41.16   4.44
Human                   4.82
The results show that even though RL refinement can achieve better BLEU scores, it barely improves the human impression of the translation quality. This could be due to a combination of factors including: 1) the relatively small sample size for the experiment (only 500 examples for side-by-side), 2) the improvement in BLEU score by RL is relatively small after model ensembling (0.81), which may be at a scale that human side-by-side evaluations are insensitive to, and 3) the possible mismatch between BLEU as a metric and real translation quality as perceived by human raters. Table 11 contains some example translations from PBMT, "NMT before RL" and "Human", along with the side-by-side scores that human raters assigned to each translation (some of which we disagree with, see the table caption).
# 8.7 Results on Production Data
We have carried out extensive experiments on many Google-internal production data sets. As the experiments above cast doubt on whether RL improves the real translation quality or simply the BLEU metric, RL-based model refinement is not used during these experiments. Given the larger volume of training data available in the Google corpora, dropout is also not needed in these experiments.
In this section we describe our experiments with human perception of the translation quality. We asked human raters to rate translations in a three-way side-by-side comparison. The three sides are from: 1) translations from the production phrase-based statistical translation system used by Google, 2) translations from our GNMT system, and 3) translations by humans fluent in both languages. Reported here in Table 10 are averaged rated scores for English ↔ French, English ↔ Spanish and English ↔ Chinese. All the GNMT models are wordpiece models, without model ensembling, and use a shared source and target vocabulary with 32K wordpieces. On each pair of languages, the evaluation data consist of 500 randomly sampled sentences from Wikipedia and news websites, and the corresponding human translations to the target language.
Table 10: Mean of side-by-side scores on production data

Language pair        Relative Improvement
English → Spanish    87%
English → French     64%
English → Chinese    58%
Spanish → English    63%
French → English     83%
Chinese → English    60%
The results show that our model reduces translation errors by more than 60% compared to the PBMT model on these major pairs of languages. A typical distribution of side-by-side scores is shown in Figure 6.
Figure 6: Histogram of side-by-side scores on 500 sampled sentences from Wikipedia and news websites for a typical language pair, here English → Spanish (PBMT blue, GNMT red, Human orange). It can be seen that there is a wide distribution in scores, even for the human translation when rated by other humans, which shows how ambiguous the task is. It is clear that GNMT is much more accurate than PBMT.
As expected, on this metric the GNMT system also improves compared to the PBMT system. In some cases human and GNMT translations are nearly indistinguishable on the relatively simplistic and isolated sentences sampled from Wikipedia and news articles for this experiment. Note that we have observed that human raters, even though fluent in both languages, do not necessarily fully understand each randomly sampled sentence sufficiently and hence cannot necessarily generate the best possible translation or rate a given translation accurately. Also note that, although the scale for the scores goes from 0 (complete nonsense) to 6 (perfect translation), the human translations get an imperfect score of only around 5 in Table 10, which shows possible ambiguities in the translations and also possibly non-calibrated raters and translators with a varying level of proficiency.
Testing our GNMT system on particularly difficult translation cases and longer inputs than just single sentences is the subject of future work.
# 9 Conclusion
In this paper, we describe in detail the implementation of Google's Neural Machine Translation (GNMT) system, including all the techniques that are critical to its accuracy, speed, and robustness. On the public WMT'14 translation benchmark, our system's translation quality approaches or surpasses all currently published results. More importantly, we also show that our approach carries over to much larger production data sets, which have several orders of magnitude more data, to deliver high quality translations.
Our key findings are: 1) that wordpiece modeling effectively handles open vocabularies and the challenge of morphologically rich languages for translation quality and inference speed, 2) that a combination of model and data parallelism can be used to efficiently train state-of-the-art sequence-to-sequence NMT models in roughly a week, 3) that model quantization drastically accelerates translation inference, allowing the use of these large models in a deployed production environment, and 4) that many additional details like length-normalization, coverage penalties, and similar are essential to making NMT systems work well on real data.
Using human-rated side-by-side comparison as a metric, we show that our GNMT system approaches the accuracy achieved by average bilingual human translators on some of our test sets. In particular, compared to the previous phrase-based production system, this GNMT system delivers roughly a 60% reduction in translation errors on several popular language pairs.
# Acknowledgements
We would like to thank the entire Google Brain Team and Google Translate Team for their foundational contributions to this project.
# References
[1] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zheng, X. TensorFlow: A system for large-scale machine learning. Tech. rep., Google Brain, 2016. arXiv preprint.
[2] Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (2015).
[3] Brown, P., Cocke, J., Pietra, S. D., Pietra, V. D., Jelinek, F., Mercer, R., and Roossin, P. A statistical approach to language translation. In Proceedings of the 12th Conference on Computational Linguistics - Volume 1 (Stroudsburg, PA, USA, 1988), COLING â88, Association for Computational Linguistics, pp. 71â76.
[4] Brown, P. F., Cocke, J., Pietra, S. A. D., Pietra, V. J. D., Jelinek, F., Lafferty, J. D., Mercer, R. L., and Roossin, P. S. A statistical approach to machine translation. Computational linguistics 16, 2 (1990), 79â85.
[5] Brown, P. F., Pietra, V. J. D., Pietra, S. A. D., and Mercer, R. L. The mathematics of statistical machine translation: Parameter estimation. Comput. Linguist. 19, 2 (June 1993), 263â311.
[6] Buck, C., Heafield, K., and Van Ooyen, B. N-gram counts and language models from the common crawl. In LREC (2014), vol. 2, Citeseer, p. 4.
[7] Cho, K., van Merrienboer, B., Gülçehre, Ç., Bougares, F., Schwenk, H., and Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (2014).
[8] Chrisman, L. Learning recursive distributed representations for holistic computation. Connection Science 3, 4 (1991), 345â366.
[9] Chung, J., Cho, K., and Bengio, Y. A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147 (2016).
[10] Chung, J., Cho, K., and Bengio, Y. A character-level decoder without explicit segmentation for neural machine translation. CoRR abs/1603.06147 (2016).
[11] Costa-Jussà, M. R., and Fonollosa, J. A. R. Character-based neural machine translation. CoRR abs/1603.00810 (2016).
[12] Dean, J., Corrado, G. S., Monga, R., Chen, K., Devin, M., Le, Q. V., Mao, M. Z., Ranzato, M., Senior, A., Tucker, P., Yang, K., and Ng, A. Y. Large scale distributed deep networks. In NIPS (2012).
[13] Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R. M., and Makhoul, J. Fast and robust neural network joint models for statistical machine translation. In ACL (1) (2014), Citeseer, pp. 1370â1380.
[14] Dong, D., Wu, H., He, W., Yu, D., and Wang, H. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (2015), pp. 1723â1732.
[15] Durrani, N., Haddow, B., Koehn, P., and Heafield, K. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation (2014), Association for Computational Linguistics Baltimore, MD, USA, pp. 97-104.
[16] Fahlman, S. E., and Lebiere, C. The cascade-correlation learning architecture. In Advances in Neural Information Processing Systems 2 (1990), Morgan Kaufmann, pp. 524â532.
[17] Gers, F. A., Schmidhuber, J., and Cummins, F. Learning to forget: Continual prediction with LSTM. Neural computation 12, 10 (2000), 2451â2471.
[18] Gülçehre, Ç., Ahn, S., Nallapati, R., Zhou, B., and Bengio, Y. Pointing the unknown words. CoRR abs/1603.08148 (2016).
[19] Gupta, S., Agrawal, A., Gopalakrishnan, K., and Narayanan, P. Deep learning with limited numerical precision. CoRR abs/1502.02551 (2015).
[20] Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR abs/1510.00149 (2015).
[21] He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (2015).
[22] Hochreiter, S., Bengio, Y., Frasconi, P., and Schmidhuber, J. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[23] Hochreiter, S., and Schmidhuber, J. Long short-term memory. Neural computation 9, 8 (1997), 1735â1780.
[24] Kalchbrenner, N., and Blunsom, P. Recurrent continuous translation models. In Conference on Empirical Methods in Natural Language Processing (2013).
[25] Kingma, D. P., and Ba, J. Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2014).
[26] Koehn, P., Och, F. J., and Marcu, D. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics (2003).
[27] Li, F., and Liu, B. Ternary weight networks. CoRR abs/1605.04711 (2016).
[28] Luong, M., and Manning, C. D. Achieving open vocabulary neural machine translation with hybrid word-character models. CoRR abs/1604.00788 (2016).
[29] Luong, M.-T., Le, Q. V., Sutskever, I., Vinyals, O., and Kaiser, L. Multi-task sequence to sequence learning. In International Conference on Learning Representations (2015).
[30] Luong, M.-T., Pham, H., and Manning, C. D. Effective approaches to attention-based neural machine translation. In Conference on Empirical Methods in Natural Language Processing (2015).
[31] Luong, M.-T., Sutskever, I., Le, Q. V., Vinyals, O., and Zaremba, W. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (2015).
[32] Norouzi, M., Bengio, S., Chen, Z., Jaitly, N., Schuster, M., Wu, Y., and Schuurmans, D. Reward augmented maximum likelihood for neural structured prediction. In Neural Information Processing Systems (2016).
[33] Pascanu, R., Mikolov, T., and Bengio, Y. Understanding the exploding gradient problem. CoRR abs/1211.5063 (2012).
[34] Ranzato, M., Chopra, S., Auli, M., and Zaremba, W. Sequence level training with recurrent neural networks. In International Conference on Learning Representations (2015).
[35] Schuster, M., and Nakajima, K. Japanese and Korean voice search. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (2012).
[36] Schuster, M., and Paliwal, K. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45, 11 (Nov. 1997), 2673â2681.
[37] Jean, S., Cho, K., Memisevic, R., and Bengio, Y. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (2015).
[38] Sennrich, R., Haddow, B., and Birch, A. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016).
[39] Shen, S., Cheng, Y., He, Z., He, W., Wu, H., Sun, M., and Liu, Y. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016).
[40] Srivastava, R. K., Greff, K., and Schmidhuber, J. Highway networks. CoRR abs/1505.00387 (2015).
[41] Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (2014), pp. 3104â3112.
[42] Tu, Z., Lu, Z., Liu, Y., Liu, X., and Li, H. Coverage-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016).
[43] Wu, J., Leng, C., Wang, Y., Hu, Q., and Cheng, J. Quantized convolutional neural networks for mobile devices. CoRR abs/1512.06473 (2015).
[44] Zaremba, W., Sutskever, I., and Vinyals, O. Recurrent neural network regularization, 2014.
[45] Zhou, J., Cao, Y., Wang, X., Li, P., and Xu, W. Deep recurrent models with fast-forward connections for neural machine translation. CoRR abs/1606.04199 (2016).
Table 11: Some example translations from PBMT [15], our GNMT system (the "NMT before RL", Table 9), and Human. Source and target sentences (human translations) are from the public benchmark WMT En→Fr (newstest2014) data set. The ratings in parentheses are human scores on a scale of 0 (complete nonsense) to 6 (perfect translation). We disagree with some of the human ratings, e.g., the translation "Elle a été repéré trois jours plus tard par un promeneur de chien piégé dans la carrière" contains grammatical mistakes and changes the semantics, and is still scored 6. We present it to illustrate the potential problems of the scoring process.
Example 1
Source: "The reason Boeing are doing this is to cram more seats in to make their plane more competitive with our products," said Kevin Keniston, head of passenger comfort at Europe's Airbus.
PBMT (3.0): "La raison pour laquelle Boeing sont en train de faire, c'est de concentrer davantage de sièges pour prendre leur avion plus compétitive avec nos produits", a déclaré Kevin M. Keniston, chef du confort des passagers de l'Airbus de l'Europe.
GNMT (6.0): "La raison pour laquelle Boeing fait cela est de créer plus de sièges pour rendre son avion plus compétitif avec nos produits", a déclaré Kevin Keniston, chef du confort des passagers chez Airbus.
Human (6.0): "Boeing fait ça pour pouvoir caser plus de sièges et rendre ses avions plus compétitifs par rapports à nos produits", a déclaré Kevin Keniston, directeur de Confort Passager chez l'avionneur européen Airbus.

Example 2
Source: When asked about this, an official of the American administration replied: "The United States is not conducting electronic surveillance aimed at offices of the World Bank and IMF in Washington."
PBMT (3.0): Interrogé à ce sujet, un responsable de l'administration américaine a répondu : "Les Etats-Unis n'est pas effectuer une surveillance électronique destiné aux bureaux de la Banque mondiale et du FMI à Washington".
GNMT (6.0): Interrogé à ce sujet, un fonctionnaire de l'administration américaine a répondu: "Les États-Unis n'effectuent pas de surveillance électronique à l'intention des bureaux de la Banque mondiale et du FMI à Washington".
Human (6.0): Interrogé sur le sujet, un responsable de l'administration américaine a répondu: "les Etats-Unis ne mènent pas de surveillance électronique visant les sièges de la Banque mondiale et du FMI à Washington".

Example 3
Source: Martin told CNN that he asked Daley whether his then-boss knew about the potential shuffle.
PBMT (2.0): Martin a déclaré à CNN qu'il a demandé Daley si son patron de l'époque connaissaient le potentiel remaniement ministériel.
GNMT (6.0): Martin a dit à CNN qu'il avait demandé à Daley si son patron d'alors était au courant du remaniement potentiel.
Human (5.0): Martin a dit sur CNN qu'il avait demandé à Daley si son patron d'alors était au courant du remaniement éventuel.

Example 4
Source: She was spotted three days later by a dog walker trapped in the quarry
PBMT (6.0): Elle a été repéré trois jours plus tard par un promeneur de chien piégé dans la carrière
GNMT (2.0): Elle a été repérée trois jours plus tard par un traîneau à chiens piégé dans la carrière.
Human (5.0): Elle a été repérée trois jours plus tard par une personne qui promenait son chien coincée dans la carrière

Example 5
Source: Analysts believe the country is unlikely to slide back into full-blown conflict, but recent events have unnerved foreign investors and locals.
PBMT (5.0): Les analystes estiment que le pays a peu de chances de retomber dans un conflit total, mais les événements récents ont inquiété les investisseurs étrangers et locaux.
GNMT (2.0): Selon les analystes, il est peu probable que le pays retombe dans un conflit généralisé, mais les événements récents ont attiré des investisseurs étrangers et des habitants locaux.
Human (5.0): Les analystes pensent que le pays ne devrait pas retomber dans un conflit ouvert, mais les récents évènements ont ébranlé les investisseurs étrangers et la population locale.
"id": "1603.06147"
} |
1609.07410 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 |
# One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities
Michalis K. Titsias Department of Informatics Athens University of Economics and Business mtitsias@aueb.gr
# Abstract
The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as large scale classification, neural language modeling and recommendation systems. However, softmax estimation is very expensive for large scale inference because of the high cost associated with computing the normalizing constant. Here, we introduce an efficient approximation to softmax probabilities which takes the form of a rigorous lower bound on the exact probability. This bound is expressed as a product over pairwise probabilities and it leads to scalable estimation based on stochastic optimization. It allows us to perform doubly stochastic estimation by subsampling both training instances and class labels. We show that the new bound has interesting theoretical properties and we demonstrate its use in classification problems.
# 1 Introduction
Based on the softmax representation, the probability of a variable y taking the value k ∈ {1, . . . , K}, where K is the number of categorical symbols or classes, is modeled by

$$p(y = k | x) = \frac{e^{f_k(x; w)}}{\sum_{m=1}^{K} e^{f_m(x; w)}}, \qquad (1)$$
where each $f_k(x; w)$ is often referred to as the score function: a real-valued function indexed by an input vector x and parameterized by w. The score function measures the compatibility of input x with symbol y = k, so that the higher the score, the more compatible x is with y = k. The most common application of softmax is multiclass classification, where x is an observed input vector and $f_k(x; w)$ is often chosen to be a linear function or, more generally, a non-linear function such as a neural network (Bishop, 2006; Goodfellow et al., 2016). Several other applications of softmax arise, for instance, in neural language modeling for learning word vector embeddings (Mnih and Teh, 2012; Mikolov et al., 2013; Pennington et al., 2014) and also in collaborative filtering for representing probabilities of (user, item) pairs (Paquet et al., 2012). In such applications the number of symbols K can often be very large, e.g. of the order of tens of thousands or millions, which makes the computation of softmax probabilities very expensive due to the large sum in the normalizing constant of Eq. (1). Thus, exact training procedures based on maximum likelihood or Bayesian approaches are computationally prohibitive and approximations are needed. While some rigorous bound-based approximations to the softmax exist (Bouchard, 2007), they are not very accurate or scalable, and therefore it would be highly desirable to develop accurate and computationally efficient approximations.
In this paper we introduce a new efficient approximation to softmax probabilities which takes the form of a lower bound on the probability of Eq. (1). This bound draws an interesting connection between the exact softmax probability and all its one-vs-each pairwise probabilities, and it has several
desirable properties. Firstly, for the non-parametric estimation case it leads to an approximation of the likelihood that shares the same global optimum with exact maximum likelihood, and thus estimation based on the approximation is a perfect surrogate for the initial estimation problem. Secondly, the bound allows for scalable learning through stochastic optimization where data subsampling can be combined with subsampling categorical symbols. Thirdly, whenever the initial exact softmax cost function is convex, the bound remains convex as well.
Regarding related work, there exist several other methods that try to deal with the high cost of softmax, such as methods that attempt to perform the exact computations (Gopal and Yang, 2013; Vijayanarasimhan et al., 2014), methods that change the model based on hierarchical or stick-breaking constructions (Morin and Bengio, 2005; Khan et al., 2012) and sampling-based methods (Bengio and Sénécal, 2003; Mikolov et al., 2013; Devlin et al., 2014; Ji et al., 2015). Our method is a lower bound based approach that follows the variational inference framework. Other rigorous variational lower bounds on the softmax have been used before (Bohning, 1992; Bouchard, 2007); however, they are not easily scalable since they require optimizing data-specific variational parameters. In contrast, the bound we introduce in this paper does not contain any variational parameters, which greatly facilitates stochastic minibatch training. At the same time it can be much tighter than previous bounds (Bouchard, 2007), as we will demonstrate empirically on several classification datasets.
# 2 One-vs-each lower bound on the softmax
Here, we derive the new bound on the softmax (Section 2.1) and we prove its optimality property when performing approximate maximum likelihood estimation (Section 2.2). Such a property holds for the non-parametric case, where we estimate probabilities of the form p(y = k), without conditioning on some x, so that the score functions $f_k(x; w)$ reduce to unrestricted parameters $f_k$; see Eq. (2) below. Finally, we also analyze the related bound derived by Bouchard (Bouchard, 2007) and compare it with our approach (Section 2.3).
# 2.1 Derivation of the bound
Consider a discrete random variable y ∈ {1, . . . , K} that takes the value k with probability,

$$p(y = k) = \text{Softmax}_k(f_1, \ldots, f_K) = \frac{e^{f_k}}{\sum_{m=1}^{K} e^{f_m}}, \qquad (2)$$

where each $f_k$ is a free real-valued scalar parameter. We wish to express a lower bound on p(y = k), and the key step of our derivation is to re-write p(y = k) as

$$p(y = k) = \frac{1}{1 + \sum_{m \neq k} e^{-(f_k - f_m)}}. \qquad (3)$$
Then, by exploiting the fact that for any non-negative numbers $\alpha_1$ and $\alpha_2$ it holds that $1 + \alpha_1 + \alpha_2 \leq 1 + \alpha_1 + \alpha_2 + \alpha_1 \alpha_2 = (1 + \alpha_1)(1 + \alpha_2)$, and more generally that $1 + \sum_i \alpha_i \leq \prod_i (1 + \alpha_i)$ where each $\alpha_i \geq 0$, we obtain the following lower bound on the above probability,

$$p(y = k) \geq \prod_{m \neq k} \frac{1}{1 + e^{-(f_k - f_m)}} = \prod_{m \neq k} \frac{e^{f_k}}{e^{f_k} + e^{f_m}} = \prod_{m \neq k} \sigma(f_k - f_m), \qquad (4)$$
where σ(·) denotes the sigmoid function. Clearly, the terms in the product are pairwise probabilities, each corresponding to the event y = k conditional on the union of pairs of events, i.e. y ∈ {k, m} where m is one of the remaining values. We will refer to this bound as the one-vs-each bound on the softmax probability, since it involves K − 1 comparisons of a specific event y = k versus each of the K − 1 remaining events. Furthermore, the above result can be stated more generally to define bounds on arbitrary probabilities, as the following statement shows.

Proposition 1. Assume a probability model with state space Ω and probability measure P(·). For any event A ⊂ Ω and an associated countable set of disjoint events $\{B_i\}$ such that $\cup_i B_i = \Omega \setminus A$, it holds

$$P(A) \geq \prod_i P(A \mid A \cup B_i). \qquad (5)$$
Proof. Given that $P(A) = \frac{P(A)}{P(A) + \sum_i P(B_i)} = \frac{1}{1 + \sum_i \alpha_i}$ with $\alpha_i = P(B_i)/P(A)$, the result follows by applying the inequality $1 + \sum_i \alpha_i \leq \prod_i (1 + \alpha_i)$ exactly as done above for the softmax parameterization.
Remark. If the set $\{B_i\}$ consists of a single event B, then by definition $B = \Omega \setminus A$ and the bound is exact, since in such a case $P(A \mid A \cup B) = P(A)$.
Furthermore, based on the above construction we can express a full class of hierarchically ordered bounds. For instance, if we merge two events $B_i$ and $B_j$ into a single one, then the term $P(A \mid A \cup B_i) P(A \mid A \cup B_j)$ in the initial bound is replaced with $P(A \mid A \cup B_i \cup B_j)$, and the associated new bound, obtained after this merge, can only become tighter. To see a more specific example in the softmax probabilistic model, assume a small subset of categorical symbols $C_k$ that does not include k, and denote the remaining symbols excluding k by $\bar{C}_k$, so that $\{k\} \cup C_k \cup \bar{C}_k = \{1, \ldots, K\}$. Then, a tighter bound, that sits higher in the hierarchy than the one-vs-each bound (see Eq. 4), takes the form

$$p(y = k) \geq \text{Softmax}_k(f_k, f_{C_k}) \times \text{Softmax}_k(f_k, f_{\bar{C}_k}) \geq \text{Softmax}_k(f_k, f_{C_k}) \times \prod_{m \in \bar{C}_k} \sigma(f_k - f_m), \qquad (6)$$

where $\text{Softmax}_k(f_k, f_{C_k}) = \frac{e^{f_k}}{e^{f_k} + \sum_{m \in C_k} e^{f_m}}$. For simplicity of presentation, in the remainder of the paper we do not discuss these more general bounds further and focus only on the one-vs-each bound.
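As a quick sanity check of the bound in Eq. (4), the following short script (an illustrative NumPy sketch, not the paper's code) compares the one-vs-each product of sigmoids with the exact softmax probability for random scores; the assertion verifies that the bound is never violated.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10
f = rng.normal(size=K)
softmax = np.exp(f) / np.exp(f).sum()

def one_vs_each(f, k):
    # prod_{m != k} sigma(f_k - f_m), computed in log-space for stability
    diff = f[k] - np.delete(f, k)
    return np.exp(-np.logaddexp(0.0, -diff).sum())

for k in range(K):
    assert one_vs_each(f, k) <= softmax[k] + 1e-12   # Eq. (4) never violated
```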
The computationally useful aspect of the bound in Eq. (4) is that it factorizes into a product, where each factor depends only on a pair of parameters $(f_k, f_m)$. Crucially, this avoids the evaluation of the normalizing constant associated with the global probability in Eq. (2) and, as discussed in Section 3, it leads to scalable training using stochastic optimization that can deal with very large K. Furthermore, approximate maximum likelihood estimation based on the bound can be very accurate and, as shown in the next section, it is exact for the non-parametric estimation case.
The fact that the one-vs-each bound in (4) is a product of pairwise probabilities suggests that there is a connection with Bradley-Terry (BT) models (Bradley and Terry, 1952; Huang et al., 2006) for learning individual skills from paired comparisons, and with the associated multiclass classification systems obtained by combining binary classifiers, such as one-vs-rest and one-vs-one approaches (Huang et al., 2006). Our method differs from BT models, since we do not combine binary probabilistic models to a posteriori form a multiclass model. Instead, we wish to develop scalable approximate algorithms that can surrogate the training of multiclass softmax-based models by maximizing lower bounds on the exact likelihoods of these models.
# 2.2 Optimality of the bound for maximum likelihood estimation
Assume a set of observations (y_1, . . . , y_N) where each y_i ∈ {1, . . . , K}. The log likelihood of the data takes the form,

$$L(f) = \log \prod_{i=1}^{N} p(y_i) = \log \prod_{k=1}^{K} p(y = k)^{N_k}, \qquad (7)$$
where $f = (f_1, \ldots, f_K)$ and $N_k$ denotes the number of data points with value k. By substituting p(y = k) from Eq. (2) and then taking derivatives with respect to f, we arrive at the standard stationary conditions of the maximum likelihood solution,

$$\frac{N_k}{N} = \frac{e^{f_k}}{\sum_{m=1}^{K} e^{f_m}}, \qquad k = 1, \ldots, K. \qquad (8)$$

These stationary conditions are satisfied for $f_k = \log N_k + c$, where $c \in \mathbb{R}$ is an arbitrary constant. What is rather surprising is that the same solutions $f_k = \log N_k + c$ also satisfy the stationary conditions when maximizing a lower bound on the exact log likelihood obtained from the product of one-vs-each probabilities.
More precisely, by replacing p(y = k) with the bound from Eq. (4) we obtain a lower bound on the exact log likelihood,
$$F(f) = \log \prod_{k=1}^{K} \prod_{m \neq k} \left( \frac{e^{f_k}}{e^{f_k} + e^{f_m}} \right)^{N_k} = \sum_{k > m} \log P(f_k, f_m), \qquad (9)$$

where $P(f_k, f_m) = \left( \frac{e^{f_k}}{e^{f_k} + e^{f_m}} \right)^{N_k} \left( \frac{e^{f_m}}{e^{f_k} + e^{f_m}} \right)^{N_m}$ is a likelihood involving only the data of the pair of states (k, m), while there exist K(K − 1)/2 such possible pairs. If instead of maximizing the exact log likelihood from Eq. (7) we maximize the lower bound, we obtain the same parameter estimates.

Proposition 2. The maximum likelihood parameter estimates $f_k = \log N_k + c$, k = 1, . . . , K, for the exact log likelihood from Eq. (7) also globally maximize the lower bound from Eq. (9).
Proof. By computing the derivatives of F (f ) we obtain the following stationary conditions
$$K - 1 = \sum_{m \neq k} \frac{N_k + N_m}{N_k} \, \frac{e^{f_k}}{e^{f_k} + e^{f_m}}, \qquad k = 1, \ldots, K, \qquad (10)$$
which form a system of K non-linear equations over the unknowns $(f_1, \ldots, f_K)$. By substituting the values $f_k = \log N_k + c$ we can observe that all K equations are simultaneously satisfied, which means that these values are solutions. Furthermore, since F(f) is a concave function of f, we can conclude that the solutions $f_k = \log N_k + c$ globally maximize F(f).
Remark. Not only is F(f) globally maximized by setting $f_k = \log N_k + c$, but also each pairwise likelihood $P(f_k, f_m)$ in Eq. (9) is separately maximized by the same setting of parameters.
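The following small script (an illustrative sketch with hypothetical counts $N_k$) numerically verifies Proposition 2 by checking that $f_k = \log N_k + c$ satisfies the stationary conditions of Eq. (10).

```python
import numpy as np

N = np.array([5.0, 20.0, 75.0])   # hypothetical counts N_k
f = np.log(N) + 1.3               # f_k = log N_k + c with an arbitrary c
K = len(N)

for k in range(K):
    lhs = sum((N[k] + N[m]) / N[k] * np.exp(f[k]) / (np.exp(f[k]) + np.exp(f[m]))
              for m in range(K) if m != k)
    assert np.isclose(lhs, K - 1)  # stationary condition of Eq. (10) holds
```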
# 2.3 Comparison with Bouchard's bound
Bouchard (2007) proposed a related bound that we next analyze in terms of its ability to approximate exact maximum likelihood training in the non-parametric case; we then compare it against our method. Bouchard (2007) was motivated by the problem of applying variational Bayesian inference to multiclass classification, and he derived the following upper bound on the log-sum-exp function,
$$\log \sum_{m=1}^{K} e^{f_m} \leq \alpha + \sum_{m=1}^{K} \log \left( 1 + e^{f_m - \alpha} \right), \qquad (11)$$

where $\alpha \in \mathbb{R}$ is a variational parameter that needs to be optimized in order for the bound to become as tight as possible. The above induces a lower bound on the softmax probability p(y = k) from Eq. (2) that takes the form

$$p(y = k) \geq \frac{e^{f_k - \alpha}}{\prod_{m=1}^{K} \left( 1 + e^{f_m - \alpha} \right)}. \qquad (12)$$
This is not the same as Eq. (4), since there is no value of α for which the above bound reduces to our proposed one. For instance, if we set $\alpha = f_k$, then Bouchard's bound becomes half the one in Eq. (4) due to the extra term $1 + e^{f_k - f_k} = 2$ in the product in the denominator.¹ Furthermore, such a value for α may not be the optimal one, and in practice α must be chosen by minimizing the upper bound in Eq. (11). While such an optimization is a convex problem, it requires iterative optimization since there is in general no analytical solution for α. However, for the simple case where K = 2 we can analytically find the optimal α and the optimal f parameters. The following proposition carries out this analysis and provides a clear understanding of how Bouchard's bound behaves when applied to approximate maximum likelihood estimation.

Proposition 3. Assume that K = 2 and we approximate the probabilities p(y = 1) and p(y = 2) from (2) with the corresponding Bouchard bounds given by $\frac{e^{f_1 - \alpha}}{(1 + e^{f_1 - \alpha})(1 + e^{f_2 - \alpha})}$ and $\frac{e^{f_2 - \alpha}}{(1 + e^{f_1 - \alpha})(1 + e^{f_2 - \alpha})}$. These bounds are used to approximate the maximum likelihood solution by maximizing a bound $F(f_1, f_2, \alpha)$ which is globally maximized for

$$\alpha = \frac{f_1 + f_2}{2}, \qquad f_k = 2 \log N_k + c, \quad k = 1, 2. \qquad (13)$$
The proof of the above is given in the Appendix. Notice that the above estimates are biased, so that the probability of the most populated class (say y = 1, for which $N_1 > N_2$) is overestimated
¹Notice that the product in Eq. (4) excludes the value k, while Bouchard's bound includes it.
while the other probability is underestimated. This is due to the factor 2 that multiplies $\log N_1$ and $\log N_2$ in (13). Also notice that the solution $\alpha = (f_1 + f_2)/2$ is not a general trend, i.e. for K > 2 the optimal α is not the mean of the $f_k$'s. In such cases, approximate maximum likelihood estimation based on Bouchard's bound requires iterative optimization. Figure 1a shows some estimated softmax probabilities, using a dataset of 200 points each taking one out of ten values, where f is found by exact maximum likelihood, the proposed one-vs-each bound, and Bouchard's method. As expected, estimation based on the bound in Eq. (4) gives the exact probabilities, while Bouchard's bound tends to overestimate large probabilities and underestimate small ones.
Figure 1: (a) shows the probabilities estimated by exact softmax (blue bar), the one-vs-each approximation (red bar) and Bouchard's method (green bar). (b) shows the 5-class artificial data together with the decision boundaries found by exact softmax (blue line), one-vs-each (red line) and Bouchard's bound (green line). (c) shows the maximized (approximate) log likelihoods for the different approaches when applied to the data of panel (b) (see Section 3). Notice that the blue line in (c) is the exact maximized log likelihood, while the remaining lines correspond to lower bounds.
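To make the tightness comparison concrete, a minimal sketch such as the one below (illustrative; α is chosen here by a simple grid search rather than exact convex optimization) evaluates the exact softmax probability, the one-vs-each bound of Eq. (4), and Bouchard's bound of Eq. (12) on random scores.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=10)
k = 0
exact = np.exp(f[k]) / np.exp(f).sum()

ove = np.prod(1.0 / (1.0 + np.exp(-(f[k] - np.delete(f, k)))))   # Eq. (4)

alphas = np.linspace(f.min() - 5.0, f.max() + 5.0, 2001)
bouchard = max(np.exp(f[k] - a) / np.prod(1.0 + np.exp(f - a))   # Eq. (12)
               for a in alphas)

print(f"exact {exact:.4f}  one-vs-each {ove:.4f}  bouchard {bouchard:.4f}")
```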
# 3 Stochastic optimization for extreme classification
Here, we return to the general form of the softmax probabilities as defined by Eq. (1), where the score functions are indexed by the input x and parameterized by w. We consider a classification task where, given a training set $\{x_n, y_n\}_{n=1}^N$ with $y_n \in \{1, \ldots, K\}$, we wish to fit the parameters w by maximizing the log likelihood,

$$L = \log \prod_{n=1}^{N} \frac{e^{f_{y_n}(x_n; w)}}{\sum_{m=1}^{K} e^{f_m(x_n; w)}}. \qquad (14)$$
When the number of training instances is very large, the above maximization can be carried out by applying stochastic gradient descent (by minimizing −L), where we cycle over minibatches. However, this stochastic optimization procedure cannot deal with large values of K, because the normalizing constant in the softmax couples all score functions so that the log likelihood cannot be expressed as a sum across class labels. To overcome this, we can use the one-vs-each lower bound on the softmax probability from Eq. (4) and obtain the following lower bound on the previous log likelihood,

$$F = \log \prod_{n=1}^{N} \prod_{m \neq y_n} \frac{1}{1 + e^{-[f_{y_n}(x_n; w) - f_m(x_n; w)]}} = - \sum_{n=1}^{N} \sum_{m \neq y_n} \log \left( 1 + e^{-[f_{y_n}(x_n; w) - f_m(x_n; w)]} \right), \qquad (15)$$

which now consists of a sum over both data points and labels. Interestingly, the sum over the labels, m ≠ y_n, runs over all remaining classes that are different from the label y_n assigned to x_n. Each term in the sum is a logistic regression cost that depends on the pairwise score difference $f_{y_n}(x_n; w) - f_m(x_n; w)$ and encourages the n-th data point to get separated from the m-th remaining class. The above lower bound can be optimized by stochastic gradient descent by subsampling terms in the double sum in Eq. (15), thus resulting in a doubly stochastic approximation scheme. Next we further discuss the stochasticity associated with subsampling remaining classes. The gradient for the cost associated with a single training instance $(x_n, y_n)$ is
$$\nabla F_n = \sum_{m \neq y_n} \sigma \left( f_m(x_n; w) - f_{y_n}(x_n; w) \right) \left[ \nabla_w f_{y_n}(x_n; w) - \nabla_w f_m(x_n; w) \right]. \qquad (16)$$
This gradient consists of a weighted sum where the sigmoidal weights $\sigma(f_m(x_n; w) - f_{y_n}(x_n; w))$ quantify the contribution of the remaining classes to the whole gradient; the more a remaining class overlaps with y_n (given x_n), the higher its contribution is. A simple way to get an unbiased stochastic estimate of (16) is to randomly subsample a small subset of remaining classes from the set $\{m \mid m \neq y_n\}$. More advanced schemes could be based on importance sampling, where we introduce a proposal distribution $p_n(m)$ defined on the set $\{m \mid m \neq y_n\}$ that could favor selecting classes with large sigmoidal weights. While such more advanced schemes could reduce variance, they require prior knowledge (or on-the-fly learning) about how classes overlap with one another. Thus, in Section 4 we shall experiment only with the simple random subsampling approach and leave the above advanced schemes for future work.
To illustrate the above stochastic gradient descent algorithm, we simulated a two-dimensional data set of 200 instances, shown in Figure 1b, that belong to five classes. We consider a linear classification model where the score functions take the form $f_k(x_n; w) = w_k^T x_n$ and where the full set of parameters is $w = (w_1, \ldots, w_K)$. We use minibatches of size ten to approximate the sum over data points, and subsets of remaining classes of size one to approximate the sum over labels. Figure 1c shows the stochastic evolution of the approximate log likelihood (dashed red line), i.e. the unbiased subsampling-based approximation of (15), together with the maximized exact softmax log likelihood (blue line), the non-stochastically maximized approximate lower bound from (15) (red solid line), and Bouchard's method (green line). To apply Bouchard's method we construct a lower bound on the log likelihood by replacing each softmax probability with the bound from (12), where we also need to optimize a separate variational parameter $\alpha_n$ for each data point. As shown in Figure 1c, our method provides a tighter lower bound than Bouchard's method despite the fact that it does not contain any variational parameters. Also, Bouchard's method can become very slow when combined with stochastic gradient descent, since it requires tuning a separate variational parameter $\alpha_n$ for each training instance. Figure 1b also shows the decision boundaries discovered by the exact softmax, the one-vs-each bound, and Bouchard's bound. Finally, the actual parameter values found by maximizing the one-vs-each bound were remarkably close (although not identical) to the parameters found by the exact softmax.
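A minimal sketch of this doubly stochastic scheme for the linear model is given below; the minibatch size, the number of subsampled remaining classes, and the constant learning rate are illustrative choices, not the exact settings of the experiments. The (K − 1)/neg factor makes the class subsample an unbiased estimate of the full sum in Eq. (16).

```python
import numpy as np

def ove_sgd(X, y, K, epochs=5, batch=10, neg=1, lr=0.1, seed=0):
    """Doubly stochastic ascent on the one-vs-each bound, Eq. (15)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    W = np.zeros((K, D))                    # linear scores f_k(x) = w_k^T x
    scale = (K - 1) / neg                   # unbiased class-subsampling factor
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(N), max(1, N // batch)):
            grad = np.zeros_like(W)
            for n in idx:
                yn = y[n]
                others = rng.choice(
                    [m for m in range(K) if m != yn], size=neg, replace=False)
                for m in others:
                    # sigmoidal weight sigma(f_m - f_yn) from Eq. (16)
                    s = 1.0 / (1.0 + np.exp(-(W[m] - W[yn]) @ X[n]))
                    grad[yn] += scale * s * X[n]
                    grad[m] -= scale * s * X[n]
            W += lr * grad / len(idx)        # ascend the lower bound
    return W
```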
# 4 Experiments
# 4.1 Toy example in large scale non-parametric estimation
Here, we illustrate the ability to stochastically maximize the bound in Eq. (9) for the simple non-parametric estimation case. In this case, we can also maximize the bound based on the analytic formulas, and therefore we can test how well the stochastic algorithm approximates the optimal/known solution. We consider a data set of $N = 10^6$ instances, each taking one out of $K = 10^4$ possible categorical values. The data were generated from a distribution $p(k) \propto u_k^2$, where each $u_k$ was randomly chosen in [0, 1]. The probabilities estimated based on the analytic formulas are shown in Figure 2a. To stochastically estimate these probabilities we follow the doubly stochastic framework of Section 3, so that we subsample data instances with minibatch size b = 100 and, for each instance, we subsample 10 remaining categorical values. We use a learning rate initialized to 0.5/b (decreased by a factor of 0.9 after each epoch) and performed $2 \times 10^5$ iterations. Figure 2b shows the final values of the estimated probabilities, while Figure 2c shows the evolution of the estimation error during the optimization iterations. We can observe that the algorithm performs well and exhibits a typical stochastic approximation convergence.
Figure 2: (a) shows the optimally estimated probabilities, which have been sorted for visualization purposes. (b) shows the corresponding probabilities estimated by stochastic optimization. (c) shows the absolute norm of the vector of differences between the exact estimates and the stochastic estimates.
# 4.2 Classification

Small scale classification comparisons. Here, we wish to investigate whether the proposed lower bound on the softmax is a good surrogate for exact softmax training in classification. More precisely, we wish to compare the parameter estimates obtained with the one-vs-each bound to the estimates obtained by exact softmax training. To quantify closeness we use the normalized absolute norm
$$\text{norm} = \frac{|w_{\text{softmax}} - w_*|}{|w_{\text{softmax}}|}, \qquad (17)$$

where $w_{\text{softmax}}$ denotes the parameters obtained by exact softmax training and $w_*$ denotes the estimates obtained by approximate training. Further, we will also report predictive performance measured by classification error and negative log predictive density (nlpd), averaged across test data,
$$\text{error} = \frac{1}{N_{\text{test}}} \sum_{i=1}^{N_{\text{test}}} I(y_i \neq t_i), \qquad \text{nlpd} = - \frac{1}{N_{\text{test}}} \sum_{i=1}^{N_{\text{test}}} \log p(t_i | x_i), \qquad (18)$$
where $t_i$ denotes the true label of a test point and $y_i$ the predicted one. We trained the linear multiclass model of Section 3 with the following alternative methods: exact softmax training (SOFT), the one-vs-each bound (OVE), the stochastically optimized one-vs-each bound (OVE-SGD), and Bouchard's bound (BOUCHARD). For all approaches, the associated cost function was maximized together with an added regularization penalty term, $-\frac{1}{2} \lambda \|w\|^2$, which ensures that the global maximum of the cost function is achieved for finite w. Since we want to investigate how well we surrogate exact softmax training, we used the same fixed value λ = 1 in all experiments. We considered three small scale multiclass classification datasets: MNIST², 20NEWS³ and BIBTEX (Katakis et al., 2008); see Table 1 for details. Notice that BIBTEX is originally a multi-label classification dataset (Bhatia et al., 2015), where each example may have more than one label. Here, we maintained only a single label for each data point in order to apply standard multiclass classification. The maintained label was the first label appearing in each data entry in the repository files⁴ from which we obtained the data.
Figure 3 displays the convergence of the lower bounds (and of the exact softmax cost) for all methods. Recall that the methods SOFT, OVE and BOUCHARD are non-stochastic, and therefore their optimization can be carried out by standard gradient descent. Notice that in all three datasets the one-vs-each bound gets much closer to the exact softmax cost than Bouchard's bound. Thus, OVE tends to give a tighter bound despite the fact that it does not contain any variational parameters, while BOUCHARD has N extra variational parameters, i.e. as many as the training instances. The application of the OVE-SGD method (the stochastic version of OVE) is based on a doubly stochastic scheme where we subsample minibatches of size 200 and subsample remaining classes of size one. We can observe that OVE-SGD is able to stochastically approach its maximum value, which corresponds to OVE.
Table 2 shows the parameter closeness score from Eq. (17) as well as the classification predictive scores. We can observe that OVE and OVE-SGD provide parameters closer to those of SOFT than the parameters provided by BOUCHARD. Also, the predictive scores for OVE and OVE-SGD are similar to SOFT, although they tend to be slightly worse. Interestingly, BOUCHARD gives the best classification error, even better than exact softmax training, but at the same time it always gives the worst nlpd, which suggests sensitivity to overfitting. However, recall that the regularization parameter λ was fixed to the value one and was not optimized separately for each method using cross validation. Also notice that BOUCHARD cannot easily be scaled up (with stochastic optimization) to massive datasets, since it introduces an extra variational parameter for each training instance.
Large scale classification. Here, we consider AMAZONCAT-13K (see footnote 4), which is a large scale classification dataset. This dataset is originally multi-labelled (Bhatia et al., 2015) and here we maintained only a single label, as done for the BIBTEX dataset, in order to apply standard multiclass classification. This dataset is also highly imbalanced, since there are about 15 classes containing half of the training instances while there are many classes with very few (or just a single) training instances.
²http://yann.lecun.com/exdb/mnist
³http://qwone.com/~jason/20Newsgroups/
⁴http://research.microsoft.com/en-us/um/people/manik/downloads/XC/XMLRepository.html
Table 1: Summaries of the classification datasets.

| Name | Dimensionality | Classes | Training examples | Test examples |
|---|---|---|---|---|
| MNIST | 784 | 10 | 60000 | 10000 |
| 20NEWS | 61188 | 20 | 11269 | 7505 |
| BIBTEX | 1836 | 148 | 4880 | 2515 |
| AMAZONCAT-13K | 203882 | 2919 | 1186239 | 306759 |
Table 2: Score measures for the small scale classification datasets.

| Dataset | SOFT (error, nlpd) | BOUCHARD (norm, error, nlpd) | OVE (norm, error, nlpd) | OVE-SGD (norm, error, nlpd) |
|---|---|---|---|---|
| MNIST | (0.074, 0.271) | (0.64, 0.073, 0.333) | (0.50, 0.082, 0.287) | (0.53, 0.080, 0.278) |
| 20NEWS | (0.272, 1.263) | (0.65, 0.249, 1.337) | (0.05, 0.276, 1.297) | (0.14, 0.276, 1.312) |
| BIBTEX | (0.622, 2.793) | (0.25, 0.621, 2.955) | (0.09, 0.636, 2.888) | (0.10, 0.633, 2.875) |
Figure 3: (a) shows the evolution of the lower bound values for MNIST, (b) for 20NEWS and (c) for BIBTEX. For clearer visualization the bounds of the stochastic OVE-SGD have been smoothed using a rolling window of 400 previous values. (d) shows the evolution of the OVE-SGD lower bound (scaled to correspond to a single data point) on the large scale AMAZONCAT-13K dataset. Here, the plotted values have also been smoothed using a rolling window of size 4000 and then thinned by a factor of 5.
Further, notice that in this large dataset the number of parameters we need to estimate for the linear classification model is very large: $K \times (D + 1) = 2919 \times 203883$ parameters, where the plus one accounts for the biases. All methods apart from OVE-SGD are impractically slow on this massive dataset, and therefore we consider OVE-SGD, which is scalable.
We applied OVE-SGD where at each stochastic gradient update we consider a single training instance (i.e., the minibatch size was one), and for that instance we randomly select five remaining classes. This leads to sparse parameter updates, where the score function parameters of only six classes (the class of the current training instance plus the five remaining ones) are updated at each iteration. We used a very small learning rate with value $10^{-8}$ and performed five epochs across the full dataset, that is, we performed in total $5 \times 1186239$ stochastic gradient updates. After each epoch we halve the value of the learning rate before the next epoch starts. Taking into account also the sparsity of the input vectors, each iteration is very fast and full training completes in just 26 minutes on a stand-alone PC. The evolution of the variational lower bound that indicates convergence is shown in Figure 3d. Finally, the classification error on test data was 53.11%, which is significantly better than random guessing or than a method that always decides the most populated class (in AMAZONCAT-13K the most populated class occupies 19% of the data, so the error of that method is around 79%).
# 5 Discussion
We have presented the one-vs-each lower bound on softmax probabilities and analyzed its theoretical properties. This bound is the most extreme case of a full family of hierarchically ordered bounds. We have explored the ability of the bound to perform parameter estimation through stochastic optimization in models having a large number of categorical symbols, and we have demonstrated this ability on classification problems.
There are several directions for future research. Firstly, it is worth investigating the usefulness of the bound in applications other than classification, such as learning word embeddings in natural
language processing and for training recommendation systems. Another interesting direction is to consider the bound not for point estimation, as done in this paper, but for Bayesian estimation using variational inference.
# Acknowledgments
We thank the reviewers for insightful comments. We would like also to thank Francisco J. R. Ruiz for useful discussions and David Blei for suggesting the name one-vs-each for the proposed method.
# A Proof of Proposition 3
Here we re-state and prove Proposition 3.
Proposition 3. Assume that K = 2 and we approximate the probabilities p(y = 1) and p(y = 2) from (2) with the corresponding Bouchard bounds given by $\frac{e^{f_1 - \alpha}}{(1 + e^{f_1 - \alpha})(1 + e^{f_2 - \alpha})}$ and $\frac{e^{f_2 - \alpha}}{(1 + e^{f_1 - \alpha})(1 + e^{f_2 - \alpha})}$. These bounds are used to approximate the maximum likelihood solution for $(f_1, f_2)$ by maximizing the lower bound
$$F(f_1, f_2, \alpha) = \log \frac{e^{N_1 (f_1 - \alpha) + N_2 (f_2 - \alpha)}}{\left[ (1 + e^{f_1 - \alpha})(1 + e^{f_2 - \alpha}) \right]^{N_1 + N_2}}, \qquad (19)$$

obtained by replacing p(y = 1) and p(y = 2) in the exact log likelihood with Bouchard's bounds. Then, the global maximizer of $F(f_1, f_2, \alpha)$ is such that

$$\alpha = \frac{f_1 + f_2}{2}, \qquad f_k = 2 \log N_k + c, \quad k = 1, 2. \qquad (20)$$
Proof. The lower bound is written as
$$N_1 (f_1 - \alpha) + N_2 (f_2 - \alpha) - (N_1 + N_2) \left[ \log(1 + e^{f_1 - \alpha}) + \log(1 + e^{f_2 - \alpha}) \right].$$
We will first maximize this quantity wrt α. For that it suffices to minimize the upper bound on the following log-sum-exp function

$$\alpha + \log(1 + e^{f_1 - \alpha}) + \log(1 + e^{f_2 - \alpha}),$$
which is a convex function of α. By taking the derivative wrt α and setting it to zero, we obtain the stationary condition

$$\frac{e^{f_1 - \alpha}}{1 + e^{f_1 - \alpha}} + \frac{e^{f_2 - \alpha}}{1 + e^{f_2 - \alpha}} = 1.$$
Clearly, the value of α that satisfies this condition is $\alpha = \frac{f_1 + f_2}{2}$. Now, if we substitute this value back into the initial bound we have

$$N_1 \frac{f_1 - f_2}{2} + N_2 \frac{f_2 - f_1}{2} - (N_1 + N_2) \left[ \log \left( 1 + e^{\frac{f_1 - f_2}{2}} \right) + \log \left( 1 + e^{\frac{f_2 - f_1}{2}} \right) \right],$$
which is concave wrt f1 and f2. Then, by taking derivatives wrt f1 and f2 we obtain the conditions
$$\frac{N_1 - N_2}{2} = \frac{N_1 + N_2}{2} \left[ \frac{e^{\frac{f_1 - f_2}{2}}}{1 + e^{\frac{f_1 - f_2}{2}}} - \frac{e^{\frac{f_2 - f_1}{2}}}{1 + e^{\frac{f_2 - f_1}{2}}} \right], \qquad \frac{N_2 - N_1}{2} = \frac{N_1 + N_2}{2} \left[ \frac{e^{\frac{f_2 - f_1}{2}}}{1 + e^{\frac{f_2 - f_1}{2}}} - \frac{e^{\frac{f_1 - f_2}{2}}}{1 + e^{\frac{f_1 - f_2}{2}}} \right].$$
Now we can observe that these conditions are satisfied by $f_1 = 2 \log N_1 + c$ and $f_2 = 2 \log N_2 + c$, which gives the global maximizer since $F(f_1, f_2, \alpha)$ is concave.
# References
Bengio, Y. and Sénécal, J.-S. (2003). Quick training of probabilistic neural nets by importance sampling. In Proceedings of the Conference on Artificial Intelligence and Statistics (AISTATS).
Bhatia, K., Jain, H., Kar, P., Varma, M., and Jain, P. (2015). Sparse local embeddings for extreme multi-label classification. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 730–738. Curran Associates, Inc.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
Bohning, D. (1992). Multinomial logistic regression algorithm. Annals of the Institute of Statistical Mathematics, 44:197–200.
Bouchard, G. (2007). Efficient bounds for the softmax function and applications to approximate inference in hybrid models. Technical report.
Bradley, R. A. and Terry, M. E. (1952). Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324â345.
Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R., and Makhoul, J. (2014). Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1370–1380, Baltimore, Maryland. Association for Computational Linguistics.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. Book in preparation for MIT Press.
Gopal, S. and Yang, Y. (2013). Distributed training of large-scale logistic models. In Dasgupta, S. and McAllester, D., editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 289–297. JMLR Workshop and Conference Proceedings.
Huang, T.-K., Weng, R. C., and Lin, C.-J. (2006). Generalized Bradley-Terry models and multi-class probability estimates. Journal of Machine Learning Research, 7:85–115.
Ji, S., Vishwanathan, S. V. N., Satish, N., Anderson, M. J., and Dubey, P. (2015). Blackout: Speeding up recurrent neural network language models with very large vocabularies.
Katakis, I., Tsoumakas, G., and Vlahavas, I. (2008). Multilabel text classification for automated tag suggestion. In Proceedings of the ECML/PKDD-08 Workshop on Discovery Challenge.
Khan, M. E., Mohamed, S., Marlin, B. M., and Murphy, K. P. (2012). A stick-breaking likelihood for categorical data analysis with latent Gaussian models. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2012, La Palma, Canary Islands, April 21-23, 2012, pages 610–618.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc.

Mnih, A. and Teh, Y. W. (2012). A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, pages 1751–1758.

Morin, F. and Bengio, Y. (2005). Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 246–252. Citeseer.
Paquet, U., Koenigstein, N., and Winther, O. (2012). Scalable Bayesian modelling of paired symbols. CoRR, abs/1409.2824.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.
Vijayanarasimhan, S., Shlens, J., Monga, R., and Yagnik, J. (2014). Deep networks with large output spaces. CoRR, abs/1412.7479.
1609.07061 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | 6 1 0 2
p e S 2 2 ] E N . s c [
1 v 1 6 0 7 0 . 9 0 6 1 : v i X r a
# Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
Itay Hubara* Department of Electrical Engineering Technion - Israel Institute of Technology Haifa, Israel
itayh@campuse.technion.ac.il
Matthieu Courbariaux* Department of Computer Science and Department of Statistics Université de Montréal Montréal, Canada
matthieu.courbariaux@gmail.com
# Daniel Soudry Department of Statistics Columbia University New York, USA
daniel.soudry@gmail.com
Ran El-Yaniv Department of Computer Science Technion - Israel Institute of Technology Haifa, Israel
rani@cs.technion.ac.il
Yoshua Bengio Department of Computer Science and Department of Statistics Université de Montréal Montréal, Canada
yoshua.umontreal@gmail.com
*Indicates first authors.
Editor:
# Abstract
We introduce a method to train Quantized Neural Networks (QNNs), neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
Keywords: Deep Learning, Neural Networks Compression, Energy Efficient Neural Networks, Computer Vision, Language Models.
# 1. Introduction
Deep Neural Networks (DNNs) have substantially pushed Artificial Intelligence (AI) limits in a wide range of tasks, including but not limited to object recognition from images (Krizhevsky et al., 2012; Szegedy et al., 2014), speech recognition (Hinton et al., 2012; Sainath et al., 2013), statistical machine translation (Devlin et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015), Atari and Go games (Mnih et al., 2015; Silver et al., 2016), and even computer generation of abstract art (Mordvintsev et al., 2015).
Training or even just using neural network (NN) algorithms on conventional general-purpose digital hardware (Von Neumann architecture) has been found highly inefficient due to the massive amount of multiply-accumulate operations (MACs) required to compute the weighted sums of the neurons' inputs. Today, DNNs are almost exclusively trained on one or many very fast and power-hungry Graphic Processing Units (GPUs) (Coates et al., 2013). As a result, it is often a challenge to run DNNs on target low-power devices, and substantial research efforts are invested in speeding up DNNs at run-time on both general-purpose (Vanhoucke et al., 2011; Gong et al., 2014; Romero et al., 2014; Han et al., 2015b) and specialized computer hardware (Farabet et al., 2011a,b; Pham et al., 2012; Chen et al., 2014a,b; Esser et al., 2015).
The most common approach is to compress a trained (full precision) network. HashedNets (Chen et al., 2015) reduce model sizes by using a hash function to randomly group connection weights and force them to share a single parameter value. Gong et al. (2014) compressed deep convnets using vector quantization, which resulted in only a 1% accuracy loss. However, both methods focused only on the fully connected layers. A recent work by Han and Dally (2015) successfully pruned several state-of-the-art large scale networks and showed that the number of parameters could be reduced by an order of magnitude.
Recent works have shown that more computationally efficient DNNs can be constructed by quantizing some of the parameters during the training phase. In most cases, DNNs are trained by minimizing some error function using Back-Propagation (BP) or related gradient descent methods. However, such an approach cannot be directly applied if the weights are restricted to binary values. Soudry et al. (2014) used a variational Bayesian approach with Mean-Field and Central Limit approximations to calculate the posterior distribution of the weights (the probability of each weight being +1 or -1). During the inference stage (test phase), their method samples one binary network from this distribution and uses it to predict the targets of the test set (more than one binary network can also be used). Courbariaux et al. (2015b) similarly used two sets of weights, real-valued and binary. They, however, updated the real-valued version of the weights by using gradients computed by applying forward and backward propagation with the set of binary weights (which was obtained by quantizing the real-valued weights to +1 and -1).
This study proposes a more advanced technique, referred to as Quantized Neural Network (QNN), for quantizing the neurons and weights during inference and training. In such networks, all MAC operations can be replaced with XNOR and population count (i.e., counting the number of ones in the binary number) operations. This is especially useful in
QNNs with extremely low precision, for example when only 1 bit is used per weight and activation, leading to a Binarized Neural Network (BNN). The proposed method is particularly beneficial for implementing large convolutional networks whose neuron-to-weight ratio is very large.
This paper makes the following contributions:
• We introduce a method to train Quantized Neural Networks (QNNs), neural networks with low precision weights and activations, at run-time, and when computing the parameter gradients at train-time. In the extreme case QNNs use only 1 bit per weight and activation (i.e., a Binarized NN; see Section 2).

• We conduct two sets of experiments, each implemented on a different framework, namely Torch7 and Theano, which show that it is possible to train BNNs on MNIST, CIFAR-10 and SVHN and achieve near state-of-the-art results (see Section 4). Moreover, we report results on the challenging ImageNet dataset using binary weights/activations as well as a quantized version of it (more than 1 bit).

• We present preliminary results on quantized gradients and show that it is possible to use only 6 bits with only a small accuracy degradation.

• We present results for the Penn Treebank dataset using language models (vanilla RNNs and LSTMs) and show that with 4-bit weights and activations Recurrent QNNs achieve accuracies similar to their 32-bit floating point counterparts.

• We show that during the forward pass (both at run-time and train-time), QNNs drastically reduce memory consumption (size and number of accesses), and replace most arithmetic operations with bit-wise operations. A substantial increase in power efficiency is expected as a result (see Section 5). Moreover, a binarized CNN can lead to binary convolution kernel repetitions; we argue that dedicated hardware could reduce the time complexity by 60%.

• Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy (see Section 6).

• The code for training and applying our BNNs is available on-line (in both the Theano¹ and the Torch² frameworks).
# 2. Binarized Neural Networks
In this section, we detail our binarization function, show how we use it to compute the parameter gradients, and how we backpropagate through it.
# 1https://github.com/MatthieuCourbariaux/BinaryNet 2https://github.com/itayhubara/BinaryNet
# 2.1 Deterministic vs Stochastic Binarization
When training a BNN, we constrain both the weights and the activations to either +1 or −1. Those two values are very advantageous from a hardware perspective, as we explain in Section 6. In order to transform the real-valued variables into those two values, we use two different binarization functions, as proposed by Courbariaux et al. (2015a). The first binarization function is deterministic:

$$x^b = \text{Sign}(x) = \begin{cases} +1 & \text{if } x \geq 0, \\ -1 & \text{otherwise}, \end{cases} \qquad (1)$$
where $x^b$ is the binarized variable (weight or activation) and x the real-valued variable. It is very straightforward to implement and works quite well in practice. The second binarization function is stochastic:

$$x^b = \begin{cases} +1 & \text{with probability } p = \sigma(x), \\ -1 & \text{with probability } 1 - p, \end{cases} \qquad (2)$$
where σ is the "hard sigmoid" function:

$$\sigma(x) = \text{clip}\left( \frac{x + 1}{2}, 0, 1 \right) = \max\left( 0, \min\left( 1, \frac{x + 1}{2} \right) \right). \qquad (3)$$
This stochastic binarization is more appealing theoretically (see Section 4) than the sign function, but somewhat harder to implement as it requires the hardware to generate random bits when quantizing (Torii et al., 2016). As a result, we mostly use the deterministic binarization function (i.e., the sign function), with the exception of activations at train-time in some of our experiments.
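The two binarization functions and the hard sigmoid can be summarized in a few lines; the sketch below is illustrative NumPy, not the released Theano/Torch implementations.

```python
import numpy as np

def hard_sigmoid(x):
    # Eq. (3): clip((x + 1) / 2, 0, 1)
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def binarize_deterministic(x):
    # Eq. (1): sign function mapping to {-1, +1}
    return np.where(x >= 0, 1.0, -1.0)

def binarize_stochastic(x, rng=None):
    # Eq. (2): +1 with probability p = sigma(x), -1 otherwise
    if rng is None:
        rng = np.random.default_rng(0)
    return np.where(rng.random(np.shape(x)) < hard_sigmoid(x), 1.0, -1.0)
```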
# 2.2 Gradient Computation and Accumulation
Although our BNN training method utilizes binary weights and activations to compute the parameter gradients, the real-valued gradients of the weights are accumulated in real-valued variables, as per Algorithm 1. Real-valued weights are likely required for Stochastic Gradient Descent (SGD) to work at all. SGD explores the space of parameters in small and noisy steps, and that noise is averaged out by the stochastic gradient contributions accumulated in each weight. Therefore, it is important to maintain sufficient resolution for these accumulators, which at first glance suggests that high precision is absolutely required.
# 2.3 Propagating Gradients Through Discretization
The derivative of the sign function is zero almost everywhere, making it apparently in- compatible with back-propagation, since the exact gradients of the cost with respect to the
4
Quantized Neural Networks
quantities before the discretization (pre-activations or weights) are zero. Note that this limitation remains even if stochastic quantization is used. Bengio (2013) studied the question of estimating or propagating gradients through stochastic discrete neurons. He found that the fastest training was obtained when using the "straight-through estimator," previously introduced in Hinton's lectures (Hinton, 2012). We follow a similar approach but use the version of the straight-through estimator that takes into account the saturation effect, and we use deterministic rather than stochastic sampling of the bit. Consider the sign function quantization
q = Sign(r),
and assume that an estimator $g_q$ of the gradient $\partial C / \partial q$ has been obtained (with the straight-through estimator when needed). Then, our straight-through estimator of $\partial C / \partial r$ is simply

$$g_r = g_q 1_{|r| \leq 1}. \qquad (4)$$
Note that this preserves the gradient information and cancels the gradient when r is too large. Not cancelling the gradient when r is too large significantly worsens performance. To better understand why the straight-through estimator works well, consider the stochastic binarization scheme in Eq. (2) and rewrite $\sigma(r) = (HT(r) + 1)/2$, where HT(r) is the well-known "hard tanh",
$$HT(r) = \begin{cases} +1 & r > 1, \\ r & r \in [-1, 1], \\ -1 & r < -1. \end{cases} \qquad (5)$$
In this case the input to the next layer has the following form,

$$W^b h^b(r) = W^b HT(r) + n(r),$$

where we use the fact that HT(r) is the expectation over $h^b(r)$ (see Eqs. (2) and (5)), and define n(r) as binarization noise with mean equal to zero. When the layer is wide, we expect the deterministic mean term HT(r) to dominate, because the noise term n(r) is a summation over many independent binarizations from all the neurons in the previous layer. Thus, we argue that the binarization noise n(r) can be ignored when performing differentiation in the backward propagation stage. Therefore, we replace $\partial h^b(r) / \partial r$ (which cannot be computed) with
$$\frac{\partial HT(r)}{\partial r} = \begin{cases} 0 & r > 1, \\ 1 & r \in [-1, 1], \\ 0 & r < -1, \end{cases} \qquad (6)$$
which is exactly the straight-through estimator defined in Eq. (4). The use of this straight-through estimator is illustrated in Algorithm 1.
A similar binarization process was applied for weights, in which we combine two ingredients:

• Project each real-valued weight to [-1, 1], i.e., clip the weights during training, as per Algorithm 1. The real-valued weights would otherwise grow very large without any impact on the binary weights.
⢠When using a weight wr, quantize it using wb = Sign(wr).
Projecting the weights to [-1,1] is consistent with the gradient cancelling when $|w_r| > 1$, according to Eq. (4).
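Combining the two ingredients, one training step treats the weights roughly as in the sketch below (our illustration; the names are ours):

```python
import numpy as np

def weight_step(w_r, g_wb, lr):
    """One real-valued weight update, in the spirit of Algorithm 1. g_wb is
    the gradient computed with the binary weights w_b = Sign(w_r)."""
    w_r = w_r - lr * g_wb           # accumulate the real-valued gradient
    return np.clip(w_r, -1.0, 1.0)  # project back to [-1, 1]
```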
# 2.4 Shift-based Batch Normalization
Batch Normalization (BN) (Ioffe and Szegedy, 2015) accelerates the training and reduces the overall impact of the weight scale (Courbariaux et al., 2015a). The normalization procedure may also help to regularize the model. However, at train-time, BN requires many multiplications (calculating the standard deviation and dividing by it, namely, dividing by the running variance, which is the weighted mean of the training set activation variance). Although the number of scaling calculations is the same as the number of neurons, in the case of ConvNets this number is quite large. For example, in the CIFAR-10 dataset (using our architecture), the first convolution layer, consisting of only 128 × 3 × 3 filter masks, converts an image of size 3 × 32 × 32 to size 128 × 28 × 28, which is almost two orders of magnitude larger than the number of weights (87.1 times, to be exact). To achieve the results that BN would obtain, we use a shift-based batch normalization (SBN) technique, presented in Algorithm 2. SBN approximates BN almost without multiplications. Define AP2(z) as the approximate power-of-2 of z (i.e., the index of the most significant bit (MSB)), and <<>> as both left and right binary shift. SBN replaces almost all multiplications with power-of-2 approximation and shift operations:
x × y ≈ x <<>> AP2(y). (7)
The only operation which is not a binary shift or an add is the inverse square root (see the normalization operation in Algorithm 2). From the early work of Lomont (2003) we know that the inverse-square-root operation can be applied with approximately the same complexity as multiplication. There are also faster methods, which involve lookup-table tricks that typically obtain lower accuracy (this may not be an issue, since our procedure already adds a lot of noise). However, the number of values on which we apply the inverse-square-root operation is rather small, since it is done after calculating the variance, i.e., after averaging (for a more precise calculation, see the BN analysis in Lin et al. (2015b)). Furthermore, the size of the standard deviation vectors is relatively small. For example, these values make up only 0.3% of the network size (i.e., the number of learnable parameters) in the CIFAR-10 network we used in our experiments.
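The following sketch (ours; a floating-point multiplication by a power of two stands in for the hardware shift) illustrates the AP2 approximation behind Eq. (7):

```python
import numpy as np

def ap2(x):
    # Approximate power-of-2 of x: sign(x) * 2**round(log2|x|), i.e. the value
    # encoded by the index of the most significant bit. Assumes x != 0.
    return np.sign(x) * 2.0 ** np.round(np.log2(np.abs(x)))

def shift_mul(x, y):
    # Eq. (7): x * y is approximated by x <<>> AP2(y). In floating point this
    # is an exact multiplication by a power of two; in fixed-point hardware it
    # is a single left/right binary shift.
    return x * ap2(y)
```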
In our experiments we observed no loss in accuracy when using the shift-based BN algorithm instead of the vanilla BN algorithm.
# 2.5 Shift-based AdaMax
The ADAM learning method (Kingma and Ba, 2014b) also reduces the impact of the weight scale. Since ADAM requires many multiplications, we suggest using instead the shift-based AdaMax outlined in Algorithm 3. In our experiments we observed no loss in accuracy when using the shift-based AdaMax algorithm instead of the vanilla ADAM algorithm.
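A rough sketch of one such step follows (ours; the exact scaling in Algorithm 3 folds the (1 − β1) factor into a shift as well, and the small epsilon guard is our addition):

```python
import numpy as np

def ap2(x):
    # Approximate power-of-2, as in the Section 2.4 sketch (assumes x != 0).
    return np.sign(x) * 2.0 ** np.round(np.log2(np.abs(x)))

def shift_adamax_step(theta, g, m, v, alpha=2**-10, b1=1 - 2**-3, b2=1 - 2**-10):
    # One step in the spirit of Algorithm 3: moments as in AdaMax, with the
    # division by v replaced by a power-of-2 approximation (a binary shift).
    m = b1 * m + (1 - b1) * g            # biased 1st moment estimate
    v = np.maximum(b2 * v, np.abs(g))    # max-based (infinity-norm) 2nd moment
    theta = theta - alpha * m * ap2(1.0 / (v + 1e-8))
    return theta, m, v
```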
3 Hardware implementation of AP2 is as simple as extracting the index of the most significant bit from the number's binary representation.
Algorithm 1 Training a BNN. C is the cost function for the minibatch, λ the learning rate decay factor, and L the number of layers. (◦) stands for element-wise multiplication. The function Binarize(·) specifies how to (stochastically or deterministically) binarize the activations and weights, and Clip(·) how to clip the weights. BatchNorm() specifies how to batch-normalize the activations, using either batch normalization (Ioffe and Szegedy, 2015) or its shift-based variant we describe in Algorithm 2. BackBatchNorm() specifies how to backpropagate through the normalization. Update() specifies how to update the parameters when their gradients are known, using either ADAM (Kingma and Ba, 2014b) or the shift-based AdaMax we describe in Algorithm 3. Require: a minibatch of inputs and targets (a_0, a*), previous weights W, previous BatchNorm parameters θ, weight initialization coefficients from (Glorot and Bengio, 2010) γ, and previous learning rate η.
Ensure: updated weights W^{t+1}, updated BatchNorm parameters θ^{t+1} and updated learning rate η^{t+1}.
{1. Computing the parameter gradients:}
{1.1. Forward propagation:}
for k = 1 to L do
  W_k^b ← Binarize(W_k)
  s_k ← a_{k-1}^b W_k^b
  a_k ← BatchNorm(s_k, θ_k)
  if k < L then
    a_k^b ← Binarize(a_k)
  end if
end for
{1.2. Backward propagation:}
{Note that the gradients are not binary.}
Compute g_{a_L} = ∂C/∂a_L knowing a_L and a*
for k = L to 1 do
  if k < L then
    g_{a_k} ← g_{a_k^b} ◦ 1_{|a_k| ≤ 1}
  end if
  (g_{s_k}, g_{θ_k}) ← BackBatchNorm(g_{a_k}, s_k, θ_k)
  g_{a_{k-1}^b} ← g_{s_k} W_k^b
  g_{W_k^b} ← g_{s_k}^T a_{k-1}^b
end for
{2. Accumulating the parameter gradients:}
for k = 1 to L do
  θ_k^{t+1} ← Update(θ_k, η, g_{θ_k})
  W_k^{t+1} ← Clip(Update(W_k, γ_k η, g_{W_k^b}), -1, 1)
end for
η^{t+1} ← λη
Algorithm 2 Shift-based Batch Normalizing Transform, applied to activation x over a mini-batch. AP2(x) = sign(x) × 2^{round(log2 |x|)} is the approximate power-of-2, and <<>> stands for both left and right binary shift.
Require: Values of x over a mini-batch: B = {x_{1...m}}; parameters to be learned: γ, β.
Ensure: {y_i = BN(x_i, γ, β)}
μ_B ← (1/m) Σ_{i=1}^{m} x_i   {mini-batch mean}
C(x_i) ← (x_i − μ_B)   {centered input}
σ_B^2 ← (1/m) Σ_{i=1}^{m} (C(x_i) <<>> AP2(C(x_i)))   {approximate variance}
x̂_i ← C(x_i) <<>> AP2((√(σ_B^2 + ε))^{-1})   {normalize}
y_i ← AP2(γ) <<>> x̂_i   {scale and shift}
Algorithm 3 Shift-based AdaMax learning rule (Kingma and Ba, 2014b). g_t^2 indicates the element-wise square g_t ◦ g_t. Good default settings are α = 2^{-10}, 1 − β_1 = 2^{-3}, 1 − β_2 = 2^{-10}. All operations on vectors are element-wise. With β_1^t and β_2^t we denote β_1 and β_2 to the power t.
Require: Previous parameters θ_{t-1}, their gradient g_t, and learning rate α.
Ensure: Updated parameters θ_t
{Biased 1st and 2nd raw moment estimates:}
m_t ← β_1 · m_{t-1} + (1 − β_1) · g_t
v_t ← max(β_2 · v_{t-1}, |g_t|)
{Updated parameters:}
θ_t ← θ_{t-1} − (α <<>> (1 − β_1)) · m̂_t <<>> v_t^{-1}
# 2.6 First Layer
In a BNN, only the binarized values of the weights and activations are used in all calculations. As the output of one layer is the input of the next, the inputs of all the layers are binary, with the exception of the first layer. However, we do not believe this to be a major issue. First, in computer vision, the input representation typically has far fewer channels (e.g., red, green and blue) than internal representations (e.g., 512). Consequently, the first layer of a ConvNet is often the smallest convolution layer, both in terms of parameters and computations (Szegedy et al., 2014). Second, it is relatively easy to handle continuous-valued inputs as fixed-point numbers with m bits of precision. For example, in the common case of 8-bit fixed-point inputs:
$s = x \cdot w^b = \sum_{n=1}^{8} 2^{n-1} (x^n \cdot w^b),$ (8)
where x is a vector of 1024 8-bit inputs, $x^8_1$ is the most significant bit of the first input, $w^b$ is a vector of 1024 1-bit weights, and s is the resulting weighted sum. This method is used in Algorithm 4.
Algorithm 4 Running a BNN with L layers.
Require: 8-bit input vector a_0, binary weights W^b, and BatchNorm parameters θ.
Ensure: the MLP output a_L.
{1. First layer:}
a_1 ← 0
for n = 1 to 8 do
  a_1 ← a_1 + 2^{n-1} × XnorDotProduct(a_0^n, W_1^b)
end for
a_1^b ← Sign(BatchNorm(a_1, θ_1))
{2. Remaining hidden layers:}
for k = 2 to L − 1 do
  a_k ← XnorDotProduct(a_{k-1}^b, W_k^b)
  a_k^b ← Sign(BatchNorm(a_k, θ_k))
end for
{3. Output layer:}
a_L ← XnorDotProduct(a_{L-1}^b, W_L^b)
a_L ← BatchNorm(a_L, θ_L)
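The bit-plane decomposition of Eq. (8) and the first-layer loop of Algorithm 4 can be sketched as follows (our illustration; an ordinary integer dot product stands in for the XNOR/popcount kernel):

```python
import numpy as np

def first_layer_dot(x_uint8, w_b):
    """Weighted sum of an 8-bit input vector with 1-bit weights, one bit-plane
    at a time. x_uint8: shape (d,) unsigned 8-bit; w_b: shape (d, out) with
    integer entries in {-1, +1}. Returns s = x . w_b exactly."""
    s = np.zeros(w_b.shape[1], dtype=np.int64)
    for n in range(8):
        x_n = (x_uint8.astype(np.int64) >> n) & 1  # n-th bit of every input
        s += (1 << n) * (x_n @ w_b)                # 2^n here = 2^{n-1} of Eq. (8)
    return s
```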
# 3. Quantized Neural Networks - More than 1-bit
Observing Eq. (8), we can see that using 2-bit activations simply doubles the number of times we need to run our XnorPopCount kernel (i.e., the cost is directly proportional to the activation bitwidth). This idea was recently proposed by Zhou et al. (2016) (DoReFa net) and Miyashita et al. (2016) (published on arXiv shortly after our preliminary technical report was published there). However, in contrast to Zhou et al., we did not find it useful to initialize the network with weights obtained by training the network with full precision weights. Moreover, the Zhou et al. network did not quantize the weights of the first convolutional layer and the last fully-connected layer, whereas we binarized both. We followed the quantization schemes suggested by Miyashita et al. (2016), namely, linear quantization:
$\mathrm{LinearQuant}(x, \mathrm{bitwidth}) = \mathrm{Clip}\left(\mathrm{round}\left(\frac{x}{\mathrm{bitwidth}}\right) \times \mathrm{bitwidth},\; minV,\; maxV\right)$ (9)

and logarithmic quantization:
$\mathrm{LogQuant}(x, \mathrm{bitwidth}) = \mathrm{Clip}(\mathrm{AP2}(x),\; minV,\; maxV),$ (10)
where minV and maxV are the minimum and maximum scale range, respectively, and AP2(x) is the approximate power-of-2 of x, as described in Section 2.4. In our experiments (detailed in Section 4) we applied the above quantization schemes to the weights, activations and gradients, and tested them on the more challenging ImageNet dataset.
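The two schemes are small enough to sketch directly (our reading of Eqs. (9)-(10); the `step` parameter name is ours and plays the role of the bitwidth-dependent grid pitch):

```python
import numpy as np

def ap2(x):
    # Approximate power-of-2 of x, as in Section 2.4 (assumes x != 0).
    return np.sign(x) * 2.0 ** np.round(np.log2(np.abs(x)))

def linear_quant(x, step, min_v, max_v):
    # Eq. (9): round x onto a uniform grid of pitch `step`, then clip.
    return np.clip(np.round(x / step) * step, min_v, max_v)

def log_quant(x, min_v, max_v):
    # Eq. (10): snap each value to its approximate power-of-2, then clip.
    return np.clip(ap2(x), min_v, max_v)
```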
# 4. Benchmark Results
# 4.1 Results on MNIST, SVHN, and CIFAR-10
We performed two sets of experiments, each based on a different framework, namely Torch7 and Theano. Other than the framework, the two sets of experiments are very similar:
Table 1: Classification test error rates of DNNs trained on MNIST (fully connected architecture), CIFAR-10 and SVHN (ConvNet). No unsupervised pre-training or data augmentation was used.
Data set                                               MNIST        SVHN    CIFAR-10
Binarized activations+weights, during training and test
BNN (Torch7)                                           1.40%        2.53%   10.15%
BNN (Theano)                                           0.96%        2.80%   11.40%
Committee Machines' Array (Baldassi et al., 2015)      1.35%        -       -
Binarized weights, during training and test
BinaryConnect (Courbariaux et al., 2015a)              1.29±0.08%   2.30%   9.90%
Binarized activations+weights, during test
EBP (Cheng et al., 2015)                               2.2±0.1%     -       -
Bitwise DNNs (Kim and Smaragdis, 2016)                 1.33%        -       -
Ternary weights, binary activations, during test
(Hwang and Sung, 2014)                                 1.45%        -       -
No binarization (standard results)
No reg                                                 1.3±0.2%     2.44%   10.94%
Maxout Networks (Goodfellow et al., 2013b)             0.94%        2.47%   11.68%
Gated pooling (Lee et al., 2015)                       -            1.69%   7.62%
⢠In both sets of experiments, we obtain near state-of-the-art results with BNNs on MNIST, CIFAR-10 and the SVHN benchmark datasets.
⢠In our Torch7 experiments, the activations are stochastically binarized at train-time, whereas in our Theano experiments they are deterministically binarized.
⢠In our Torch7 experiments, we use the shift-based BN and AdaMax variants, which are detailed in Algorithms 2 and 3, whereas in our Theano experiments, we use vanilla BN and ADAM.
Results are reported in Table 1. Implementation details are reported in Appendix A.
MNIST MNIST is an image classification benchmark dataset (LeCun et al., 1998). It consists of a training set of 60K and a test set of 10K 28 × 28 gray-scale images representing digits ranging from 0 to 9. The Multi-Layer Perceptron (MLP) we train on MNIST consists of 3 hidden layers. In our Theano implementation we used hidden layers of size 4096, whereas in our Torch implementation we used a much smaller size of 2048. This difference explains the accuracy gap between the two implementations.
CIFAR-10 CIFAR-10 is an image classification benchmark dataset. It consists of a training set of size 50K and a test set of size 10K, where instances are 32 × 32 color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. Both implementations share the same structure, as reported in Appendix A. Since the Torch implementation uses stochastic binarization, it achieved slightly better results.
Figure 1: Training curves for different methods on the CIFAR-10 dataset. The dotted lines represent the training costs (square hinge losses) and the continuous lines the corresponding validation error rates. Although BNNs are slower to train, they are nearly as accurate as 32-bit float DNNs.
SVHN Street View House Numbers (SVHN) is also an image classification benchmark dataset. It consists of a training set of size 604K examples and a test set of size 26K, where instances are 32 × 32 color images representing digits ranging from 0 to 9. Here again we obtained a small improvement in performance by using the stochastic binarization scheme.
# 4.2 Results on ImageNet
To test the strength of our method, we applied it to the challenging ImageNet classification task, which is probably the most important classification benchmark dataset. It consists of a training set of size 1.2M samples and a test set of size 50K. Each instance is labeled with one of 1000 categories including objects, animals, scenes, and even some abstract shapes. On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-x error rate is the fraction of test images for which the correct label is not among the x labels considered most probable by the model. Considerable research has been concerned with compressing ImageNet architectures while preserving high accuracy. Previous approaches include pruning near-zero weights (Gong et al., 2014; Han et al., 2015a), using matrix factorization techniques (Zhang et al., 2015), quantizing the weights (Gupta et al., 2015), using shared weights (Chen et al., 2015) and applying Huffman codes (Han et al., 2015a), among others.
To the best of our knowledge, before the first revision of this paper was published on arXiv, no one had reported on successfully quantizing the network's activations. On the contrary, a recent work (Han et al., 2015a) showed that accuracy significantly deteriorates when trying to quantize convolutional layers' weights below 4 bits (FC layers are more robust to quantization and can operate quite well with only 2 bits). In the present work we
attempted to tackle the difficult task of binarizing both weights and activations. Employing the well-known AlexNet and GoogleNet architectures, we applied our techniques and achieved 41.8% top-1 and 67.1% top-5 accuracy using AlexNet, and 47.1% top-1 and 69.1% top-5 accuracy using GoogleNet. While these performance results leave room for improvement (relative to full precision nets), they are by far better than all previous attempts to compress ImageNet architectures using less than 4-bit precision for the weights. Moreover, this advantage is achieved while also binarizing the neuron activations.
# 4.3 Relaxing "hard tanh" boundaries
We discovered that after training the network it is useful to widen the "hard tanh" boundaries and retrain the network. As explained in Section 2.3, the straight-through estimator (which can be written as "hard tanh") cancels gradients coming from neurons with absolute values higher than 1. Hence, towards the last training iterations most of the gradient values are zero and the weight values cease to update. By relaxing the "hard tanh" boundaries we allow more gradients to flow in the back-propagation phase, improving top-1 accuracy by 1.5% on the AlexNet topology using vanilla BNN.
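Concretely, the relaxed backward pass only changes the cancellation threshold (a sketch; the threshold name t and its default value are our assumptions):

```python
import numpy as np

def sign_backward_relaxed(r, g_q, t=2.0):
    # Widened straight-through estimator: cancel the gradient only where
    # |r| > t, for some boundary t > 1 used when retraining.
    return g_q * (np.abs(r) <= t)
```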
# 4.4 2-bit activations
While training BNNs on the ImageNet dataset we noticed that we could not force the training set error rate to converge to zero. In fact, the training error rate stayed fairly close to the validation error rate. This observation led us to investigate a more relaxed activation quantization (more than 1 bit). As can be seen in Table 2, the results are quite impressive and illustrate an approximate 5.6% drop in performance (top-1 accuracy) relative to the floating point representation, using only 1-bit weights and 2-bit activations. Following Miyashita et al. (2016), we also tried quantizing the gradients and discovered that only logarithmic quantization works. With 6-bit gradients we achieved 46.8% top-1 accuracy. These results are presently state-of-the-art, surpassing those obtained by the DoReFa net (Zhou et al., 2016). As opposed to DoReFa, we utilized a deterministic quantization process rather than a stochastic one. Moreover, it is important to note that while quantizing the gradients, DoReFa assigns to each instance in a mini-batch its own scaling factor, which increases the number of MAC operations.
While AlexNet can be compressed rather easily, compressing GoogleNet is much harder due to its small number of parameters. When using vanilla BNNs, we observed a large degradation in the top-1 results. However, by using QNNs with 4-bit weights and activations, we were able to achieve 66.5% top-1 accuracy (only a 5.5% drop in performance compared to the 32-bit floating point architecture), which is the current state-of-the-art compression result for GoogleNet. Moreover, by using QNNs with 6-bit weights, activations and gradients we achieved 66.4% top-1 accuracy. Full implementation details of our experiments are reported in Appendix A.6.
# 4.5 Language Models
Recurrent neural networks (RNNs) are very demanding in memory and computational power in comparison to feed-forward networks. There is a large variety of recurrent models, with
Table 2: Classification test error rates of the AlexNet model trained on the ImageNet 1000 classification task. No unsupervised pre-training or data augmentation was used.
Model                                                          Top-1    Top-5
Binarized activations+weights, during training and test
BNN                                                            41.8%    67.1%
Xnor-Nets^4 (Rastegari et al., 2016)                           44.2%    69.2%
Binary weights and quantized activations, during training and test
QNN 2-bit activation                                           51.03%   73.67%
DoReFaNet 2-bit activation^4 (Zhou et al., 2016)               50.7%    72.57%
Quantized weights, during test
Deep Compression 4/2-bit (conv/FC layer) (Han et al., 2015a)   55.34%   77.67%
(Gysel et al., 2016) - 2-bit                                   0.01%    -
No quantization (standard results)
AlexNet - our implementation                                   56.6%    80.2%
Table 3: Classification test error rates of the GoogleNet model trained on the ImageNet 1000 classification task. No unsupervised pre-training or data augmentation was used.
Model                                                          Top-1    Top-5
Binarized activations+weights, during training and test
BNN                                                            47.1%    69.1%
Quantized weights and activations, during training and test
QNN 4-bit                                                      66.5%    83.4%
Quantized activations, weights and gradients, during training and test
QNN 6-bit                                                      66.4%    83.1%
No quantization (standard results)
GoogleNet - our implementation                                 71.6%    91.2%
the Long Short Term Memory networks (LSTMs), introduced by Hochreiter and Schmidhuber (1997), being the most popular model. LSTMs are a special kind of RNN, capable of learning long-term dependencies using unique gating mechanisms. Recently, Ott et al. (2016) tried to quantize the RNNs' weight matrices using techniques similar to those described in Section 2. They observed that the weight binarization methods do not work with RNNs. However, by using 2 bits (i.e., −1, 0, 1), they were able to achieve similar and even higher accuracy on several datasets. Here we report on the first attempt to quantize both weights and activations, by evaluating the accuracy of quantized recurrent models trained on the Penn Treebank dataset. The Penn Treebank Corpus (Marcus et al., 1993) contains 10K unique words. We followed the same setting as in (Mikolov and Zweig, 2012), which resulted in 18.55K words for the training set, and 14.5K and 16K words in the validation
and test sets, respectively. We experimented with both vanilla RNNs and LSTMs. For our vanilla RNN model we used one hidden layer of size 2048 and ReLU as the activation function. For our LSTM model we used 1 hidden layer of size 300. Our RNN implementation was constructed to predict the next character, hence performance was measured using the bits-per-character (BPC) metric. In the LSTM model we tried to predict the next word, so performance was measured using the perplexity-per-word (PPW) metric. Similar to (Ott et al., 2016), our preliminary results indicate that binarization of the weight matrices leads to large accuracy degradation. However, as can be seen in Table 4, with 4-bit activations and weights we can achieve accuracies similar to those of their 32-bit floating point counterparts.
Table 4: Language model results on the Penn Treebank dataset. FP stands for 32-bit floating point.
Model   Layers   Hidden Units   bits(weights)   bits(activation)   Accuracy
RNN     1        2048                                              1.81 BPC
RNN     1        2048                                              1.67 BPC
RNN     1        2048                                              1.11 BPC
RNN     1        2048                                              1.05 BPC
RNN     1        2048           FP              FP                 1.05 BPC
LSTM    1        300                                               220 PPW
LSTM    1        300                                               110 PPW
LSTM    1        300                                               100 PPW
LSTM    1        300                                               97 PPW
LSTM    1        300            FP              FP                 97 PPW
# 5. High Power Efficiency during the Forward Pass
Table 5: Energy consumption of multiply-accumulations; see Horowitz (2014)

Operation                MUL      ADD
8-bit Integer            0.2pJ    0.03pJ
32-bit Integer           3.1pJ    0.1pJ
16-bit Floating Point    1.1pJ    0.4pJ
32-bit Floating Point    3.7pJ    0.9pJ
Table 6: Energy consumption of memory accesses; see Horowitz (2014)

Memory size    64-bit Cache
8K             10pJ
32K            20pJ
1M             100pJ
DRAM           1.3-2.6nJ
4 First and last layers were not binarized (i.e., they use 32-bit precision weights and activations).
Computer hardware, be it general-purpose or specialized, is composed of memories, arithmetic operators and control logic. During the forward pass (both at run-time and train-time), BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which might lead to vastly improved power efficiency. Moreover, a binarized CNN can lead to binary convolution kernel repetitions, and we argue that dedicated hardware could reduce the time complexity by 60%.
Figure 2: Binary weight filters, sampled from the first convolution layer. Since we have only 2^{k²} unique 2D filters (where k is the filter size), filter replication is very common. For instance, on our CIFAR-10 ConvNet, only 42% of the filters are unique.
Memory Size and Accesses Improving computing performance has always been, and remains, a challenge. Over the last decade, power has been the main constraint on performance (Horowitz, 2014). This is why considerable research efforts have been devoted to reducing the energy consumption of neural networks. Horowitz (2014) provides rough numbers for the energy consumed by the computation (the given numbers are for 45nm technology), as summarized in Tables 5 and 6. Importantly, we can see that memory accesses typically consume more energy than arithmetic operations, and memory access cost increases with memory size. In comparison with 32-bit DNNs, BNNs require 32 times smaller memory size and 32 times fewer memory accesses. This is expected to reduce energy consumption drastically (i.e., by a factor larger than 32).
XNOR-Count Applying a DNN mainly involves convolutions and matrix multiplications. The key arithmetic operation of deep learning is thus the multiply-accumulate operation. Artificial neurons are basically multiply-accumulators computing weighted sums of their inputs. In BNNs, both the activations and the weights are constrained to either −1 or +1. As a result, most of the 32-bit floating point multiply-accumulations are replaced
by 1-bit XNOR-count operations. This could have a big impact on dedicated deep learning hardware. For instance, a 32-bit floating point multiplier costs about 200 Xilinx FPGA slices (Govindu et al., 2004; Beauchamp et al., 2006), whereas a 1-bit XNOR gate costs only a single slice.
When using a ConvNet architecture with binary weights, the number of unique filters is bounded by the filter size. For example, in our implementation we use filters of size 3 × 3, so the maximum number of unique 2D filters is 2^9 = 512. However, this should not prevent expanding the number of feature maps beyond this number, since the actual filter is a 3D matrix. Assuming we have M_ℓ filters in the ℓ-th convolutional layer, we have to store a 4D weight matrix of size M_ℓ × M_{ℓ−1} × k × k. Consequently, the number of unique filters is bounded by 2^{k²} M_{ℓ−1}. When necessary, we apply each filter on the map and perform the required multiply-accumulate (MAC) operations (in our case, using XNOR and popcount operations). Since we now have binary filters, many 2D filters of size k × k repeat themselves. By using dedicated hardware/software, we can apply only the unique 2D filters on each feature map and sum the results to receive each 3D filter's convolutional result. Note that an inverse filter (i.e., [-1,1,-1] is the inverse of [1,-1,1]) can also be treated as a repetition; it is merely a multiplication of the original filter by −1. For example, in our ConvNet architecture trained on the CIFAR-10 benchmark, there are only 42% unique filters per layer on average. Hence we can reduce the number of XNOR-popcount operations by a factor of 3.
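A quick way to measure this repetition on a trained binary weight tensor is sketched below (our illustration; it treats a filter and its negation as a single repetition, as described above):

```python
import numpy as np

def unique_filter_ratio(w_b):
    """Fraction of unique 2D binary filters in a (M_out, M_in, k, k) tensor
    with entries in {-1, +1}."""
    flat = w_b.reshape(-1, w_b.shape[-2] * w_b.shape[-1]).astype(np.int8)
    # Canonicalize the sign so that a filter f and its inverse -f coincide.
    canon = flat * np.where(flat[:, :1] > 0, 1, -1)
    return len(np.unique(canon, axis=0)) / len(canon)
```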
QNN complexity scales up linearly with the number of bits per weight/activation, since it requires applying the XNOR kernel several times (see Section 3). As of now, QNNs still supply the best compression-to-accuracy ratio. Moreover, quantizing the gradients allows us to use the XNOR kernel for the backward pass as well, leading to fully fixed-point layers with low bitwidth. By accelerating the training phase, QNNs can play an important role in future power-demanding tasks.
# 6. Seven Times Faster on GPU at Run-Time
It is possible to speed up GPU implementations of QNNs by using a method sometimes called SIMD (single instruction, multiple data) within a register (SWAR). The basic idea of SWAR is to concatenate groups of 32 binary variables into 32-bit registers, and thus obtain a 32-times speed-up on bitwise operations (e.g., XNOR). Using SWAR, it is possible to evaluate 32 connections with only 3 instructions:
$a_1 \mathrel{+}= \mathrm{popcount}(\mathrm{xnor}(a^{32b}_0, w^{32b}_1)),$ (11)
where $a_1$ is the resulting weighted sum, and $a^{32b}_0$ and $w^{32b}_1$ are the concatenated inputs and weights. Those 3 instructions (accumulation, popcount, xnor) take 1 + 4 + 1 = 6 clock cycles on recent Nvidia GPUs (and if they were to become a fused instruction, it would only take a single clock cycle). Consequently, we obtain a theoretical Nvidia GPU speed-up factor of 32/6 ≈ 5.3. In practice, this speed-up is quite easy to obtain, as the memory bandwidth to computation ratio is also increased 6 times.
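In pure Python, the packed xnor-popcount of Eq. (11) looks roughly as follows (our sketch; a GPU kernel would use the hardware popc instruction instead of bin().count):

```python
def xnor_popcount_32(a_bits: int, w_bits: int) -> int:
    """SWAR evaluation of 32 binary connections packed into one 32-bit word
    each. Bit value 1 encodes +1 and bit value 0 encodes -1, so the weighted
    sum is (#matching bits) - (#mismatching bits)."""
    matches = bin(~(a_bits ^ w_bits) & 0xFFFFFFFF).count("1")
    return 2 * matches - 32
```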
In order to validate those theoretical results, we programmed two GPU kernels:
⢠An unoptimized matrix multiplication kernel that serves as our baseline.
⢠The XNOR kernel, which is nearly identical to the baseline, except that it uses the SWAR method, as in Equation (11).
The two GPU kernels return identical outputs when their inputs are constrained to −1 or +1 (but not otherwise). The XNOR kernel is about 23 times faster than the baseline kernel and 3.4 times faster than cuBLAS, as shown in Figure 3. Last but not least, the MLP from Section 4 runs 7 times faster with the XNOR kernel than with the baseline kernel, without suffering any loss in classification accuracy (see Figure 3). As MNIST's images are not binary, the first layer's computations are always performed by the baseline kernel. The last three columns show that the MLP accuracy does not depend on which kernel is used.
Figure 3: The first 3 columns show the time it takes to perform a 8192 × 8192 × 8192 (binary) matrix multiplication on a GTX750 Nvidia GPU, depending on which kernel is used. The next three columns show the time it takes to run the MLP from Section 4 on the full MNIST test set. The last three columns show that the MLP accuracy does not depend on the kernel.
# 7. Discussion and Related Work
Until recently, the use of extremely low-precision networks (binary in the extreme case) was believed to substantially degrade the network performance (Courbariaux et al., 2014). Soudry et al. (2014) and Cheng et al. (2015) proved the contrary by showing that good performance could be achieved even if all neurons and weights are binarized to ±1. This was done using Expectation BackPropagation (EBP), a variational Bayesian approach, which infers networks with binary weights and neurons by updating the posterior distributions over the weights. These distributions are updated by differentiating their parameters (e.g., mean values) via the back-propagation (BP) algorithm. Esser et al. (2015) implemented a fully binary network at run time using a very similar approach to EBP, showing significant
improvement in energy efficiency. The drawback of EBP is that the binarized parameters are only used during inference.
The probabilistic idea behind EBP was extended in the BinaryConnect algorithm of Courbariaux et al. (2015a). In BinaryConnect, the real-valued version of the weights is saved and used as a key reference for the binarization process. The binarization noise is independent between different weights, either by construction (by using stochastic quantization) or by assumption (a common simplification; see Spang and Schultheiss, 1962). The noise would have little effect on the next neuron's input because the input is a summation over many weighted neurons. Thus, the real-valued version could be updated using the back-propagated error by simply ignoring the binarization noise in the update. With this method, Courbariaux et al. (2015a) were the first to binarize weights in CNNs and achieved near state-of-the-art performance on several datasets. They also argued that noisy weights provide a form of regularization, which could help to improve generalization, as previously shown by Wan et al. (2013). This method binarized weights while still maintaining full precision neurons.
Lin et al. (2015a) carried over the work of Courbariaux et al. (2015a) to the back-propagation process by quantizing the representations at each layer of the network, to convert some of the remaining multiplications into binary shifts, by restricting the neurons' values to be power-of-two integers. The work of Lin et al. (2015a) and ours seem to share similar characteristics. However, their approach continues to use full precision weights during the test phase. Moreover, Lin et al. (2015a) quantize the neurons only during the back-propagation process, and not during forward propagation.
Other research (Baldassi et al., 2015) showed that full binary training and testing is possible in an array of committee machines with randomized input, where only one weight layer is being adjusted. Gong et al. (2014) aimed to compress a fully trained high precision network by using quantization or matrix factorization methods. These methods required training the network with full precision weights and neurons, thus requiring numerous MAC operations (which the proposed QNN algorithm avoids). Hwang and Sung (2014) focused on a fixed-point neural network design and achieved performance almost identical to that of the floating-point architecture. Kim and Smaragdis (2016) retrained neural networks with binary weights and activations.
As far as we know, before the first revision of this paper was published on arXiv, no work had succeeded in binarizing weights and neurons, at the inference phase and during the entire training phase of a deep network. This was achieved in the present work. We relied on the idea that binarization can be done stochastically, or be approximated as random noise. This was previously done for the weights by Courbariaux et al. (2015a), but our BNNs extend this to the activations. Note that the binary activations are especially important for ConvNets, where there are typically many more neurons than free weights. This allows highly efficient operation of the binarized DNN at run time, and at the forward-propagation phase during training. Moreover, our training method has almost no multiplications, and therefore might be implemented efficiently in dedicated hardware. However, we have to save the value of the full precision weights. This is a remaining computational bottleneck during training, since it is an energy-consuming operation.
Shortly after the first version of this paper was posted on arXiv, several papers tried to improve and extend it. Rastegari et al. (2016) made a small modification to our algo-
rithm (namely, multiplying the binary weights and input by their L1 norm) and published promising results on the ImageNet dataset. Note that their method, named Xnor-Net, requires an additional multiplication by a different scaling factor for each patch in each sample (see Rastegari et al. (2016), Section 3.2, Eq. 10 and Figure 2). This, in itself, requires many multiplications and prevents efficient implementation of Xnor-Net on known hardware designs. Moreover, Rastegari et al. (2016) did not quantize the first and last layers; therefore, Xnor-Nets are only partially binarized NNs. Miyashita et al. (2016) suggested a more relaxed quantization (more than 1 bit) for both the weights and activations. Their idea was to quantize both and use shift operations, as in our Eq. (7). They proposed to quantize the parameters in a non-uniform, base-2 logarithmic representation. This idea was inspired by the fact that the weights and activations in a trained network naturally have non-uniform distributions. They moreover showed that they can quantize the gradients as well, to 6 bits, without significant losses in performance (on the CIFAR-10 dataset). Zhou et al. (2016) applied similar ideas to the ImageNet dataset and showed that by using 1-bit weights, 2-bit activations and 6-bit gradients they can achieve 46.1% top-1 accuracy using the AlexNet architecture. They named this method DoReFa net. Here we outperform DoReFa net and achieve 46.8% using a 1-2-6 bit quantization scheme (weights-activations-gradients) and 51% using a 1-2-32 scheme. These results confirm that we can achieve comparable results even on a large dataset by applying the XNOR kernel several times. Merolla et al. (2016) showed that DNNs can be robust to more than just weight binarization. They applied several different distortions to the weights, including additive and multiplicative noise, and a class of non-linear projections. This was shown to improve robustness to other distortions and even boost results. Zheng and Tang applied our binarization scheme to recurrent neural networks for language modeling and achieved comparable results as well. Andri et al. (2016) even created a hardware implementation to speed up BNNs.
# Conclusion
We have introduced BNNs, which binarize deep neural networks and can lead to dramatic improvements in both power consumption and computation speed. During the forward pass (both at run-time and train-time), BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. Our estimates indicate that power efficiency can be improved by more than one order of magnitude (see Section 5). In terms of speed, we programmed a binary matrix multiplication GPU kernel that enabled running an MLP over the MNIST dataset 7 times faster (than with an unoptimized GPU kernel) without any loss of accuracy (see Section 6).
We have shown that BNNs can handle MNIST, CIFAR-10 and SVHN while achieving nearly state-of-the-art accuracy. While our results for the challenging ImageNet dataset are not on par with the best results achievable with full precision networks, they significantly improve upon all previous attempts to compress ImageNet-capable architectures. Moreover, by quantizing the weights and activations to more than 1 bit (i.e., QNNs), we have been able to achieve results comparable to the 32-bit floating point architectures (see Section 4.4 and the supplementary material, Appendix B). A major open research avenue would be to further improve our results on ImageNet. Substantial progress in this direction might go a long way towards facilitating DNN usability in low-power instruments such as mobile phones.
# Acknowledgments
We would like to express our appreciation to Elad Hoffer for his technical assistance and constructive comments. We thank our fellow MILA lab members who took the time to read the article and give us feedback. We thank the developers of Torch (Collobert et al., 2011), a Lua-based environment, and Theano (Bergstra et al., 2010; Bastien et al., 2012), a Python library that allowed us to easily develop fast and optimized code for GPUs. We also thank the developers of Pylearn2 (Goodfellow et al., 2013a) and Lasagne (Dieleman et al., 2015), two deep learning libraries built on top of Theano. We thank Yuxin Wu for helping us compare our GPU kernels with cuBLAS. We are grateful for funding from NSERC, the Canada Research Chairs, Compute Canada, CIFAR, IBM and Samsung. This research was supported by The Israel Science Foundation (grant No. 1890/14).
# Appendix A. Implementation Details
In this section we give full implementation details for our MNIST, SVHN, CIFAR-10 and ImageNet experiments.
# A.1 MLP on MNIST (Theano)
MNIST is an image classification benchmark dataset (LeCun et al., 1998). It consists of a training set of 60K and a test set of 10K 28 × 28 gray-scale images representing digits ranging from 0 to 9. In order for this benchmark to remain a challenge, we did not use any convolution, data-augmentation, preprocessing or unsupervised learning. The Multi-Layer Perceptron (MLP) we train on MNIST consists of 3 hidden layers of 4096 binary units and an L2-SVM output layer; L2-SVM has been shown to perform better than Softmax on several classification benchmarks (Tang, 2013; Lee et al., 2014). We regularize the model with Dropout (Srivastava et al., 2014). The square hinge loss is minimized with the ADAM adaptive learning rate method (Kingma and Ba, 2014b). We use an exponentially decaying global learning rate, as per Algorithm 1, and also scale the learning rates of the weights with their initialization coefficients from (Glorot and Bengio, 2010), as suggested by Courbariaux et al. (2015a). We use Batch Normalization with a minibatch of size 100 to speed up the training. As is typical, we use the last 10K samples of the training set as a validation set for early stopping and model selection. We report the test error rate associated with the best validation error rate after 1000 epochs (we do not retrain on the validation set).
# A.2 MLP on MNIST (Torch7)
We use a similar architecture to that of our Theano experiments, without dropout, and with 2048 binary units per layer instead of 4096. Additionally, we use the shift-based AdaMax and BN (with a minibatch of size 100) instead of the vanilla implementations, to reduce the number of multiplications. Likewise, we decay the learning rate by using a 1-bit right shift every 10 epochs.
# A.3 ConvNet on CIFAR-10 (Theano)
CIFAR-10 is an image classification benchmark dataset. It consists of a training set of size 50K and a test set of size 10K, where instances are 32 × 32 color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. We do not use data-augmentation (which can really be a game changer for this dataset; see Graham 2014). The architecture of our ConvNet is identical to that used by Courbariaux et al. (2015b) except for the binarization of the activations. The Courbariaux et al. (2015a) architecture is itself mainly inspired by VGG (Simonyan and Zisserman, 2015). The square hinge loss is minimized with ADAM. We use an exponentially decaying learning rate, as we did for MNIST. We scale the learning rates of the weights with their initialization coefficients from (Glorot and Bengio, 2010). We use Batch Normalization with a minibatch of size 50 to speed up the training. We use the last 5000 samples of the training set as a validation set. We report the test error rate associated with the best validation error rate after 500 training epochs (we do not retrain on the validation set).
Table 7: Architecture of our CIFAR-10 ConvNet. We only use "same" convolutions, as in VGG (Simonyan and Zisserman, 2015).
CIFAR-10 ConvNet architecture
Input: 32 × 32 - RGB image
3 × 3 - 128 convolution layer
BatchNorm and Binarization layers
3 × 3 - 128 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 512 convolution layer
BatchNorm and Binarization layers
3 × 3 - 512 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
1024 fully connected layer
BatchNorm and Binarization layers
1024 fully connected layer
BatchNorm and Binarization layers
10 fully connected layer
BatchNorm layer (no binarization)
Cost: Mean square hinge loss
# A.4 ConvNet on CIFAR-10 (Torch7)
We use the same architecture as in our Theano experiments. We apply shift-based AdaMax and BN (with a minibatch of size 200) instead of the vanilla implementations to reduce the number of multiplications. Likewise, we decay the learning rate by using a 1-bit right shift every 50 epochs.
# A.5 ConvNet on SVHN
SVHN is also an image classification benchmark dataset. It consists of a training set of size 604K examples and a test set of size 26K, where instances are 32 × 32 color images representing digits ranging from 0 to 9. In both sets of experiments, we follow the same procedure used for the CIFAR-10 experiments, with a few notable exceptions: we use half the number of units in the convolution layers, and we train for 200 epochs instead of 500 (because SVHN is a much larger dataset than CIFAR-10).
# A.6 ConvNet on ImageNet
The ImageNet classification task consists of a training set of size 1.2M samples and a test set of size 50K. Each instance is labeled with one of 1000 categories including objects, animals, scenes, and even some abstract shapes.
AlexNet: Our AlexNet implementation consists of 5 convolution layers followed by 3 fully connected layers (see Table 8). Additionally, we use Adam as our optimization method and batch-normalization layers (with a minibatch of size 512). Likewise, we decay the learning rate by 0.1 every 20 epochs.
GoogleNet: Our GoogleNet implementation consists of 2 convolution layers followed by 10 inception layers, spatial-average-pooling and a fully connected classifier. We also used the 2 auxiliary classifiers. Additionally, we use Adam (Kingma and Ba, 2014a) as our optimization method and batch-normalization layers (with a minibatch of size 64). Likewise, we decay the learning rate by 0.1 every 10 epochs.
Table 8: Our AlexNet Architecture.

AlexNet ConvNet architecture
Input: 224 × 224 - RGB image
11 × 11 - 64 convolution layer and 3 × 3 max-pooling layers
BatchNorm and Binarization layers
5 × 5 - 192 convolution layer and 3 × 3 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 384 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
4096 fully connected layer
BatchNorm and Binarization layers
4096 fully connected layer
BatchNorm and Binarization layers
1000 fully connected layer
BatchNorm layer (no binarization)
SoftMax layer (no binarization)
Cost: Negative log likelihood
# References
Renzo Andri, Lukas Cavigelli, Davide Rossi, and Luca Benini. Yodann: An ultra-low power convolutional neural network accelerator based on binary weights. arXiv preprint arXiv:1606.05487, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR'2015, arXiv:1409.0473, 2015.

Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. Physical Review Letters, 115(12):1-5, 2015. ISSN 10797114. doi: 10.1103/PhysRevLett.115.128101.

Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and
speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
Michael J Beauchamp, Scott Hauck, Keith D Underwood, and K Scott Hemmert. Embedded floating-point units in FPGAs. In Proceedings of the 2006 ACM/SIGDA 14th international symposium on Field programmable gate arrays, pages 12-20. ACM, 2006.

Yoshua Bengio. Estimating or propagating gradients through stochastic neurons. Technical Report arXiv:1305.2982, Universite de Montreal, 2013.

James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.

Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. Diannao: A small-footprint high-throughput accelerator for ubiquitous machine learning. In Proceedings of the 19th international conference on Architectural support for programming languages and operating systems, pages 269-284. ACM, 2014a.
Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.
Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, et al. Dadiannao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pages 609â622. IEEE, 2014b.
Zhiyong Cheng, Daniel Soudry, Zexi Mao, and Zhenzhong Lan. Training binary multilayer neural networks for image classification using expectation backpropagation. arXiv preprint arXiv:1503.03562, 2015.

Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Andrew Ng. Deep learning with COTS HPC systems. In Proceedings of the 30th international conference on machine learning, pages 1337-1345, 2013.

Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Training deep neural networks with low precision multiplications. ArXiv e-prints, abs/1412.7024, December 2014.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. ArXiv e-prints, abs/1511.00363, November 2015a.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. NIPS, pages 1-9, 2015b. URL http://arxiv.org/abs/1511.00363.
Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. Fast and robust neural network joint models for statistical machine translation. In Proc. ACL'2014, 2014.
Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, Eric Battenberg, Jack Kelly, Jeffrey De Fauw, Michael Heilman, diogo149, Brian McFee, Hendrik Weideman, takacsg84, peterderivaz, Jon, instagibbs, Dr. Kashif Rasul, CongLiu, Britefury, and Jonas Degrave. Lasagne: First release., August 2015. URL http://dx.doi.org/10.5281/zenodo.27878.

Steve K Esser, Rathinakumar Appuswamy, Paul Merolla, John V Arthur, and Dharmendra S Modha. Backpropagation for energy-efficient neuromorphic computing. In Advances in Neural Information Processing Systems, pages 1117-1125, 2015.

Clément Farabet, Yann LeCun, Koray Kavukcuoglu, Eugenio Culurciello, Berin Martini, Polina Akselrod, and Selcuk Talay. Large-scale FPGA-based convolutional networks. Machine Learning on Very Large Data Sets, 1, 2011a.

Clément Farabet, Berin Martini, Benoit Corda, Polina Akselrod, Eugenio Culurciello, and Yann LeCun. NeuFlow: A runtime reconfigurable dataflow processor for vision. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2011 IEEE Computer Society Conference on, pages 109-116. IEEE, 2011b.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS'2010, 2010.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Frédéric Bastien, and Yoshua Bengio. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013a.

Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML'2013, pages 1319-1327, 2013b. URL http://arxiv.org/abs/1302.4389.

Gokul Govindu, Ling Zhuo, Seonil Choi, and Viktor Prasanna. Analysis of high-performance floating-point arithmetic on FPGAs. In Parallel and Distributed Processing Symposium, 2004. Proceedings. 18th International, page 149. IEEE, 2004.
Benjamin Graham. Spatially-sparse convolutional neural networks. arXiv preprint arXiv:1409.6070, 2014.
Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pages 2348â2356, 2011.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 392, 2015.
Philipp Gysel, Mohammad Motamedi, and Soheil Ghiasi. Hardware-oriented approximation of convolutional neural networks. arXiv preprint arXiv:1604.03168, 2016.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems, pages 1135-1143, 2015b.

Geoffrey Hinton. Neural networks for machine learning. Coursera, video lectures, 2012.

Geoffrey Hinton, Li Deng, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29(6):82-97, Nov. 2012.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â1780, 1997.
Mark Horowitz. Computing's energy problem (and what we can do about it). In IEEE International Solid State Circuits Conference, pages 10-14, 2014. ISSN 0018-9200. doi: 10.1109/JSSC.2014.2361354.

Kyuyeon Hwang and Wonyong Sung. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pages 1-6. IEEE, 2014.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015.
M. Kim and P. Smaragdis. Bitwise Neural Networks. ArXiv e-prints, January 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980 [cs], pages 1-13, 2014a. URL http://arxiv.org/abs/1412.6980.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014b.
A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS'2012. 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.
Chen-Yu Lee, Patrick W Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. arXiv preprint arXiv:1509.08985, 2015.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. ArXiv e-prints, abs/1510.03009, October 2015a.

Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. ICLR, pages 1-8, 2015b. URL http://arxiv.org/abs/1510.03009.

Chris Lomont. Fast inverse square root. Technical Report, page 32, 2003.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313â 330, 1993.
Paul Merolla, Rathinakumar Appuswamy, John Arthur, Steve K Esser, and Dharmendra Modha. Deep neural networks are robust to weight binarization and other non-linear distortions. arXiv preprint arXiv:1606.01981, 2016.
Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, pages 234-239, 2012.
Daisuke Miyashita, Edward H Lee, and Boris Murmann. Convolutional neural networks using logarithmic data representation. arXiv preprint arXiv:1603.01025, 2016.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529-533, 2015.
Alexander Mordvintsev, Christopher Olah, and Mike Tyka. Inceptionism: Going deeper into neural networks, 2015. URL http://googleresearch.blogspot.co.uk/2015/06/ inceptionism-going-deeper-into-neural.html. Accessed: 2015-06-30.
Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, and Yoshua Bengio. Recurrent neural networks with limited numerical precision. arXiv preprint arXiv:1608.06902, 2016.
Phi-Hung Pham, Darko Jelaca, Clement Farabet, Berin Martini, Yann LeCun, and Eugenio Culurciello. NeuFlow: Dataflow vision processing system-on-a-chip. In Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on, pages 1044-1047. IEEE, 2012.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
Tara Sainath, Abdel-rahman Mohamed, Brian Kingsbury, and Bhuvana Ramabhadran. Deep convolutional neural networks for LVCSR. In ICASSP 2013, 2013.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, Jan 2016. ISSN 0028-0836. URL http://dx.doi.org/10.1038/nature16961. Article.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In NIPS'2014, 2014.
H Spang and P Schultheiss. Reduction of quantizing noise by use of feedback. IRE Trans- actions on Communications Systems, 10(4):373â380, 1962.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPSâ2014, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. Technical report, arXiv:1409.4842, 2014.
Yichuan Tang. Deep learning using linear support vector machines. Workshop on Challenges in Representation Learning, ICML, 2013.
Naoya Torii, Hirotaka Kokubo, Dai Yamamoto, Kouichi Itoh, Masahiko Takenaka, and Tsutomu Matsumoto. ASIC implementation of random number generators using SR latches and its evaluation. EURASIP Journal on Information Security, 2016(1):1–12, 2016.
Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on CPUs. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using dropconnect. In ICMLâ2013, 2013.
Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, pages 1984–1992, 2015.
Weiyi Zheng and Yina Tang. Binarized neural networks for language modeling.
Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
| {
"id": "1509.08985"
} |
1609.06038 | Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model. | http://arxiv.org/pdf/1609.06038 | Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen | cs.CL | ACL 2017 | null | cs.CL | 20160920 | 20170426
# Enhanced LSTM for Natural Language Inference
Qian Chen, University of Science and Technology of China, cq1231@mail.ustc.edu.cn
Xiaodan Zhu, National Research Council Canada, xiaodan.zhu@nrc-cnrc.gc.ca
Zhenhua Ling, University of Science and Technology of China, zhling@ustc.edu.cn
Si Wei, iFLYTEK Research, siwei@iflytek.com
Hui Jiang, York University, hj@cse.yorku.ca
Diana Inkpen, University of Ottawa, diana@site.uottawa.ca
# Abstract
Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural network based inference models, which have shown to be very effective. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.6% on the Stanford Natural Language Inference Dataset. Unlike the previous top models that use very complicated network architectures, we first demonstrate that carefully designing sequential inference models based on chain LSTMs can outperform all previous models. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. Particularly, incorporating syntactic parsing information contributes to our best result: it further improves the performance even when added to the already very strong model.
# 1 Introduction
Reasoning and inference are central to both human and artificial intelligence. Modeling inference in human language is notoriously challenging but is a basic problem towards true natural language understanding, as pointed out by MacCartney and Manning (2008), "a necessary (if not sufficient) condition for true natural language understanding is a mastery of open-domain natural language inference." The previous work has included extensive research on recognizing textual entailment.

Specifically, natural language inference (NLI) is concerned with determining whether a natural-language hypothesis h can be inferred from a premise p, as depicted in the following example from MacCartney (2009), where the hypothesis is regarded to be entailed from the premise.

p: Several airlines polled saw costs grow more than expected, even after adjusting for inflation.

h: Some of the companies in the poll reported cost increases.

The most recent years have seen advances in modeling natural language inference. An important contribution is the creation of a much larger annotated dataset, the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015). The corpus has 570,000 human-written English sentence pairs manually labeled by multiple human subjects. This makes it feasible to train more complex inference models. Neural network models, which often need relatively large annotated data to estimate their parameters, have shown to achieve the state of the art on SNLI (Bowman et al., 2015, 2016; Munkhdalai and Yu, 2016b; Parikh et al., 2016; Sha et al., 2016; Paria et al., 2016).

While some previous top-performing models use rather complicated network architectures to achieve the state-of-the-art results (Munkhdalai and Yu, 2016b), we demonstrate in this paper that enhancing sequential inference models based on chain models can outperform all previous results, suggesting that the potentials of such sequential inference approaches have not been fully exploited yet. More specifically, we show that our sequential inference model achieves an accuracy of 88.0% on the SNLI benchmark.
Exploring syntax for NLI is very attractive to us. In many problems, syntax and semantics interact closely, including in semantic composition (Partee, 1995), among others. Complicated tasks such as natural language inference could well involve both, which has been discussed in the context of recognizing textual entailment (RTE) (Mehdad et al., 2010; Ferrone and Zanzotto, 2014). In this paper, we are interested in exploring this within the neural network frameworks, with the presence of relatively large training data. We show that by explicitly encoding parsing information with recursive networks in both local inference modeling and inference composition and by incorporating it into our framework, we achieve additional improvement, increasing the performance to a new state of the art with an 88.6% accuracy.
# 2 Related Work
Early work on natural language inference has been performed on rather small datasets with more conventional methods (refer to MacCartney (2009) for a good literature survey), which includes a large bulk of work on recognizing textual entailment, such as (Dagan et al., 2005; Iftene and Balahur-Dobrescu, 2007), among others. More recently, Bowman et al. (2015) made available the SNLI dataset with 570,000 human annotated sentence pairs. They also experimented with simple classification models as well as simple neural networks that encode the premise and hypothesis independently. Rocktäschel et al. (2015) proposed neural attention-based models for NLI, which captured the attention information. In general, attention based models have been shown to be effective in a wide range of tasks, including machine translation (Bahdanau et al., 2014), speech recognition (Chorowski et al., 2015; Chan et al., 2016), image caption (Xu et al., 2015), and text summarization (Rush et al., 2015; Chen et al., 2016), among others. For NLI, the idea allows neural models to pay attention to specific areas of the sentences.

A variety of more advanced networks have been developed since then (Bowman et al., 2016; Vendrov et al., 2015; Mou et al., 2016; Liu et al., 2016; Munkhdalai and Yu, 2016a; Rocktäschel et al., 2015; Wang and Jiang, 2016; Cheng et al., 2016; Parikh et al., 2016; Munkhdalai and Yu, 2016b; Sha et al., 2016; Paria et al., 2016). Among them, more relevant to ours are the approaches proposed by Parikh et al. (2016) and Munkhdalai and Yu (2016b), which are among the best performing models.

Parikh et al. (2016) propose a relatively simple but very effective decomposable model. The model decomposes the NLI problem into subproblems that can be solved separately. On the other hand, Munkhdalai and Yu (2016b) propose much more complicated networks that consider sequential LSTM-based encoding, recursive networks, and complicated combinations of attention models, which provide about 0.5% gain over the results reported by Parikh et al. (2016).

It is, however, not very clear if the potential of the sequential inference networks has been well exploited for NLI. In this paper, we first revisit this problem and show that enhancing sequential inference models based on chain networks can actually outperform all previous results. We further show that explicitly considering recursive architectures to encode syntactic parsing information for NLI could further improve the performance.
# 3 Hybrid Neural Inference Models
We present here our natural language inference networks which are composed of the following major components: input encoding, local inference modeling, and inference composition. Figure 1 shows a high-level view of the architecture. Vertically, the figure depicts the three major components, and horizontally, the left side of the figure represents our sequential NLI model named ESIM, and the right side represents networks that incorporate syntactic parsing information in tree LSTMs.

In our notation, we have two sentences $a = (a_1, \ldots, a_{\ell_a})$ and $b = (b_1, \ldots, b_{\ell_b})$, where $a$ is a premise and $b$ a hypothesis. The $a_i$ or $b_j \in \mathbb{R}^l$ is an embedding of an $l$-dimensional vector, which can be initialized with some pre-trained word embeddings and organized with parse trees. The goal is to predict a label $y$ that indicates the logic relationship between $a$ and $b$.
# 3.1 Input Encoding
We employ bidirectional LSTM (BiLSTM) as one of our basic building blocks for NLI. We first use it
Figure 1: A high-level view of our hybrid neural inference networks.
to encode the input premise and hypothesis (Equations (1) and (2)). Here BiLSTM learns to represent a word (e.g., $a_i$) and its context. Later we will also use BiLSTM to perform inference composition to construct the final prediction, where BiLSTM encodes local inference information and its interaction. To bookkeep the notations for later use, we write as $\bar{a}_i$ the hidden (output) state generated by the BiLSTM at time $i$ over the input sequence $a$. The same is applied to $\bar{b}_j$:
$$\bar{a}_i = \mathrm{BiLSTM}(a, i), \quad \forall i \in [1, \ldots, \ell_a], \tag{1}$$
$$\bar{b}_j = \mathrm{BiLSTM}(b, j), \quad \forall j \in [1, \ldots, \ell_b]. \tag{2}$$
Due to the space limit, we will skip the description of the basic chain LSTM and readers can refer to Hochreiter and Schmidhuber (1997) for details. Briefly, when modeling a sequence, an LSTM employs a set of soft gates together with a memory cell to control message flows, resulting in an effective modeling of tracking long-distance information/dependencies in a sequence.

A bidirectional LSTM runs a forward and backward LSTM on a sequence starting from the left and the right end, respectively. The hidden states
generated by these two LSTMs at each time step are concatenated to represent that time step and its context. Note that we used LSTM memory blocks in our models. We examined other recurrent memory blocks such as GRUs (Gated Recurrent Units) (Cho et al., 2014) and they are inferior to LSTMs on the heldout set for our NLI task.
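To make the encoding step concrete, here is a minimal sketch of the bidirectional encoding in Equations (1) and (2). This is an illustrative sketch only, assuming PyTorch; the batch size, sentence lengths, and dimensions are hypothetical stand-ins rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical sizes: 300-D word embeddings and 300-D hidden states per direction.
embed_dim, hidden_dim = 300, 300
encoder = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)

# a: premise embeddings, b: hypothesis embeddings (batch, length, embed_dim).
a = torch.randn(2, 20, embed_dim)
b = torch.randn(2, 15, embed_dim)

a_bar, _ = encoder(a)  # (2, 20, 2 * hidden_dim); row i is the context-aware a_bar_i
b_bar, _ = encoder(b)  # (2, 15, 2 * hidden_dim)
```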
As discussed above, it is intriguing to explore the effectiveness of syntax for natural language inference; for example, whether it is useful even when incorporated into the best-performing models. To this end, we will also encode syntactic parse trees of a premise and hypothesis through tree-LSTM (Zhu et al., 2015; Tai et al., 2015; Le and Zuidema, 2015), which extends the chain LSTM to a recursive network (Socher et al., 2011).
Specifically, given the parse of a premise or hypothesis, a tree node is deployed with a tree-LSTM memory block depicted as in Figure 2 and computed with Equations (3)-(10). In short, at each node, an input vector $x_t$ and the hidden vectors of its two children (the left child $h^L_{t-1}$ and the right child $h^R_{t-1}$) are taken in as the input to calculate the current node's hidden vector $h_t$.
Figure 2: A tree-LSTM memory block.
We describe the updating of a node at a high level with Equation (3) to facilitate references later in the paper, and the detailed computation is described in (4)-(10). Specifically, the input of a node is used to configure four gates: the input gate $i_t$, output gate $o_t$, and the two forget gates $f^L_t$ and $f^R_t$. The memory cell $c_t$ considers each child's cell vector, $c^L_{t-1}$ and $c^R_{t-1}$, which are gated by the left forget gate $f^L_t$ and right forget gate $f^R_t$, respectively.

$$h_t = \mathrm{TrLSTM}(x_t, h^L_{t-1}, h^R_{t-1}), \tag{3}$$
$$h_t = o_t \odot \tanh(c_t), \tag{4}$$
$$o_t = \sigma(W_o x_t + U^L_o h^L_{t-1} + U^R_o h^R_{t-1}), \tag{5}$$
$$c_t = f^L_t \odot c^L_{t-1} + f^R_t \odot c^R_{t-1} + i_t \odot u_t, \tag{6}$$
$$f^L_t = \sigma(W_f x_t + U^{LL}_f h^L_{t-1} + U^{LR}_f h^R_{t-1}), \tag{7}$$
$$f^R_t = \sigma(W_f x_t + U^{RL}_f h^L_{t-1} + U^{RR}_f h^R_{t-1}), \tag{8}$$
$$i_t = \sigma(W_i x_t + U^L_i h^L_{t-1} + U^R_i h^R_{t-1}), \tag{9}$$
$$u_t = \tanh(W_u x_t + U^L_u h^L_{t-1} + U^R_u h^R_{t-1}), \tag{10}$$

where $\sigma$ is the sigmoid function, $\odot$ is the element-wise multiplication of two vectors, and all $W \in \mathbb{R}^{d \times l}$, $U \in \mathbb{R}^{d \times d}$ are weight matrices to be learned. In the current input encoding layer, $x_t$ is used to encode a word embedding for a leaf node. Since a non-leaf node does not correspond to a specific word, we use a special vector $x_{\varnothing}$ as its input, which is like an unknown word. However, in the inference composition layer that we discuss later, the goal of using tree-LSTM is very different; the input $x_t$ will be very different as well: it will encode local inference information and will have values at all tree nodes.
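The node update in Equations (3)-(10) can be sketched as follows. For brevity this hypothetical sketch stacks the gate projections into single matrices, whereas the paper keeps separate $W$ matrices and distinct $U^{LL}_f$, $U^{LR}_f$, $U^{RL}_f$, $U^{RR}_f$ weights for the two forget gates.

```python
import torch

def tree_lstm_node(x_t, hL, cL, hR, cR, W, UL, UR):
    """Single node update in the spirit of Equations (3)-(10); the five gate
    projections are stacked into one (dim, 5 * d) matrix per input for brevity."""
    d = hL.size(-1)
    gates = x_t @ W + hL @ UL + hR @ UR                  # (batch, 5 * d)
    i, o, fL, fR, u = gates.split(d, dim=-1)
    i, o = torch.sigmoid(i), torch.sigmoid(o)            # input / output gates
    fL, fR = torch.sigmoid(fL), torch.sigmoid(fR)        # left / right forget gates
    c_t = fL * cL + fR * cR + i * torch.tanh(u)          # Equation (6)
    return o * torch.tanh(c_t), c_t                      # h_t (Eq. 4) and c_t

d = 4
W, UL, UR = (torch.randn(d, 5 * d) * 0.1 for _ in range(3))
h, c = tree_lstm_node(torch.randn(1, d), torch.zeros(1, d), torch.zeros(1, d),
                      torch.zeros(1, d), torch.zeros(1, d), W, UL, UR)
```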
# 3.2 Local Inference Modeling
Modeling local subsentential inference between a premise and hypothesis is the basic component for determining the overall inference between these two statements. To closely examine local inference, we explore both the sequential and syntactic tree models that have been discussed above. The former helps collect local inference for words and their context, and the tree LSTM helps collect local information between (linguistic) phrases and clauses.

Locality of inference  Modeling local inference needs to employ some forms of hard or soft alignment to associate the relevant subcomponents between a premise and a hypothesis. This includes early methods motivated from the alignment in conventional automatic machine translation (MacCartney, 2009). In neural network models, this is often achieved with soft attention.
Parikh et al. (2016) decomposed this process: the word sequence of the premise (or hypothesis) is regarded as a bag-of-word embedding vector and inter-sentence "alignment" (or attention) is computed individually to softly align each word to the content of hypothesis (or premise, respectively). While their basic framework is very effective, achieving one of the previous best results, using a pre-trained word embedding by itself does not automatically consider the context around a word in NLI. Parikh et al. (2016) did take into account the word order and context information through an optional distance-sensitive intra-sentence attention. In this paper, we argue for leveraging attention over the bidirectional sequential encoding of the input, as discussed above. We will show that this plays an important role in achieving our best results, and the intra-sentence attention used by Parikh et al. (2016) actually does not further improve over our model, while the overall framework they proposed is very effective.
Our soft alignment layer computes the attention weights as the similarity of a hidden state tuple $\langle \bar{a}_i, \bar{b}_j \rangle$ between a premise and a hypothesis with Equation (11). We did study more complicated relationships between $\bar{a}_i$ and $\bar{b}_j$ with multilayer perceptrons, but observed no further improvement on the heldout data.
$$e_{ij} = \bar{a}_i^T \bar{b}_j. \tag{11}$$
In the formula, $\bar{a}_i$ and $\bar{b}_j$ are computed earlier in Equations (1) and (2), or with Equation (3) when tree-LSTM is used. Again, as discussed above, we will use bidirectional LSTM and tree-LSTM to encode the premise and hypothesis, respectively. In our sequential inference model, unlike in Parikh et al. (2016) which proposed to use a function $F(\bar{a}_i)$, i.e., a feedforward neural network, to map the original word representation for calculating $e_{ij}$, we instead advocate to use BiLSTM, which encodes the information in premise and hypothesis very well and achieves better performance shown in the experiment section. We tried to apply the $F(.)$ function on our hidden states before computing $e_{ij}$ and it did not further help our models.

Local inference collected over sequences  Local inference is determined by the attention weight $e_{ij}$ computed above, which is used to obtain the local relevance between a premise and hypothesis. For the hidden state of a word in a premise, i.e., $\bar{a}_i$ (already encoding the word itself and its context), the relevant semantics in the hypothesis is identified and composed using $e_{ij}$, more specifically with Equation (12).
$$\tilde{a}_i = \sum_{j=1}^{\ell_b} \frac{\exp(e_{ij})}{\sum_{k=1}^{\ell_b} \exp(e_{ik})} \bar{b}_j, \quad \forall i \in [1, \ldots, \ell_a], \tag{12}$$
$$\tilde{b}_j = \sum_{i=1}^{\ell_a} \frac{\exp(e_{ij})}{\sum_{k=1}^{\ell_a} \exp(e_{kj})} \bar{a}_i, \quad \forall j \in [1, \ldots, \ell_b], \tag{13}$$

where $\tilde{a}_i$ is a weighted summation of $\{\bar{b}_j\}_{j=1}^{\ell_b}$. Intuitively, the content in $\{\bar{b}_j\}_{j=1}^{\ell_b}$ that is relevant to $\bar{a}_i$ will be selected and represented as $\tilde{a}_i$. The same is performed for each word in the hypothesis with Equation (13).
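A minimal sketch of the soft alignment in Equations (11)-(13), assuming PyTorch; the 600-D encodings and sentence lengths are illustrative stand-ins for the outputs of the bidirectional encoders above.

```python
import torch

torch.manual_seed(0)
a_bar = torch.randn(2, 20, 600)   # premise encoding, as from the BiLSTM sketch
b_bar = torch.randn(2, 15, 600)   # hypothesis encoding

def soft_align(a_bar, b_bar):
    """Equations (11)-(13): dot-product attention between the two encodings."""
    e = torch.bmm(a_bar, b_bar.transpose(1, 2))         # e_ij = a_bar_i^T b_bar_j
    a_tilde = torch.softmax(e, dim=2) @ b_bar           # premise attends over hypothesis
    b_tilde = torch.softmax(e, dim=1).transpose(1, 2) @ a_bar
    return a_tilde, b_tilde

a_tilde, b_tilde = soft_align(a_bar, b_bar)
```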
Local inference collected over parse trees  We use tree models to help collect local inference information over linguistic phrases and clauses in this layer. The tree structures of the premise and hypothesis are produced by a constituency parser. Once the hidden states of a tree are all computed with Equation (3), we treat all tree nodes equally as we do not have further heuristics to discriminate them, but leave the attention weights to figure out their relationship. So, we use Equation (11) to compute the attention weights for all node pairs between a premise and hypothesis. This connects all words, constituent phrases, and clauses between the premise and hypothesis. We then collect the information between all the pairs with Equations (12) and (13) and feed them into the next layer.

Enhancement of local inference information  In our models, we further enhance the local inference information collected. We compute the difference and the element-wise product for the tuple $\langle \bar{a}, \tilde{a} \rangle$ as well as for $\langle \bar{b}, \tilde{b} \rangle$. We expect that such operations could help sharpen local inference information between elements in the tuples and capture inference relationships such as contradiction. The difference and element-wise product are then concatenated with the original vectors, $\bar{a}$ and $\tilde{a}$, or $\bar{b}$ and $\tilde{b}$, respectively (Mou et al., 2016; Zhang et al., 2017). The enhancement is performed for both the sequential and the tree models.
$$m_a = [\bar{a}; \tilde{a}; \bar{a} - \tilde{a}; \bar{a} \odot \tilde{a}], \tag{14}$$
$$m_b = [\bar{b}; \tilde{b}; \bar{b} - \tilde{b}; \bar{b} \odot \tilde{b}]. \tag{15}$$
This process could be regarded as a special case of modeling some high-order interaction between the tuple elements. Along this direction, we have also further modeled the interaction by feeding the tuples into feedforward neural networks and added the top layer hidden states to the above concatenation. We found that it does not further help the inference accuracy on the heldout dataset.
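The enhancement in Equations (14)-(15) amounts to a single concatenation; a hedged sketch with placeholder tensors standing in for the encodings and alignments computed above:

```python
import torch

def enhance(x_bar, x_tilde):
    """Equations (14)-(15): concatenate with difference and element-wise product."""
    return torch.cat([x_bar, x_tilde, x_bar - x_tilde, x_bar * x_tilde], dim=-1)

# With 600-D encodings as above, each position of m_a / m_b becomes 2400-D.
m_a = enhance(torch.randn(2, 20, 600), torch.randn(2, 20, 600))
m_b = enhance(torch.randn(2, 15, 600), torch.randn(2, 15, 600))
```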
# 3.3 Inference Composition
To determine the overall inference relationship between a premise and hypothesis, we explore a composition layer to compose the enhanced local inference information $m_a$ and $m_b$. We perform the composition sequentially or in its parse context using BiLSTM and tree-LSTM, respectively.

The composition layer  In our sequential inference model, we keep using BiLSTM to compose local inference information sequentially. The formulas for BiLSTM are similar to those in Equations (1) and (2) in their forms so we skip the details, but the aim is very different here: they are used to capture local inference information $m_a$ and $m_b$ and their context here for inference composition.
In the tree composition, the high-level formulas of how a tree node is updated to compose local inference is as follows:
$$v_{a,t} = \mathrm{TrLSTM}(F(m_{a,t}), h^L_{t-1}, h^R_{t-1}), \tag{16}$$
$$v_{b,t} = \mathrm{TrLSTM}(F(m_{b,t}), h^L_{t-1}, h^R_{t-1}). \tag{17}$$
We propose to control model complexity in this layer, since the concatenation we described above to compute $m_a$ and $m_b$ can significantly increase the overall parameter size to potentially overfit the models. We propose to use a mapping $F$ as in Equations (16) and (17). More specifically, we use a 1-layer feedforward neural network with the ReLU activation. This function is also applied to BiLSTM in our sequential inference composition.
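A sketch of this complexity control, with a hypothetical 1-layer ReLU projection $F$ feeding a BiLSTM composer; the 300-D width mirrors the stated experimental setup, but the exact module layout here is an assumption:

```python
import torch
import torch.nn as nn

d = 300
F_map = nn.Sequential(nn.Linear(8 * d, d), nn.ReLU())   # 1-layer ReLU projection F
composer = nn.LSTM(d, d, bidirectional=True, batch_first=True)

m_a = torch.randn(2, 20, 8 * d)   # enhanced premise features, Equations (14)-(15)
v_a, _ = composer(F_map(m_a))     # sequential inference composition
```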
Pooling  Our inference model converts the resulting vectors obtained above to a fixed-length vector with pooling and feeds it to the final classifier to determine the overall inference relationship.

We consider that summation (Parikh et al., 2016) could be sensitive to the sequence length and hence less robust. We instead suggest the following strategy: compute both average and max pooling, and concatenate all these vectors to form the final fixed-length vector $v$. Our experiments show that this leads to significantly better results than summation. The final fixed-length vector $v$ is calculated
as follows:
$$v_{a,\mathrm{ave}} = \sum_{i=1}^{\ell_a} \frac{v_{a,i}}{\ell_a}, \quad v_{a,\max} = \max_{i=1}^{\ell_a} v_{a,i}, \tag{18}$$
$$v_{b,\mathrm{ave}} = \sum_{j=1}^{\ell_b} \frac{v_{b,j}}{\ell_b}, \quad v_{b,\max} = \max_{j=1}^{\ell_b} v_{b,j}, \tag{19}$$
$$v = [v_{a,\mathrm{ave}}; v_{a,\max}; v_{b,\mathrm{ave}}; v_{b,\max}]. \tag{20}$$
Note that for tree composition, Equation (20) is slightly different from that in sequential composition. Our tree composition will concatenate also the hidden states computed for the roots with Equations (16) and (17), which are not shown here. We then put $v$ into a final multilayer perceptron (MLP) classifier. The MLP has a hidden layer with tanh activation and a softmax output layer in our experiments. The entire model (all three components described above) is trained end-to-end. For training, we use multi-class cross-entropy loss.
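The pooling and classification steps of Equations (18)-(20) can be sketched as follows; the classifier widths and placeholder composition outputs are illustrative assumptions:

```python
import torch
import torch.nn as nn

def pool(v_seq):
    """Equations (18)-(20): concatenated average and max pooling over time."""
    return torch.cat([v_seq.mean(dim=1), v_seq.max(dim=1).values], dim=-1)

v_a, v_b = torch.randn(2, 20, 600), torch.randn(2, 15, 600)   # composition outputs
v = torch.cat([pool(v_a), pool(v_b)], dim=-1)                  # fixed-length vector v
classifier = nn.Sequential(nn.Linear(v.size(-1), 300), nn.Tanh(), nn.Linear(300, 3))
logits = classifier(v)                                         # 3-way inference logits
```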
Overall inference models  Our model can be based only on the sequential networks by removing all tree components and we call it the Enhanced Sequential Inference Model (ESIM) (see the left part of Figure 1). We will show that ESIM outperforms all previous results. We will also encode parse information with tree-LSTMs in multiple layers as described (see the right side of Figure 1). We train this model and incorporate it into ESIM by averaging the predicted probabilities to get the final label for a premise-hypothesis pair. We will show that parsing information complements very well with ESIM and further improves the performance, and we call the final model the Hybrid Inference Model (HIM).
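The HIM combination described above is a simple probability average; a toy sketch with placeholder logits standing in for the two trained branches:

```python
import torch

torch.manual_seed(0)
# Placeholder softmax outputs for the sequential (ESIM) and tree branches.
p_esim = torch.softmax(torch.randn(2, 3), dim=-1)
p_tree = torch.softmax(torch.randn(2, 3), dim=-1)
label = (0.5 * (p_esim + p_tree)).argmax(dim=-1)   # final predicted label per pair
```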
# 4 Experimental Setup
Data  The Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) focuses on three basic relationships between a premise and a potential hypothesis: the premise entails the hypothesis (entailment), they contradict each other (contradiction), or they are not related (neutral). The original SNLI corpus contains also "the other" category, which includes the sentence pairs lacking consensus among multiple human annotators. As in the related work, we remove this category. We used the same split as in Bowman et al. (2015) and other previous work.

The parse trees used in this paper are produced by the Stanford PCFG Parser 3.5.3 (Klein and Manning, 2003) and they are delivered as part of the SNLI corpus. We use classification accuracy as the evaluation metric, as in related work.
Training  We use the development set to select models for testing. To help replicate our results, we publish our code1. Below, we list our training details. We use the Adam method (Kingma and Ba, 2014) for optimization. The first momentum is set to be 0.9 and the second 0.999. The initial learning rate is 0.0004 and the batch size is 32. All hidden states of LSTMs, tree-LSTMs, and word embeddings have 300 dimensions.

We use dropout with a rate of 0.5, which is applied to all feedforward connections. We use pre-trained 300-D GloVe 840B vectors (Pennington et al., 2014) to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. All vectors including word embeddings are updated during training.
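For replication-oriented readers, the stated hyperparameters translate directly into an optimizer configuration; the stand-in module below is a placeholder, not the actual ESIM network:

```python
import torch
import torch.nn as nn

# Placeholder module; in the real system this would be the full ESIM network.
model = nn.Sequential(nn.Linear(2400, 300), nn.Tanh(), nn.Linear(300, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=0.0004, betas=(0.9, 0.999))
criterion = nn.CrossEntropyLoss()   # multi-class cross-entropy over the 3 labels
dropout = nn.Dropout(p=0.5)         # applied to all feedforward connections
```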
# 5 Results
Overall performance  Table 1 shows the results of different models. The first row is a baseline classifier presented by Bowman et al. (2015) that considers handcrafted features such as the BLEU score of the hypothesis with respect to the premise, the overlapped words, and the length difference between them, etc.

The next group of models (2)-(7) are based on sentence encoding. The model of Bowman et al. (2016) encodes the premise and hypothesis with two different LSTMs. The model in Vendrov et al. (2015) uses unsupervised "skip-thoughts" pre-training in GRU encoders. The approach proposed by Mou et al. (2016) considers tree-based CNN to capture sentence-level semantics, while the model of Bowman et al. (2016) introduces a stack-augmented parser-interpreter neural network (SPINN) which combines parsing and interpretation within a single tree-sequence hybrid model. The work by Liu et al. (2016) uses BiLSTM to generate sentence representations, and then replaces average pooling with intra-attention. The approach proposed by Munkhdalai and Yu (2016a) presents a memory augmented neural network, neural semantic encoders (NSE), to encode sentences.

The next group of methods in the table, models

1 https://github.com/lukecq1231/nli
Model | #Para. | Train | Test
--- | --- | --- | ---
(1) Handcrafted features (Bowman et al., 2015) | - | 99.7 | 78.2
(2) 300D LSTM encoders (Bowman et al., 2016) | 3.0M | 83.9 | 80.6
(3) 1024D pretrained GRU encoders (Vendrov et al., 2015) | 15M | 98.8 | 81.4
(4) 300D tree-based CNN encoders (Mou et al., 2016) | 3.5M | 83.3 | 82.1
(5) 300D SPINN-PI encoders (Bowman et al., 2016) | 3.7M | 89.2 | 83.2
(6) 600D BiLSTM intra-attention encoders (Liu et al., 2016) | 2.8M | 84.5 | 84.2
(7) 300D NSE encoders (Munkhdalai and Yu, 2016a) | 3.0M | 86.2 | 84.6
(8) 100D LSTM with attention (Rocktäschel et al., 2015) | 250K | 85.3 | 83.5
(9) 300D mLSTM (Wang and Jiang, 2016) | 1.9M | 92.0 | 86.1
(10) 450D LSTMN with deep attention fusion (Cheng et al., 2016) | 3.4M | 88.5 | 86.3
(11) 200D decomposable attention model (Parikh et al., 2016) | 380K | 89.5 | 86.3
(12) Intra-sentence attention + (11) (Parikh et al., 2016) | 580K | 90.5 | 86.8
(13) 300D NTI-SLSTM-LSTM (Munkhdalai and Yu, 2016b) | 3.2M | 88.5 | 87.3
(14) 300D re-read LSTM (Sha et al., 2016) | 2.0M | 90.7 | 87.5
(15) 300D btree-LSTM encoders (Paria et al., 2016) | 2.0M | 88.6 | 87.6
(16) 600D ESIM | 4.3M | 92.6 | 88.0
(17) HIM (600D ESIM + 300D Syntactic tree-LSTM) | 7.7M | 93.5 | 88.6
Table 1: Accuracies of the models on SNLI. Our final model achieves the accuracy of 88.6%, the best result observed on SNLI, while our enhanced sequential encoding model attains an accuracy of 88.0%, which also outperforms the previous models.
(8)-(15), are inter-sentence attention-based models. The model marked with Rocktäschel et al. (2015) is LSTMs enforcing the so-called word-by-word attention. The model of Wang and Jiang (2016) extends this idea to explicitly enforce word-by-word matching between the hypothesis and the premise. Long short-term memory-networks (LSTMN) with deep attention fusion (Cheng et al., 2016) link the current word to previous words stored in memory. Parikh et al. (2016) proposed a decomposable attention model without relying on any word-order information. In general, adding intra-sentence attention yields further improvement, which is not very surprising as it could help align the relevant text spans between premise and hypothesis. The model of Munkhdalai and Yu (2016b) extends the framework of Wang and Jiang (2016) to a full n-ary tree model and achieves further improvement. Sha et al. (2016) propose a special LSTM variant which considers the attention vector of another sentence as an inner state of LSTM. Paria et al. (2016) use a neural architecture with complete binary tree-LSTM encoders without syntactic information.

We ensemble our ESIM model with syntactic tree-LSTMs (Zhu et al., 2015) based on syntactic parse trees and achieve significant improvement over our best sequential encoding model ESIM, attaining an accuracy of 88.6%. This shows that syntactic tree-LSTMs complement well with ESIM.
Model | Train | Test
--- | --- | ---
(17) HIM (ESIM + syn.tree) | 93.5 | 88.6
(18) ESIM + tree | 91.9 | 88.2
(16) ESIM | 92.6 | 88.0
(19) ESIM - ave./max | 92.9 | 87.1
(20) ESIM - diff./prod. | 91.5 | 87.0
(21) ESIM - inference BiLSTM | 91.3 | 87.3
(22) ESIM - encoding BiLSTM | 88.7 | 86.3
(23) ESIM - P-based attention | 91.6 | 87.2
(24) ESIM - H-based attention | 91.4 | 86.5
(25) syn.tree | 92.9 | 87.8
Table 2: Ablation performance of the models.
The table shows that our ESIM model achieves an accuracy of 88.0%, which has already outperformed all the previous models, including those using much more complicated network architectures (Munkhdalai and Yu, 2016b).

Ablation analysis  We further analyze the major components that are of importance to help us achieve good performance. From the best model, we first replace the syntactic tree-LSTM with the full tree-LSTM without encoding syntactic parse information. More specifically, two adjacent words in a sentence are merged to form a parent node, and
[Figure 3 graphics omitted. Panels: (a) binarized constituency tree of the premise; (b) binarized constituency tree of the hypothesis; (c) normalized attention weights of tree-LSTM; (d) input gate of tree-LSTM in inference composition (l2-norm); (e) input gate of BiLSTM in inference composition (l2-norm); (f) normalized attention weights of BiLSTM.]
Figure 3: An example for analysis. Subfigures (a) and (b) are the constituency parse trees of the premise and hypothesis, respectively. "-" means a non-leaf or a null node. Subfigures (c) and (f) are attention visualization of the tree model and ESIM, respectively. The darker the color, the greater the value. The premise is on the x-axis and the hypothesis is on the y-axis. Subfigures (d) and (e) are input gates' l2-norm of tree-LSTM and BiLSTM in inference composition, respectively.
this process continues and results in a full binary tree, where padding nodes are inserted when there are not enough leaves to form a full tree. Each tree node is implemented with a tree-LSTM block (Zhu et al., 2015) same as in model (17). Table 2 shows that with this replacement, the performance drops to 88.2%.
Furthermore, we note the importance of the layer performing the enhancement for local inference information in Section 3.2 and the pooling layer in inference composition in Section 3.3. Table 2 suggests that the NLI task seems very sensitive to the layers. If we remove the pooling layer in inference composition and replace it with summation as in Parikh et al. (2016), the accuracy drops to 87.1%. If we remove the difference and element-wise product from the local inference enhancement layer, the accuracy drops to 87.0%. To provide some detailed comparison with Parikh et al. (2016), replacing bidirectional LSTMs in inference composition and also input encoding with feedforward neural networks reduces the accuracy to 87.3% and 86.3%, respectively.
The difference between ESIM and each of the other models listed in Table 2 is statistically significant under the one-tailed paired t-test at the 99% significance level. The difference between models (17) and (18) is also significant at the same level. Note that we cannot perform a significance test between our models and the other models listed in Table 1 since we do not have the output of the other models.

If we remove the premise-based attention from ESIM (model 23), the accuracy drops to 87.2% on the test set. The premise-based attention means that when the system reads a word in a premise, it uses soft attention to consider all relevant words in the hypothesis. Removing the hypothesis-based attention (model 24) decreases the accuracy to 86.5%, where hypothesis-based attention is the attention performed in the other direction for the sentence pairs. The results show that removing hypothesis-based attention affects the performance of our model more, but removing the attention from the other direction impairs the performance too.

The stand-alone syntactic tree-LSTM model achieves an accuracy of 87.8%, which is comparable to that of ESIM. We also computed the oracle score of merging syntactic tree-LSTM and ESIM, which picks the right answer if either is right. Such an oracle/upper-bound accuracy on the test set is 91.7%, which suggests how much tree-LSTM and ESIM could ideally complement each other. As far as speed is concerned, training tree-LSTM takes about 40 hours on an Nvidia Tesla K40M and ESIM takes about 6 hours, which is easily extended to a larger scale of data.
Further analysis  We showed that encoding syntactic parsing information helps recognize natural language inference: it additionally improves the strong system. Figure 3 shows an example where tree-LSTM makes a different and correct decision. In subfigure (d), the larger values at the input gates on nodes 9 and 10 indicate that those nodes are important in making the final decision. We observe that in subfigure (c), nodes 9 and 10 are aligned to node 29 in the premise. Such information helps the system decide that this pair is a contradiction. Accordingly, in subfigure (e) of sequential BiLSTM, the words sitting and down do not play an important role in making the final decision. Subfigure (f) shows that sitting is equally aligned with reading and standing and the alignment for the word down is not that useful.
# 6 Conclusions and Future Work
We propose neural network models for natural language inference, which achieve the best results reported on the SNLI benchmark. The results are first achieved through our enhanced sequential inference model, which outperformed the previous models, including those employing more complicated network architectures, suggesting that the potential of sequential inference models has not been fully exploited yet. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. Particularly, incorporating syntactic parsing information contributes to our best result: it further improves the performance even when added to the already very strong model.

Future work interesting to us includes exploring the usefulness of external resources such as WordNet and contrasting-meaning embeddings (Chen et al., 2015) to help increase the coverage of word-level inference relations. Modeling negation more closely within neural network frameworks (Socher et al., 2013; Zhu et al., 2014) may help contradiction detection.
# Acknowledgments
The first and the third author of this paper were supported in part by the Science and Technology Development of Anhui Province, China (Grant No. 2014z02006), the Fundamental Research Funds for the Central Universities (Grant No. WK2350000001), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02070006).
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473.
Samuel Bowman, Gabor Angeli, Christopher Potts, and D. Christopher Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics, pages 632â642. https://doi.org/10.18653/v1/D15-1075.
Samuel Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, D. Christopher Manning, and Christopher Potts. 2016. A fast uniï¬ed model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1466â1477. https://doi.org/10.18653/v1/P16-1139.
William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016. IEEE, pages 4960–4964. https://doi.org/10.1109/ICASSP.2016.7472621.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural net- works for modeling document. In Subbarao Kamb- hampati, editor, Proceedings of the Twenty-Fifth International Joint Conference on Artiï¬cial Intel- ligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016. IJCAI/AAAI Press, pages 2754â2760. http://www.ijcai.org/Abstract/16/391.
Zhigang Chen, Wei Lin, Qian Chen, Xiaoping Chen, Si Wei, Hui Jiang, and Xiaodan Zhu. 2015. Re- visiting word embedding for contrasting meaning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers). Associ- ation for Computational Linguistics, pages 106â115. https://doi.org/10.3115/v1/P15-1011.
Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 551â561. http://aclweb.org/anthology/D16-1053.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the proper- ties of neural machine translation: Encoder-decoder approaches. In Dekai Wu, Marine Carpuat, Xavier Carreras, and Eva Maria Vecchi, editors, Proceed- ings of SSST@EMNLP 2014, Eighth Workshop on
Syntax, Semantics and Structure in Statistical Trans- lation, Doha, Qatar, 25 October 2014. Associ- ation for Computational Linguistics, pages 103â 111. http://aclweb.org/anthology/W/W14/W14- 4012.pdf.
Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 577–585. http://papers.nips.cc/paper/5847-attention-based-models-for-speech-recognition.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Eval- uating Predictive Uncertainty, Visual Object Classi- ï¬cation and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers. pages 177â190.
Lorenzo Ferrone and Massimo Fabio Zanzotto. 2014. Towards syntax-aware compositional distributional semantic models. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City Univer- sity and Association for Computational Linguistics, http://aclweb.org/anthology/C14- pages 721â730. 1068.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735.
Adrian Iftene and Alexandra Balahur-Dobrescu. 2007. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, Association for Computational Linguistics, chapter Hypothe- sis Transformation and Semantic Variability Rules Used in Recognizing Textual Entailment, pages 125â 130. http://aclweb.org/anthology/W07-1421.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: CoRR A method for stochastic optimization. abs/1412.6980. http://arxiv.org/abs/1412.6980.
Dan Klein and Christopher D. Manning. 2003. Ac- curate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computa- tional Linguistics. http://aclweb.org/anthology/P03- 1054.
Phong Le and Willem Zuidema. 2015. Compositional distributional semantics with long short term mem- ory. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics. Associ- ation for Computational Linguistics, pages 10â19. https://doi.org/10.18653/v1/S15-1002.
Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR abs/1605.09090. http://arxiv.org/abs/1605.09090.
Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford University.
Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, COLING '08, pages 521–528. http://dl.acm.org/citation.cfm?id=1599081.1599147.
Yashar Mehdad, Alessandro Moschitti, and Mas- simo Fabio Zanzotto. 2010. Syntactic/semantic In structures for textual entailment recognition. Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Associ- ation for Computational Linguistics, pages 1020â 1028. http://aclweb.org/anthology/N10-1146.
Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuris- the 54th An- In Proceedings of tic matching. nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Associa- tion for Computational Linguistics, pages 130â136. https://doi.org/10.18653/v1/P16-2022.
Tsendsuren Munkhdalai and Hong Yu. 2016a. Neu- CoRR abs/1607.04315. ral semantic encoders. http://arxiv.org/abs/1607.04315.
Tsendsuren Munkhdalai and Hong Yu. 2016b. Neu- ral tree indexers for text understanding. CoRR abs/1607.04492. http://arxiv.org/abs/1607.04492.
Biswajit Paria, K. M. Annervaz, Ambedkar Dukkipati, Ankush Chatterjee, and Sanjay Podder. 2016. A neu- ral architecture mimicking humans end-to-end for natural language inference. CoRR abs/1611.04741. http://arxiv.org/abs/1611.04741.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention In Proceed- model for natural language inference. ings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, pages 2249â2255. http://aclweb.org/anthology/D16-1244.
Barbara Partee. 1995. Lexical semantics and composi- tionality. Invitation to Cognitive Science 1:311â360.
Jeffrey Pennington, Richard Socher, and Christo- GloVe: Global vectors pher Manning. 2014. In Proceedings of the for word representation. 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP). Association
for Computational Linguistics, pages 1532â1543. https://doi.org/10.3115/v1/D14-1162.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. CoRR abs/1509.06664. http://arxiv.org/abs/1509.06664.
Alexander Rush, Sumit Chopra, and Jason We- ston. 2015. A neural attention model for ab- In Proceed- stractive sentence summarization. ings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing. Associa- tion for Computational Linguistics, pages 379â389. https://doi.org/10.18653/v1/D15-1044.
Lei Sha, Baobao Chang, Zhifang Sui, and Sujian Li. 2016. Reading and thinking: Re-read LSTM unit In Proceedings for textual entailment recognition. of COLING 2016, the 26th International Confer- ence on Computational Linguistics: Technical Pa- pers. The COLING 2016 Organizing Committee, pages 2870â2879. http://aclweb.org/anthology/C16- 1270.
Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natu- ral scenes and natural language with recursive neu- In Lise Getoor and Tobias Scheffer, ral networks. editors, Proceedings of the 28th International Con- ference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011. Omnipress, pages 129â136.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, D. Christopher Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 Conference on bank. Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1631â1642. http://aclweb.org/anthology/D13-1170.
Kai Sheng Tai, Richard Socher, and D. Christopher Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1556–1566. https://doi.org/10.3115/v1/P15-1150.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. CoRR abs/1511.06361. http://arxiv.org/abs/1511.06361.
Shuohang Wang and Jing Jiang. 2016. Learning nat- In Proceed- ural language inference with LSTM. ings of the 2016 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies. Asso- ciation for Computational Linguistics, pages 1442â 1451. https://doi.org/10.18653/v1/N16-1170.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 2015. pages 2048–2057. http://jmlr.org/proceedings/papers/v37/xuc15.html.
Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, Si Wei, and Hui Jiang. 2017. Exploring question understanding and adaptation in neural-network-based question answering. CoRR abs/1703.04617v2. http://arxiv.org/abs/1703.04617.
Xiaodan Zhu, Hongyu Guo, Saif Mohammad, and Svet- lana Kiritchenko. 2014. An empirical study on the effect of negation words on sentiment. In Proceed- ings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 304â313. https://doi.org/10.3115/v1/P14- 1029.
Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of the 32nd International ICML 2015, Conference on Machine Learning, Lille, France, 6-11 July 2015. pages 1604â1612. http://jmlr.org/proceedings/papers/v37/zhub15.html. | {
"id": "1703.04617"
} |
1609.04836 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | 7 1 0 2
# b e F 9
] G L . s c [
2 v 6 3 8 4 0 . 9 0 6 1 : v i X r a
Published as a conference paper at ICLR 2017
# ON LARGE-BATCH TRAINING FOR DEEP LEARNING: GENERALIZATION GAP AND SHARP MINIMA
Nitish Shirish Keskar* Northwestern University Evanston, IL 60208 keskar.nitish@u.northwestern.edu
Dheevatsa Mudigere Intel Corporation Bangalore, India dheevatsa.mudigere@intel.com
Jorge Nocedal Northwestern University Evanston, IL 60208 j-nocedal@northwestern.edu
Mikhail Smelyanskiy Intel Corporation Santa Clara, CA 95054 mikhail.smelyanskiy@intel.com
# Ping Tak Peter Tang Intel Corporation Santa Clara, CA 95054 peter.tang@intel.com
# ABSTRACT
The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say 32–512 data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions, and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.
# 1 INTRODUCTION
Deep Learning has emerged as one of the cornerstones of large-scale machine learning. Deep Learning models are used for achieving state-of-the-art results on a wide variety of tasks including computer vision, natural language processing and reinforcement learning; see (Bengio et al., 2016) and the references therein. The problem of training these networks is one of non-convex optimization. Mathematically, this can be represented as:
$$\min_{x} \; f(x) := \frac{1}{M} \sum_{i=1}^{M} f_i(x), \tag{1}$$
where $f_i$ is a loss function for data point $i \in \{1, 2, \cdots, M\}$ which captures the deviation of the model prediction from the data, and $x$ is the vector of weights being optimized. The process of optimizing this function is also called training of the network. Stochastic Gradient Descent (SGD) (Bottou, 1998; Sutskever et al., 2013) and its variants are often used for training deep networks.
*Work was performed when the author was an intern at Intel Corporation.
These methods minimize the objective function f by iteratively taking steps of the form:
$$x_{k+1} = x_k - \alpha_k \left( \frac{1}{|B_k|} \sum_{i \in B_k} \nabla f_i(x_k) \right), \tag{2}$$
where $B_k \subset \{1, 2, \cdots, M\}$ is the batch sampled from the data set and $\alpha_k$ is the step size at iteration $k$. These methods can be interpreted as gradient descent using noisy gradients, which are often referred to as mini-batch gradients with batch size $|B_k|$. SGD and its variants are employed in a small-batch regime, where $|B_k| \ll M$ and typically $|B_k| \in \{32, 64, \cdots, 512\}$. These configurations have been successfully used in practice for a large number of applications; see e.g. (..., 2013). Many theoretical properties of these methods are known. These include guarantees of: (a) convergence to minimizers of strongly-convex functions and to stationary points for non-convex functions (Bottou et al., 2016), (b) saddle-point avoidance (Ge et al., 2015; Lee et al., 2016), and (c) robustness to input data (Hardt et al., 2015).
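As a concrete illustration of Equation (2), the following self-contained sketch runs mini-batch SGD on a toy least-squares problem; the data, learning rate, and batch size are illustrative choices, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 10))            # toy data set: M = 1000 examples
y = rng.standard_normal(1000)

def grad_fi(x, i):                              # gradient of f_i(x) = 0.5 (a_i^T x - y_i)^2
    return A[i] * (A[i] @ x - y[i])

def sgd_step(x, batch_size, lr):
    """One iteration of Equation (2): average gradients over a sampled batch B_k."""
    batch = rng.choice(len(y), size=batch_size, replace=False)
    g = np.mean([grad_fi(x, i) for i in batch], axis=0)
    return x - lr * g

x = np.zeros(10)
for _ in range(200):
    x = sgd_step(x, batch_size=32, lr=0.05)     # |B_k| = 32: the small-batch regime
```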
Stochastic gradient methods have, however, a major drawback: owing to the sequential nature of the iteration and small batch sizes, there is limited avenue for parallelization. While some efforts have been made to parallelize SGD for Deep Learning (Dean et al., 2012; Das et al., 2016; Zhang et al., 2015), the speed-ups and scalability obtained are often limited by the small batch sizes. One natural avenue for improving parallelism is to increase the batch size $|B_k|$. This increases the amount of computation per iteration, which can be effectively distributed. However, practitioners have observed that this leads to a loss in generalization performance; see e.g. (LeCun et al., 2012). In other words, the performance of the model on testing data sets is often worse when trained with large-batch methods as compared to small-batch methods. In our experiments, we have found the drop in generalization (also called generalization gap) to be as high as 5% even for smaller networks.

In this paper, we present numerical results that shed light into this drawback of large-batch methods. We observe that the generalization gap is correlated with a marked sharpness of the minimizers obtained by large-batch methods. This motivates efforts at remedying the generalization problem, as a training algorithm that employs large batches without sacrificing generalization performance would have the ability to scale to a much larger number of nodes than is possible today. This could potentially reduce the training time by orders of magnitude; we present an idealized performance model in Appendix C to support this claim.

The paper is organized as follows. In the remainder of this section, we define the notation used in this paper, and in Section 2 we present our main findings and their supporting numerical evidence. In Section 3 we explore the performance of small-batch methods, and in Section 4 we briefly discuss the relationship between our results and recent theoretical work. We conclude with open questions concerning the generalization gap, sharp minima, and possible modifications to make large-batch training viable. In Appendix E, we present some attempts to overcome the problems of large-batch training.
1.1 NOTATION
We use the notation $f_i$ to denote the composition of loss function and a prediction function corresponding to the $i$th data point. The vector of weights is denoted by $x$ and is subscripted by $k$ to denote an iteration. We use the term small-batch (SB) method to denote SGD, or one of its variants like ADAM (Kingma & Ba, 2015) and ADAGRAD (Duchi et al., 2011), with the proviso that the gradient approximation is based on a small mini-batch. In our setup, the batch $B_k$ is randomly sampled and its size is kept fixed for every iteration. We use the term large-batch (LB) method to denote any training algorithm that uses a large mini-batch. In our experiments, ADAM is used to explore the behavior of both a small or a large batch method.
2 DRAWBACKS OF LARGE-BATCH METHODS
2.1 OUR MAIN OBSERVATION
As mentioned in Section 1, practitioners have observed a generalization gap when using large-batch methods for training deep learning models. Interestingly, this is despite the fact that large-batch methods usually yield a similar value of the training function as small-batch methods. One may put
forth the following as possible causes for this phenomenon: (i) LB methods over-fit the model; (ii) LB methods are attracted to saddle points; (iii) LB methods lack the explorative properties of SB methods and tend to zoom-in on the minimizer closest to the initial point; (iv) SB and LB methods converge to qualitatively different minimizers with differing generalization properties. The data presented in this paper supports the last two conjectures.
The main observation of this paper is as follows:
The lack of generalization ability is due to the fact that large-batch methods tend to converge to sharp minimizers of the training function. These minimizers are characterized by a significant number of large positive eigenvalues of ∇²f(x), and tend to generalize less well. In contrast, small-batch methods converge to flat minimizers characterized by having numerous small eigenvalues of ∇²f(x). We have observed that the loss function landscape of deep neural networks is such that large-batch methods are attracted to regions with sharp minimizers and that, unlike small-batch methods, they are unable to escape the basins of attraction of these minimizers.
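To make the eigenvalue-based notion of sharpness concrete, the dominant eigenvalue of ∇²f(x) can be estimated without ever forming the Hessian, for instance by power iteration on finite-difference Hessian-vector products. The following is a minimal numpy sketch, not the procedure used in this paper; grad_f is a hypothetical function returning ∇f(x) for a flattened weight vector.

import numpy as np

def hessian_vector_product(grad_f, x, v, delta=1e-5):
    # Central finite difference: H v ~ (grad_f(x + delta*v) - grad_f(x - delta*v)) / (2*delta)
    return (grad_f(x + delta * v) - grad_f(x - delta * v)) / (2.0 * delta)

def largest_hessian_eigenvalue(grad_f, x, n_iters=50, seed=0):
    # Power iteration; a large value suggests a sharp minimizer, a small one a flat minimizer.
    v = np.random.RandomState(seed).randn(x.size)
    v /= np.linalg.norm(v)
    eigenvalue = 0.0
    for _ in range(n_iters):
        hv = hessian_vector_product(grad_f, x, v)
        eigenvalue = float(v @ hv)          # Rayleigh quotient estimate
        v = hv / (np.linalg.norm(hv) + 1e-12)
    return eigenvalue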
The concept of sharp and flat minimizers has been discussed in the statistics and machine learning literature. (Hochreiter & Schmidhuber, 1997) (informally) define a flat minimizer x̄ as one for which the function varies slowly in a relatively large neighborhood of x̄. In contrast, a sharp minimizer x̂ is such that the function increases rapidly in a small neighborhood of x̂. A flat minimum can be described with low precision, whereas a sharp minimum requires high precision. The large sensitivity of the training function at a sharp minimizer negatively impacts the ability of the trained model to generalize on new data; see Figure 1 for a hypothetical illustration. This can be explained through the lens of the minimum description length (MDL) theory, which states that statistical models that require fewer bits to describe (i.e., are of low complexity) generalize better (Rissanen, 1983). Since flat minimizers can be specified with lower precision than sharp minimizers, they tend to have better generalization performance. Alternative explanations are proffered through the Bayesian view of learning (MacKay, 1992), and through the lens of free Gibbs energy; see e.g. Chaudhari et al. (2016).
Figure 1: A conceptual sketch of flat and sharp minima of the training function f(x), shown together with the testing function. The Y-axis indicates the value of the loss function and the X-axis the variables (parameters).
2.2 NUMERICAL EXPERIMENTS
In this section, we present numerical results to support the observations made above. To this end, we make use of the visualization technique employed by (Goodfellow et al., 2014b) and a proposed heuristic metric of sharpness (Equation (4)). We consider 6 multi-class classification network configurations for our experiments; they are described in Table 1. The details about the data sets and network configurations are presented in Appendices A and B respectively. As is common for such problems, we use the mean cross entropy loss as the objective function f.
The networks were chosen to exemplify popular configurations used in practice like AlexNet (Krizhevsky et al., 2012) and VGGNet (Simonyan & Zisserman, 2014).

Table 1: Network Configurations

Name   Network Type               Architecture   Data set
F1     Fully Connected            Section B.1    MNIST (LeCun et al., 1998a)
F2     Fully Connected            Section B.2    TIMIT (Garofolo et al., 1993)
C1     (Shallow) Convolutional    Section B.3    CIFAR-10 (Krizhevsky & Hinton, 2009)
C2     (Deep) Convolutional       Section B.4    CIFAR-10
C3     (Shallow) Convolutional    Section B.3    CIFAR-100 (Krizhevsky & Hinton, 2009)
C4     (Deep) Convolutional       Section B.4    CIFAR-100

Results on other networks and using other initialization strategies, activation functions, and data sets showed similar behavior. Since the goal of our work is not to achieve state-of-the-art accuracy or time-to-solution on these tasks but rather to characterize the nature of the minima for LB and SB methods, we only describe the final testing accuracy in the main paper and ignore convergence trends.
For all experiments, we used 10% of the training data as the batch size for the large-batch experiments and 256 data points for the small-batch experiments. We used the ADAM optimizer for both regimes. Experiments with other optimizers for the large-batch experiments, including ADAGRAD (Duchi et al., 2011), SGD (Sutskever et al., 2013) and adaQN (Keskar & Berahas, 2016), led to similar results. All experiments were conducted 5 times from different (uniformly distributed random) starting points and we report both the mean and standard deviation of the measured quantities. The baseline performance for our setup is presented in Table 2. From this, we can observe that on all networks, both approaches led to high training accuracy but there is a significant difference in generalization performance. The networks were trained, without any budget or limits, until the loss function ceased to improve.
Table 2: Performance of small-batch (SB) and large-batch (LB) variants of ADAM on the 6 networks listed in Table 1
        Training Accuracy                      Testing Accuracy
Name    SB                 LB                  SB                 LB
F1      99.66% ± 0.05%     99.92% ± 0.01%      98.03% ± 0.07%     97.81% ± 0.07%
F2      99.99% ± 0.03%     98.35% ± 2.08%      64.02% ± 0.2%      59.45% ± 1.05%
C1      99.89% ± 0.02%     99.66% ± 0.2%       80.04% ± 0.12%     77.26% ± 0.42%
C2      99.99% ± 0.04%     99.99% ± 0.01%      89.24% ± 0.12%     87.26% ± 0.07%
C3      99.56% ± 0.44%     99.88% ± 0.30%      49.58% ± 0.39%     46.45% ± 0.43%
C4      99.10% ± 1.23%     99.57% ± 1.84%      63.08% ± 0.5%      57.81% ± 0.17%
We emphasize that the generalization gap is not due to over-fitting or over-training as commonly observed in statistics. That phenomenon manifests itself in the form of a testing accuracy curve that peaks at a certain iterate and then decays due to the model learning idiosyncrasies of the training data. This is not what we observe in our experiments; see Figure 2 for the training and testing curves of the F2 and C1 networks, which are representative of the rest. As such, early-stopping heuristics aimed at preventing models from over-fitting would not help reduce the generalization gap. The difference between the training and testing accuracies for the networks is due to the specific choice of the network (e.g. AlexNet, VGGNet etc.) and is not the focus of this study. Rather, our goal is to study the source of the testing performance disparity of the two regimes, SB and LB, on a given network model.
# 2.2.1 PARAMETRIC PLOTS
We first present parametric 1-D plots of the function as described in (Goodfellow et al., 2014b). Let x*s and x*l indicate the solutions obtained by running ADAM using small and large batch sizes respectively. We plot the loss function, on both training and testing data sets, along a line segment containing the two points. Specifically, for α ∈ [−1, 2], we plot the function f(αx*l + (1 − α)x*s) and also superimpose the classification accuracy at the intermediate points; see Figure 3.1
1The code to reproduce the parametric plot on exemplary networks can be found in our GitHub repository: https://github.com/keskarnitish/large-batch-training.
Figure 2: Training and testing accuracy for SB and LB methods as a function of epochs; (a) Network F2, (b) Network C1.
For this experiment, we randomly chose a pair of SB and LB minimizers from the 5 trials used to generate the data in Table 2. The plots show that the LB minima are strikingly sharper than the SB minima in this one-dimensional manifold. The plots in Figure 3 only explore a linear slice of the function, but in Figure 7 in Appendix D we plot f(sin(απ/2)x*l + cos(απ/2)x*s) to monitor the function along a curved path between the two minimizers. There too, the relative sharpness of the minima is evident.
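A minimal matplotlib sketch of such a linear interpolation plot is given below. The toy quadratic is a stand-in for illustration only; in practice x_sb and x_lb would be the flattened SB and LB solutions, and loss() would evaluate the mean cross entropy on a data set.

import numpy as np
import matplotlib.pyplot as plt

# Toy stand-ins for the SB / LB solutions and the network loss.
x_sb, x_lb = np.array([0.0, 0.0]), np.array([1.0, 1.0])
loss = lambda x: float(np.sum(np.array([1.0, 50.0]) * (x - x_lb) ** 2))

alphas = np.linspace(-1.0, 2.0, 200)
values = [loss(a * x_lb + (1 - a) * x_sb) for a in alphas]

plt.plot(alphas, values)            # alpha = 0: SB minimizer, alpha = 1: LB minimizer
plt.xlabel("alpha")
plt.ylabel("cross-entropy loss")
plt.show()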
2.2.2 SHARPNESS OF MINIMA
So far, we have used the term sharp minimizer loosely, but we noted that this concept has received attention in the literature (Hochreiter & Schmidhuber, 1997). Sharpness of a minimizer can be characterized by the magnitude of the eigenvalues of ∇²f(x), but given the prohibitive cost of this computation in deep learning applications, we employ a sensitivity measure that, although imperfect, is computationally feasible, even for large networks. It is based on exploring a small neighborhood of a solution and computing the largest value that the function f can attain in that neighborhood. We use that value to measure the sensitivity of the training function at the given local minimizer. Now, since the maximization process is not accurate, and to avoid being misled by the case when a large value of f is attained only in a tiny subspace of Rⁿ, we perform the maximization both in the entire space Rⁿ as well as in random manifolds. For that purpose, we introduce an n × p matrix A, whose columns are randomly generated. Here p determines the dimension of the manifold, which in our experiments is chosen as p = 100.
Specifically, let Cε denote a box around the solution over which the maximization of f is performed, and let A ∈ R^(n×p) be the matrix defined above. In order to ensure invariance of sharpness to problem dimension and sparsity, we define the constraint set Cε as:

Cε = {z ∈ R^p : −ε(|(A⁺x)i| + 1) ≤ zi ≤ ε(|(A⁺x)i| + 1)  ∀ i ∈ {1, 2, · · · , p}},    (3)

where A⁺ denotes the pseudo-inverse of A. Thus ε controls the size of the box. We can now define our measure of sharpness (or sensitivity).

Metric 2.1. Given x ∈ Rⁿ, ε > 0 and A ∈ R^(n×p), we define the (Cε, A)-sharpness of f at x as:

φx,f(ε, A) := ((max_{y∈Cε} f(x + Ay)) − f(x)) / (1 + f(x)) × 100.    (4)

Unless specified otherwise, we use this metric for sharpness for the rest of the paper; if A is not specified, it is assumed to be the identity matrix, In. (We note in passing that, in the convex optimization literature, the term sharp minimum has a different definition (Ferris, 1988), but that concept is not useful for our purposes.)
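A minimal numpy/scipy sketch of Metric 2.1 might look as follows; f is a hypothetical function evaluating the training loss for a flattened weight vector x, and, as described next, the inner maximization is solved inexactly with a few iterations of L-BFGS-B.

import numpy as np
from scipy.optimize import minimize

def sharpness(f, x, eps=1e-3, A=None, max_inner_iters=10):
    # (C_eps, A)-sharpness of f at x (Metric 2.1); A defaults to the identity (full space).
    if A is None:
        A = np.eye(x.size)
    A_pinv = np.linalg.pinv(A)
    p = A.shape[1]
    # Box constraints C_eps from Equation (3).
    half_width = eps * (np.abs(A_pinv @ x) + 1.0)
    bounds = [(-h, h) for h in half_width]
    # Maximize f(x + A y) over the box by minimizing its negation.
    result = minimize(lambda y: -f(x + A @ y), np.zeros(p),
                      method="L-BFGS-B", bounds=bounds,
                      options={"maxiter": max_inner_iters})
    f_max = -result.fun
    return 100.0 * (f_max - f(x)) / (1.0 + f(x))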
In Tables 3 and 4, we present the values of the sharpness metric (4) for the minimizers of the various problems. Table 3 explores the full space (i.e., A = I) whereas Table 4 uses a randomly sampled n × 100 dimensional matrix A. We report results with two values of ε, (10⁻³, 5·10⁻⁴). In all experiments, we solve the maximization problem in Equation (4) inexactly by applying 10 iterations of L-BFGS-B (Byrd et al., 1995).
Figure 3: Parametric plots, linear, for networks (a) F1, (b) F2, (c) C1, (d) C2, (e) C3, (f) C4. The left vertical axis corresponds to the cross-entropy loss, f, and the right vertical axis to the classification accuracy; solid lines indicate the training data set and dashed lines the testing data set. α = 0 corresponds to the SB minimizer and α = 1 to the LB minimizer.
This limit on the number of iterations was necessitated by the large cost of evaluating the true objective f. Both tables show a 1-2 order-of-magnitude difference between the values of our metric for the SB and LB regimes. These results reinforce the view that the solutions obtained by a large-batch method define points of larger sensitivity of the training function. In Appendix E, we describe approaches that attempt to remedy this generalization problem of LB methods. These approaches include data augmentation, conservative training and adversarial training. Our preliminary findings show that these approaches help reduce the generalization gap but still lead to relatively sharp minimizers and, as such, do not completely remedy the problem. Note that Metric 2.1 is closely related to the spectrum of ∇²f(x). Assuming ε to be small enough, when A = In, the value (4) relates to the largest eigenvalue of ∇²f(x) and when A is randomly sampled it approximates the Ritz value of ∇²f(x) projected onto the column-space of A.
Table 3: Sharpness of Minima in Full Space; ε is defined in (3).
        ε = 10⁻³                              ε = 5·10⁻⁴
Name    SB                LB                  SB               LB
F1      1.23 ± 0.83       205.14 ± 69.52      0.61 ± 0.27      42.90 ± 17.14
F2      1.39 ± 0.02       310.64 ± 38.46      0.90 ± 0.05      93.15 ± 6.81
C1      28.58 ± 3.13      707.23 ± 43.04      7.08 ± 0.88      227.31 ± 23.23
C2      8.68 ± 1.32       925.32 ± 38.29      2.07 ± 0.86      175.31 ± 18.28
C3      29.85 ± 5.98      258.75 ± 8.96       8.56 ± 0.99      105.11 ± 13.22
C4      12.83 ± 3.84      421.84 ± 36.97      4.07 ± 0.87      109.35 ± 16.57
Table 4: Sharpness of Minima in Random Subspaces of Dimension 100
        ε = 10⁻³                              ε = 5·10⁻⁴
Name    SB                LB                  SB               LB
F1      0.11 ± 0.00       9.22 ± 0.56         0.05 ± 0.00      9.17 ± 0.14
F2      0.29 ± 0.02       23.63 ± 0.54        0.05 ± 0.00      6.28 ± 0.19
C1      2.18 ± 0.23       137.25 ± 21.60      0.71 ± 0.15      29.50 ± 7.48
C2      0.95 ± 0.34       25.09 ± 2.61        0.31 ± 0.08      5.82 ± 0.52
C3      17.02 ± 2.20      236.03 ± 31.26      4.03 ± 1.45      86.96 ± 27.39
C4      6.05 ± 1.13       72.99 ± 10.96       1.89 ± 0.33      19.85 ± 4.12
We conclude this section by noting that the sharp minimizers identified in our experiments do not resemble a cone, i.e., the function does not increase rapidly along all (or even most) directions. By sampling the loss function in a neighborhood of LB solutions, we observe that it rises steeply only along a low-dimensional subspace (e.g. 5% of the whole space); in most other directions, the function is relatively flat.
# 3 SUCCESS OF SMALL-BATCH METHODS
It is often reported that when increasing the batch size for a problem, there exists a threshold after which there is a deterioration in the quality of the model. This behavior can be observed for the F2 and C1 networks in Figure 4. In both of these experiments, there is a batch size (≈ 15000 for F2 and ≈ 500 for C1) after which there is a large drop in testing accuracy. Notice also that the upward drift in the value of the sharpness is considerably reduced around this threshold. Similar thresholds exist for the other networks in Table 1.

Let us now consider the behavior of SB methods, which use noisy gradients in the step computation. From the results reported in the previous section, it appears that noise in the gradient pushes the iterates out of the basin of attraction of sharp minimizers and encourages movement towards a flatter minimizer where noise will not cause exit from that basin. When the batch size is greater than the threshold mentioned above, the noise in the stochastic gradient is not sufficient to cause ejection from the initial basin, leading to convergence to a sharper minimizer.
To explore this in more detail, consider the following experiment. We train the network for 100 epochs using ADAM with a batch size of 256, and retain the iterate after each epoch in memory. Using these 100 iterates as starting points, we train the network using a LB method for 100 epochs and obtain 100 piggybacked (or warm-started) large-batch solutions. We plot in Figure 5 the testing accuracy and sharpness of these large-batch solutions, along with the testing accuracy of the small-batch iterates. Note that when warm-started with only a few initial epochs, the LB method does not yield a generalization improvement. The concomitant sharpness of the iterates also stays high. On the other hand, after a certain number of epochs of warm-starting, the accuracy improves and the sharpness of the large-batch iterates drops. This happens, apparently, when the SB method has ended its exploration phase and discovered a flat minimizer; the LB method is then able to converge towards it, leading to good testing accuracy.
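In outline, this experiment can be expressed as follows; initialize_network, train_epochs, evaluate and num_train are hypothetical helpers standing in for ADAM training, for measuring testing accuracy and sharpness, and for the training set size.

# Small-batch phase: record the iterate after every epoch.
sb_iterates = []
params = initialize_network()
for epoch in range(100):
    params = train_epochs(params, n_epochs=1, batch_size=256)
    sb_iterates.append(params.copy())

# Piggybacked large-batch phase: warm-start LB training from each SB iterate.
results = []
for start in sb_iterates:
    lb_params = train_epochs(start.copy(), n_epochs=100, batch_size=num_train // 10)
    results.append(evaluate(lb_params))   # testing accuracy and sharpness (Metric 2.1)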
It has been speculated that LB methods tend to be attracted to minimizers close to the starting point x0, whereas SB methods move away and locate minimizers that are farther away.
Figure 4: Testing accuracy and sharpness v/s batch size for (a) F2 and (b) C1. The X-axis corresponds to the batch size used for training the network for 100 epochs, the left Y-axis corresponds to the testing accuracy at the final iterate and the right Y-axis to the sharpness of that iterate. We report sharpness for two values of ε: 10⁻³ and 5·10⁻⁴.
Figure 5: Warm-starting experiments for (a) F2 and (b) C1. The upper figures report the testing accuracy of the SB method (blue line) and the testing accuracy of the warm-started (piggybacked) LB method (red line), as a function of the number of epochs of the SB method. The lower figures plot the sharpness measure (4) for the solutions obtained by the piggybacked LB method v/s the number of warm-starting epochs of the SB method.
Figure 6: Sharpness v/s cross-entropy loss for SB and LB methods; (a) F2, (b) C1.
Our numerical experiments support this view: we observed that the ratio of ‖x*s − x0‖2 and ‖x*l − x0‖2 was in the range of 3-10.

In order to further illustrate the qualitative difference between the solutions obtained by SB and LB methods, we plot in Figure 6 our sharpness measure (4) against the loss function (cross entropy) for one random trial of the F2 and C1 networks. For larger values of the loss function, i.e., near the initial point, the SB and LB methods yield similar values of sharpness. As the loss function reduces, the sharpness of the iterates corresponding to the LB method rapidly increases, whereas for the SB method the sharpness stays relatively constant initially and then reduces, suggesting an exploration phase followed by convergence to a flat minimizer.
# 4 DISCUSSION AND CONCLUSION
In this paper, we present numerical experiments that support the view that convergence to sharp minimizers gives rise to the poor generalization of large-batch methods for deep learning. To this end, we provide one-dimensional parametric plots and perturbation (sharpness) measures for a variety of deep learning architectures. In Appendix E, we describe our attempts to remedy the problem, including data augmentation, conservative training and robust optimization. Our preliminary investigation suggests that these strategies do not correct the problem; they improve the generalization of large-batch methods but still lead to relatively sharp minima. Another prospective remedy is the use of dynamic sampling, where the batch size is increased gradually as the iteration progresses (Byrd et al., 2012; Friedlander & Schmidt, 2012). The potential viability of this approach is suggested by our warm-starting experiments (see Figure 5), wherein high testing accuracy is achieved using a large-batch method that is warm-started with a small-batch method.

Recently, a number of researchers have described interesting theoretical properties of the loss surface of deep neural networks; see e.g. (Choromanska et al., 2015; Soudry & Carmon, 2016; Lee et al., 2016). Their work shows that, under certain regularity assumptions, the loss function of deep learning models is fraught with many local minimizers and that many of these minimizers correspond to a similar loss function value. Our results are in alignment with these observations since, in our experiments, both sharp and flat minimizers have very similar loss function values. We do not know, however, if the theoretical models mentioned above provide information about the existence and density of sharp minimizers of the loss surface.

Our results suggest some questions: (a) can one prove that large-batch (LB) methods typically converge to sharp minimizers of deep learning training functions? (In this paper, we only provided some numerical evidence.); (b) what is the relative density of the two kinds of minima?; (c) can one design neural network architectures for various tasks that are suitable to the properties of LB methods?; (d) can the networks be initialized in a way that enables LB methods to succeed?; (e) is it possible, through algorithmic or regulatory means, to steer LB methods away from sharp minimizers?
# REFERENCES
Yoshua Bengio, Ian Goodfellow, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org.
Dimitris Bertsimas, Omid Nohadani, and Kwong Meng Teo. Robust optimization for unconstrained simulation-based problems. Operations Research, 58(1):161–178, 2010.

Léon Bottou. Online learning and stochastic approximations. On-line learning in neural networks, 17(9):142, 1998.

Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.

Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208, 1995.

Richard H Byrd, Gillian M Chin, Jorge Nocedal, and Yuchen Wu. Sample size selection in optimization methods for machine learning. Mathematical Programming, 134(1):127–155, 2012.

Pratik Chaudhari, Anna Choromanska, Stefano Soatto, and Yann LeCun. Entropy-SGD: Biasing gradient descent into wide valleys. arXiv preprint arXiv:1611.01838, 2016.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.

Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidynathan, Srinivas Sridharan, Dhiraj Kalamkar, Bharat Kaul, and Pradeep Dubey. Distributed deep learning using synchronous stochastic gradient descent. arXiv preprint arXiv:1602.06709, 2016.

Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pp. 1223–1231, 2012.

J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.

Michael Charles Ferris. Weak sharp minima and penalty functions in mathematical programming. PhD thesis, University of Cambridge, 1988.

Michael P Friedlander and Mark Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3):A1380–A1405, 2012.

John S Garofolo, Lori F Lamel, William M Fisher, Jonathan G Fiscus, David S Pallett, Nancy L Dahlgren, and Victor Zue. TIMIT acoustic-phonetic continuous speech corpus. Linguistic Data Consortium, Philadelphia, 33, 1993.

Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pp. 797–842, 2015.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014a.
Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014b.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645–6649. IEEE, 2013.
M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Nitish Shirish Keskar and Albert S. Berahas. adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs, pp. 1–16. Springer International Publishing, Cham, 2016.

D. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR 2015), 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998a.

Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits, 1998b.

Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pp. 9–48. Springer, 2012.

Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent converges to minimizers. University of California, Berkeley, 1050:16, 2016.

Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J Smola. Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 661–670. ACM, 2014.

David JC MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Hossein Mobahi. Training recurrent neural networks by diffusion. arXiv preprint arXiv:1601.04114, 2016.

Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding, number EPFL-CONF-192584. IEEE Signal Processing Society, 2011.

Jorma Rissanen. A universal prior for integers and estimation by minimum description length. The Annals of Statistics, pp. 416–431, 1983.

Uri Shaham, Yutaro Yamada, and Sahand Negahban. Understanding adversarial training: Increasing local stability of neural nets through robust optimization. arXiv preprint arXiv:1511.05432, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.

Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), pp. 1139–1147, 2013.
Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pp. 685–693, 2015.
Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deep neural networks via stability training. arXiv preprint arXiv:1604.04326, 2016.
# A DETAILS ABOUT DATA SETS
We summarize the data sets used in our experiments in Table 5. TIMIT is a speech recognition data set which is pre-processed using Kaldi (Povey et al., 2011) and trained using a fully-connected network. The rest of the data sets are used without any pre-processing.
Table 5: Data Sets
Data Set     # Data Points (Train)   # Data Points (Test)   # Features   # Classes   Reference
MNIST        60000                   10000                  28 × 28      10          (LeCun et al., 1998a;b)
TIMIT        721329                  310621                 360          1973        (Garofolo et al., 1993)
CIFAR-10     50000                   10000                  32 × 32      10          (Krizhevsky & Hinton, 2009)
CIFAR-100    50000                   10000                  32 × 32      100         (Krizhevsky & Hinton, 2009)
B ARCHITECTURE OF NETWORKS
B.1 NETWORK F1
For this network, we use a 784-dimensional input layer followed by 5 batch-normalized (Ioffe & Szegedy, 2015) layers of 512 neurons each with ReLU activations. The output layer consists of 10 neurons with the softmax activation.
B.2 NETWORK F2
The network architecture for F2 is similar to F1. We use a 360-dimensional input layer followed by 7 batch-normalized layers of 512 neurons with ReLU activation. The output layer consists of 1973 neurons with the softmax activation.
B.3 NETWORKS C1 AND C3
The C1 network is a modified version of the popular AlexNet configuration (Krizhevsky et al., 2012). For simplicity, denote a stack of n convolution layers of a filters and a kernel size of b × c with stride length of d as n×[a, b, c, d]. The C1 configuration uses 2 sets of [64, 5, 5, 2]-MaxPool(3) followed by 2 dense layers of sizes (384, 192) and, finally, an output layer of size 10. We use batch-normalization for all layers and ReLU activations. We also use Dropout (Srivastava et al., 2014) of 0.5 retention probability for the two dense layers. The configuration C3 is identical to C1 except it uses 100 softmax outputs instead of 10.
B.4 NETWORKS C2 AND C4
The C2 network is a modified version of the popular VGG configuration (Simonyan & Zisserman, 2014). It uses the configuration: 2×[64, 3, 3, 1], 2×[128, 3, 3, 1], 3×[256, 3, 3, 1], 3×[512, 3, 3, 1], 3×[512, 3, 3, 1] with a MaxPool(2) after each stack. This stack is followed by a 512-dimensional dense layer and, finally, a 10-dimensional output layer. The activation and properties of each layer are as in B.3. As is the case with C3 and C1, the configuration C4 is identical to C2 except that it uses 100 softmax outputs instead of 10.
# C PERFORMANCE MODEL
As mentioned in Section 1, a training algorithm that operates in the large-batch regime without suffering from a generalization gap would have the ability to scale to a much larger number of nodes than is currently possible. Such an algorithm might also improve training time through faster convergence. We present an idealized performance model that demonstrates our goal.

For the LB method to be competitive with the SB method, it must (i) converge to minimizers that generalize well, and (ii) do so in a reasonably small number of iterations, which we analyze here. Let Is and Il be the number of iterations required by the SB and LB methods to reach a point of comparable test accuracy, respectively. Let Bs and Bl be the corresponding batch sizes and P be the number of processors used for training. Assume that P ≤ Bs, and let fs(P) be the parallel efficiency of the SB method. For simplicity, we assume that fl(P), the parallel efficiency of the LB method, is 1.0. In other words, we assume that the LB method is perfectly scalable due to the use of a large batch size.
For LB to be faster than SB, we must have
Il · (Bl / P) < Is · (Bs / (P · fs(P))).
In other words, the ratio of iterations of LB to the iterations of SB should be
Il / Is < Bs / (fs(P) · Bl).
For example, if fs(P) = 0.2 and Bs/Bl = 0.1, the LB method must converge in at most half as many iterations as the SB method to see performance benefits. We refer the reader to (Das et al., 2016) for a more detailed model and a commentary on the effect of batch size on the performance.
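As a quick check of this bound, a small sketch:

def max_lb_to_sb_iteration_ratio(parallel_eff_sb, batch_ratio_sb_to_lb):
    # Upper bound on Il / Is for the LB method to be faster,
    # i.e., Bs / (fs(P) * Bl) from the inequality above.
    return batch_ratio_sb_to_lb / parallel_eff_sb

print(max_lb_to_sb_iteration_ratio(0.2, 0.1))   # 0.5, matching the example above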
# D CURVILINEAR PARAMETRIC PLOTS
The parametric plots for the curvilinear path from x*s to x*l, i.e., f(sin(απ/2)x*l + cos(απ/2)x*s), can be found in Figure 7.
# E ATTEMPTS TO IMPROVE LB METHODS
In this section, we discuss a few strategies that aim to remedy the problem of poor generalization for large-batch methods. As in Section 2, we use 10% of the training data as the batch size for the large-batch experiments and 256 for the small-batch experiments. For all experiments, we use ADAM as the optimizer irrespective of batch size.
E.1 DATA AUGMENTATION
Given that large-batch methods appear to be attracted to sharp minimizers, one can ask whether it is possible to modify the geometry of the loss function so that it is more benign to large-batch methods. The loss function depends both on the geometry of the objective function and on the size and properties of the training set. One approach we consider is data augmentation; see e.g. (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). The application of this technique is domain specific but generally involves augmenting the data set through controlled modifications of the training data. For instance, in the case of image recognition, the training set can be augmented through translations, rotations, shearing and flipping of the training data. This technique leads to regularization of the network and has been employed for improving testing accuracy on several data sets.

In our experiments, we train the 4 image-based (convolutional) networks using aggressive data augmentation and present the results in Table 6. For the augmentation, we use horizontal reflections, random rotations of up to 10° and random translations of up to 0.2 times the size of the image. It is evident from the table that, while the LB method achieves accuracy comparable to the SB method (also with training data augmented), the sharpness of the minima still exists, suggesting sensitivity to images contained in neither the training nor the testing set. In this section, we exclude parametric plots and sharpness values for the SB method owing to space constraints and their similarity to those presented in Section 2.2.
Figure 7: Parametric plots, curvilinear, for networks (a) F1, (b) F2, (c) C1, (d) C2, (e) C3, (f) C4. The left vertical axis corresponds to the cross-entropy loss, f, and the right vertical axis to the classification accuracy; solid lines indicate the training data set and dashed lines the testing data set. α = 0 corresponds to the SB minimizer while α = 1 corresponds to the LB minimizer.
Table 6: Effect of Data Augmentation
        Testing Accuracy                        Sharpness (LB method)
Name    Baseline (SB)       Augmented LB        ε = 10⁻³          ε = 5·10⁻⁴
C1      83.63% ± 0.14%      82.50% ± 0.67%      231.77 ± 30.50    45.89 ± 3.82
C2      89.82% ± 0.12%      90.26% ± 1.15%      468.65 ± 47.86    105.22
C3      54.55%              53.03% ± 0.33%      103.68 ± 11.93    37.6
C4      63.05% ± 0.5%       65.88% ± 0.13%      271.06 ± 29.69    45.31
Table 7: Effect of Conservative Training
        Testing Accuracy                         Sharpness (LB method)
Name    Baseline (SB)       Conservative LB      ε = 10⁻³          ε = 5·10⁻⁴
F1      98.03% ± 0.07%      98.12% ± 0.01%       ± 63.81           46.02 ± 12.58
F2      64.02% ± 0.2%       61.94% ± 1.10%       ± 51.63           190.77 ± 25.33
C1      80.04% ± 0.12%      78.41% ± 0.22%       ± 34.91           171.19 ± 15.13
C2      89.24% ± 0.05%      88.495% ± 0.63%      108.88 ± 47.36
C3      49.58% ± 0.39%      45.98% ± 0.54%       337.92            110.69
C4      63.08% ± 0.10%      62.51%
E.2 CONSERVATIVE TRAINING
In (Li et al., 2014), the authors argue that the convergence rate of SGD for the large-batch setting can be improved by obtaining iterates through the following proximal sub-problem.
xk+1 = arg min_x (1/|Bk|) Σ_{i∈Bk} fi(x) + (λ/2) ‖x − xk‖²₂    (5)
The motivation for this strategy is, in the context of large-batch methods, to better utilize a batch before moving on to the next one. The minimization problem is solved inexactly using 3-5 iterations of gradient descent, co-ordinate descent or L-BFGS. (Li et al., 2014) report that this not only improves the convergence rate of SGD but also leads to improved empirical performance on convex machine learning problems. The underlying idea of better utilizing a batch is not specific to convex problems and we can apply the same framework for deep learning, however, without theoretical guarantees. Indeed, similar algorithms were proposed in (Zhang et al., 2015) and (Mobahi, 2016) for Deep Learning. The former placed emphasis on parallelization of small-batch SGD and asynchrony, while the latter on a diffusion-continuation mechanism for training. The results using the conservative training approach are presented in Table 7. In all experiments, we solve the problem (5) using 3 iterations of ADAM and set the regularization parameter λ to 10⁻³. Again, there is a statistically significant improvement in the testing accuracy of the large-batch method but it does not solve the problem of sensitivity.
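A minimal sketch of one conservative step in the spirit of Equation (5) is given below. For brevity it uses plain gradient steps on the proximal objective rather than the 3 iterations of ADAM used in our experiments; grad_f is a hypothetical mini-batch gradient function.

import numpy as np

def conservative_step(x_k, batch, grad_f, lr=0.01, lam=1e-3, inner_iters=3):
    # Inexactly solve Equation (5): min_x f_batch(x) + (lam / 2) * ||x - x_k||^2.
    x = x_k.copy()
    for _ in range(inner_iters):
        g = grad_f(x, batch) + lam * (x - x_k)   # gradient of the proximal objective
        x = x - lr * g
    return x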
# E.3 ROBUST TRAINING
A natural way of avoiding sharp minima is through robust optimization techniques. These methods attempt to optimize a worst-case cost as opposed to the nominal (or true) cost. Mathematically, given an ε > 0, these techniques solve the problem

min_x φ(x),  where φ(x) := max_{‖Δx‖≤ε} f(x + Δx).    (6)

Geometrically, classical (nominal) optimization attempts to locate the lowest point of a valley, while robust optimization attempts to lower an ε-disc down the loss surface. We refer the interested reader to (Bertsimas et al., 2010), and the references therein, for a review of non-convex robust optimization. A direct application of this technique is, however, not feasible in our context since each iteration is prohibitively expensive because it involves solving a large-scale second-order conic program (SOCP).
Figure 8: Illustration of robust optimization. The worst-case (robust) cost φ and the nominal cost f can rank two points differently: φ(x̂) < φ(x̄) while f(x̂) > f(x̄).
In the context of Deep Learning, there are two inter-dependent forms of robustness: robustness to the data and robustness to the solution. The former exploits the fact that the function f is inherently a statistical model, while the latter treats f as a black-box function. In (Shaham et al., 2015), the authors prove the equivalence between robustness of the solution (with respect to the data) and adversarial training (Goodfellow et al., 2014a).
Given the partial success of the data augmentation strategy, it is natural to question the efficacy of adversarial training. As described in (Goodfellow et al., 2014a), adversarial training also aims to artificially increase the training set but, unlike randomized data augmentation, uses the model's sensitivity to construct new examples. Despite its intuitive appeal, in our experiments, we found that this strategy did not improve generalization. Similarly, we observed no generalization benefit from the stability training proposed by (Zheng et al., 2016). In both cases, the testing accuracy, sharpness values and the parametric plots were similar to the unmodified (baseline) case discussed in Section 2. It remains to be seen whether adversarial training (or any other form of robust training) can increase the viability of large-batch training.
1609.04747 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | 7 1 0 2
n u J 5 1 ] G L . s c [
2 v 7 4 7 4 0 . 9 0 6 1 : v i X r a
# An overview of gradient descent optimization algorithms*
Sebastian Ruder Insight Centre for Data Analytics, NUI Galway Aylien Ltd., Dublin ruder.sebastian@gmail.com
# Abstract
Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent.
# Introduction
Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne's2, caffe's3, and keras'4 documentation). These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by.

This article aims at providing the reader with intuitions with regard to the behaviour of different algorithms for optimizing gradient descent that will help her put them to use. In Section 2, we are first going to look at the different variants of gradient descent. We will then briefly summarize challenges during training in Section 3. Subsequently, in Section 4, we will introduce the most common optimization algorithms by showing their motivation to resolve these challenges and how this leads to the derivation of their update rules. Afterwards, in Section 5, we will take a short look at algorithms and architectures to optimize gradient descent in a parallel and distributed setting. Finally, we will consider additional strategies that are helpful for optimizing gradient descent in Section 6.

Gradient descent is a way to minimize an objective function J(θ) parameterized by a model's parameters θ ∈ Rd by updating the parameters in the opposite direction of the gradient of the objective function ∇θJ(θ) w.r.t. the parameters. The learning rate η determines the size of the steps we take to reach a (local) minimum. In other words, we follow the direction of the slope of the surface created by the objective function downhill until we reach a valley.5
*This paper originally appeared as a blog post at http://sebastianruder.com/optimizing-gradient-descent/index.html on 19 January 2016.
2http://lasagne.readthedocs.org/en/latest/modules/updates.html 3http://caffe.berkeleyvision.org/tutorial/solver.html 4http://keras.io/optimizers/ 5If you are unfamiliar with gradient descent, you can ï¬nd a good introduction on optimizing neural networks
at http://cs231n.github.io/optimization-1/.
# 2 Gradient descent variants
There are three variants of gradient descent, which differ in how much data we use to compute the gradient of the objective function. Depending on the amount of data, we make a trade-off between the accuracy of the parameter update and the time it takes to perform an update.
# 2.1 Batch gradient descent
Vanilla gradient descent, aka batch gradient descent, computes the gradient of the cost function w.r.t. the parameters θ for the entire training dataset:

θ = θ − η · ∇θJ(θ)    (1)

As we need to calculate the gradients for the whole dataset to perform just one update, batch gradient descent can be very slow and is intractable for datasets that do not fit in memory. Batch gradient descent also does not allow us to update our model online, i.e. with new examples on-the-fly.
In code, batch gradient descent looks something like this:
for i in range(nb_epochs):
    params_grad = evaluate_gradient(loss_function, data, params)
    params = params - learning_rate * params_grad
For a pre-defined number of epochs, we first compute the gradient vector params_grad of the loss function for the whole dataset w.r.t. our parameter vector params. Note that state-of-the-art deep learning libraries provide automatic differentiation that efficiently computes the gradient w.r.t. some parameters. If you derive the gradients yourself, then gradient checking is a good idea.6
We then update our parameters in the direction of the gradients with the learning rate determining how big of an update we perform. Batch gradient descent is guaranteed to converge to the global minimum for convex error surfaces and to a local minimum for non-convex surfaces.
# 2.2 Stochastic gradient descent
Stochastic gradient descent (SGD) in contrast performs a parameter update for each training example x(i) and label y(i):

θ = θ − η · ∇θJ(θ; x(i); y(i))    (2)

Batch gradient descent performs redundant computations for large datasets, as it recomputes gradients for similar examples before each parameter update. SGD does away with this redundancy by performing one update at a time. It is therefore usually much faster and can also be used to learn online. SGD performs frequent updates with a high variance that cause the objective function to fluctuate heavily as in Figure 1.

While batch gradient descent converges to the minimum of the basin the parameters are placed in, SGD's fluctuation, on the one hand, enables it to jump to new and potentially better local minima. On the other hand, this ultimately complicates convergence to the exact minimum, as SGD will keep overshooting. However, it has been shown that when we slowly decrease the learning rate, SGD shows the same convergence behaviour as batch gradient descent, almost certainly converging to a local or the global minimum for non-convex and convex optimization respectively. Its code fragment simply adds a loop over the training examples and evaluates the gradient w.r.t. each example. Note that we shuffle the training data at every epoch as explained in Section 6.1.

for i in range(nb_epochs):
    np.random.shuffle(data)
    for example in data:
        params_grad = evaluate_gradient(loss_function, example, params)
        params = params - learning_rate * params_grad
6Refer to http://cs231n.github.io/neural-networks-3/ for some great tips on how to check gradi- ents properly.
Figure 1: SGD fluctuation (Source: Wikipedia)
# 2.3 Mini-batch gradient descent
Mini-batch gradient descent finally takes the best of both worlds and performs an update for every mini-batch of n training examples:

θ = θ − η · ∇θJ(θ; x(i:i+n); y(i:i+n))    (3)

This way, it a) reduces the variance of the parameter updates, which can lead to more stable convergence; and b) can make use of highly optimized matrix optimizations common to state-of-the-art deep learning libraries that make computing the gradient w.r.t. a mini-batch very efficient. Common mini-batch sizes range between 50 and 256, but can vary for different applications. Mini-batch gradient descent is typically the algorithm of choice when training a neural network and the term SGD usually is employed also when mini-batches are used. Note: In modifications of SGD in the rest of this post, we leave out the parameters x(i:i+n); y(i:i+n) for simplicity.
In code, instead of iterating over examples, we now iterate over mini-batches of size 50:
for i in range(nb_epochs):
    np.random.shuffle(data)
    for batch in get_batches(data, batch_size=50):
        params_grad = evaluate_gradient(loss_function, batch, params)
        params = params - learning_rate * params_grad
# 3 Challenges
Vanilla mini-batch gradient descent, however, does not guarantee good convergence, but poses a few challenges that need to be addressed:

• Choosing a proper learning rate can be difficult. A learning rate that is too small leads to painfully slow convergence, while a learning rate that is too large can hinder convergence and cause the loss function to fluctuate around the minimum or even to diverge.

• Learning rate schedules [18] try to adjust the learning rate during training by e.g. annealing, i.e. reducing the learning rate according to a pre-defined schedule or when the change in objective between epochs falls below a threshold. These schedules and thresholds, however, have to be defined in advance and are thus unable to adapt to a dataset's characteristics [4].

• Additionally, the same learning rate applies to all parameter updates. If our data is sparse and our features have very different frequencies, we might not want to update all of them to the same extent, but perform a larger update for rarely occurring features.

• Another key challenge of minimizing highly non-convex error functions common for neural networks is avoiding getting trapped in their numerous suboptimal local minima. Dauphin et al. [5] argue that the difficulty arises in fact not from local minima but from saddle points, i.e. points where one dimension slopes up and another slopes down. These saddle points are usually surrounded by a plateau of the same error, which makes it notoriously hard for SGD to escape, as the gradient is close to zero in all dimensions.
# 4 Gradient descent optimization algorithms
In the following, we will outline some algorithms that are widely used by the Deep Learning community to deal with the aforementioned challenges. We will not discuss algorithms that are infeasible to compute in practice for high-dimensional data sets, e.g. second-order methods such as Newton's method7.
# 4.1 Momentum
SGD has trouble navigating ravines, i.e. areas where the surface curves much more steeply in one dimension than in another [20], which are common around local optima. In these scenarios, SGD oscillates across the slopes of the ravine while only making hesitant progress along the bottom towards the local optimum as in Figure 2a.
Figure 2: (a) SGD without momentum; (b) SGD with momentum (Source: Genevieve B. Orr)
Momentum [17] is a method that helps accelerate SGD in the relevant direction and dampens oscillations as can be seen in Figure 2b. It does this by adding a fraction γ of the update vector of the past time step to the current update vector8
vt = γvt−1 + η∇θJ(θ)
θ = θ − vt    (4)
The momentum term γ is usually set to 0.9 or a similar value.
Essentially, when using momentum, we push a ball down a hill. The ball accumulates momentum as it rolls downhill, becoming faster and faster on the way (until it reaches its terminal velocity, if there is air resistance, i.e. γ < 1). The same thing happens to our parameter updates: The momentum term increases for dimensions whose gradients point in the same directions and reduces updates for dimensions whose gradients change directions. As a result, we gain faster convergence and reduced oscillation.
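As a concrete illustration, the update in Equation (4) can be written as a small numpy sketch (illustrative only; grad is assumed to hold the current gradient ∇θJ(θ)):

import numpy as np

def momentum_update(params, grad, velocity, lr=0.01, gamma=0.9):
    # Equation (4): accumulate a decaying sum of past gradients, then step.
    velocity = gamma * velocity + lr * grad
    params = params - velocity
    return params, velocity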
# 4.2 Nesterov accelerated gradient
However, a ball that rolls down a hill, blindly following the slope, is highly unsatisfactory. We would like to have a smarter ball, a ball that has a notion of where it is going so that it knows to slow down before the hill slopes up again.
Nesterov accelerated gradient (NAG) [14] is a way to give our momentum term this kind of prescience. We know that we will use our momentum term γvt−1 to move the parameters θ. Computing θ − γvt−1 thus gives us an approximation of the next position of the parameters (the gradient is missing for the full update), a rough idea where our parameters are going to be. We can now effectively look ahead by calculating the gradient not w.r.t. our current parameters θ but w.r.t. the approximate future position of our parameters:

vt = γvt−1 + η∇θJ(θ − γvt−1)
θ = θ − vt    (5)
7https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization 8Some implementations exchange the signs in the equations.
Figure 3: Nesterov update (Source: G. Hinton's lecture 6c)

Again, we set the momentum term γ to a value of around 0.9. While Momentum first computes the current gradient (small blue vector in Figure 3) and then takes a big jump in the direction of the updated accumulated gradient (big blue vector), NAG first makes a big jump in the direction of the previous accumulated gradient (brown vector), measures the gradient and then makes a correction (green vector). This anticipatory update prevents us from going too fast and results in increased responsiveness, which has significantly increased the performance of RNNs on a number of tasks [2].9
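The look-ahead step of Equation (5) differs from plain momentum only in where the gradient is evaluated. A minimal sketch, where grad_fn is a hypothetical function returning ∇θJ at a given point:

def nag_update(params, grad_fn, velocity, lr=0.01, gamma=0.9):
    # Equation (5): evaluate the gradient at the look-ahead point theta - gamma * v.
    grad_ahead = grad_fn(params - gamma * velocity)
    velocity = gamma * velocity + lr * grad_ahead
    params = params - velocity
    return params, velocity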
Now that we are able to adapt our updates to the slope of our error function and speed up SGD in turn, we would also like to adapt our updates to each individual parameter to perform larger or smaller updates depending on their importance.
# 4.3 Adagrad
Adagrad [8] is an algorithm for gradient-based optimization that does just this: It adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent parameters. For this reason, it is well-suited for dealing with sparse data. Dean et al. [6] have found that Adagrad greatly improved the robustness of SGD and used it for training large-scale neural nets at Google, which, among other things, learned to recognize cats in Youtube videos10. Moreover, Pennington et al. [16] used Adagrad to train GloVe word embeddings, as infrequent words require much larger updates than frequent ones.

Previously, we performed an update for all parameters θ at once as every parameter θi used the same learning rate η. As Adagrad uses a different learning rate for every parameter θi at every time step t, we first show Adagrad's per-parameter update, which we then vectorize. For brevity, we set gt,i to be the gradient of the objective function w.r.t. the parameter θi at time step t:
gt,i = ∇θtJ(θt,i)    (6)

The SGD update for every parameter θi at each time step t then becomes:

θt+1,i = θt,i − η · gt,i    (7)

In its update rule, Adagrad modifies the general learning rate η at each time step t for every parameter θi based on the past gradients that have been computed for θi:

θt+1,i = θt,i − (η / √(Gt,ii + ε)) · gt,i    (8)

Gt ∈ R^(d×d) here is a diagonal matrix where each diagonal element i, i is the sum of the squares of the gradients w.r.t. θi up to time step t11, while ε is a smoothing term that avoids division by zero (usually on the order of 1e−8). Interestingly, without the square root operation, the algorithm performs much worse.
9Refer to http://cs231n.github.io/neural-networks-3/ for another explanation of the intuitions behind NAG, while Ilya Sutskever gives a more detailed overview in his PhD thesis [19].
10http://www.wired.com/2012/06/google-x-neural-network/ 11Duchi et al. [8] give this matrix as an alternative to the full matrix containing the outer products of all previous gradients, as the computation of the matrix square root is infeasible even for a moderate number of parameters d.
As Gt contains the sum of the squares of the past gradients w.r.t. all parameters θ along its diagonal, we can now vectorize our implementation by performing an element-wise matrix-vector multiplication ⊙ between Gt and gt:

θt+1 = θt − (η / √(Gt + ε)) ⊙ gt    (9)
One of Adagrad's main benefits is that it eliminates the need to manually tune the learning rate. Most implementations use a default value of 0.01 and leave it at that.
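A minimal numpy sketch of the vectorized update (9), keeping a running sum of squared gradients (illustrative only):

import numpy as np

def adagrad_update(params, grad, sq_grad_sum, lr=0.01, eps=1e-8):
    # Equations (8)-(9): per-parameter step sizes shrink as squared gradients accumulate.
    sq_grad_sum = sq_grad_sum + grad ** 2
    params = params - lr / np.sqrt(sq_grad_sum + eps) * grad
    return params, sq_grad_sum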
Adagrad's main weakness is its accumulation of the squared gradients in the denominator: Since every added term is positive, the accumulated sum keeps growing during training. This in turn causes the learning rate to shrink and eventually become infinitesimally small, at which point the algorithm is no longer able to acquire additional knowledge. The following algorithms aim to resolve this flaw.
# 4.4 Adadelta
Adadelta [22] is an extension of Adagrad that seeks to reduce its aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, Adadelta restricts the window of accumulated past gradients to some fixed size w.

Instead of inefficiently storing w previous squared gradients, the sum of gradients is recursively defined as a decaying average of all past squared gradients. The running average E[g²]t at time step t then depends (as a fraction γ similarly to the Momentum term) only on the previous average and the current gradient:

E[g²]t = γE[g²]t−1 + (1 − γ)g²t    (10)

We set γ to a similar value as the momentum term, around 0.9. For clarity, we now rewrite our vanilla SGD update in terms of the parameter update vector Δθt:

Δθt = −η · gt,i
θt+1 = θt + Δθt    (11)
The parameter update vector of Adagrad that we derived previously thus takes the form:
Δθt = −(η / √(Gt + ε)) ⊙ gt    (12)

We now simply replace the diagonal matrix Gt with the decaying average over past squared gradients E[g²]t:

Δθt = −(η / √(E[g²]t + ε)) gt    (13)

As the denominator is just the root mean squared (RMS) error criterion of the gradient, we can replace it with the criterion short-hand:

Δθt = −(η / RMS[g]t) gt    (14)

The authors note that the units in this update (as well as in SGD, Momentum, or Adagrad) do not match, i.e. the update should have the same hypothetical units as the parameter. To realize this, they first define another exponentially decaying average, this time not of squared gradients but of squared parameter updates:

E[Δθ²]t = γE[Δθ²]t−1 + (1 − γ)Δθ²t    (15)
The root mean squared error of parameter updates is thus:

RMS[Δθ]t = √(E[Δθ²]t + ε)    (16)

Since RMS[Δθ]t is unknown, we approximate it with the RMS of parameter updates until the previous time step. Replacing the learning rate η in the previous update rule with RMS[Δθ]t−1 finally yields the Adadelta update rule:

Δθt = −(RMS[Δθ]t−1 / RMS[g]t) gt
θt+1 = θt + Δθt    (17)
With Adadelta, we do not even need to set a default learning rate, as it has been eliminated from the update rule.
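Putting Equations (10) and (15)-(17) together, one step of Adadelta can be sketched as follows (illustrative only):

import numpy as np

def adadelta_update(params, grad, avg_sq_grad, avg_sq_update, gamma=0.9, eps=1e-8):
    # Equation (10): decaying average of squared gradients.
    avg_sq_grad = gamma * avg_sq_grad + (1 - gamma) * grad ** 2
    # Equation (17): scale by the RMS of past updates over the RMS of gradients.
    update = -np.sqrt(avg_sq_update + eps) / np.sqrt(avg_sq_grad + eps) * grad
    # Equation (15): decaying average of squared updates.
    avg_sq_update = gamma * avg_sq_update + (1 - gamma) * update ** 2
    params = params + update
    return params, avg_sq_grad, avg_sq_update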
# 4.5 RMSprop
RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in Lecture 6e of his Coursera Class12.
RMSprop and Adadelta have both been developed independently around the same time stemming from the need to resolve Adagrad's radically diminishing learning rates. RMSprop in fact is identical to the first update vector of Adadelta that we derived above:

E[g²]t = 0.9E[g²]t−1 + 0.1g²t
θt+1 = θt − (η / √(E[g²]t + ε)) gt    (18)
RMSprop as well divides the learning rate by an exponentially decaying average of squared gradients. Hinton suggests γ to be set to 0.9, while a good default value for the learning rate η is 0.001.
# 4.6 Adam
Adaptive Moment Estimation (Adam) [10] is another method that computes adaptive learning rates for each parameter. In addition to storing an exponentially decaying average of past squared gradients vt like Adadelta and RMSprop, Adam also keeps an exponentially decaying average of past gradients mt, similar to momentum:
mt = β1mt−1 + (1 − β1)gt
vt = β2vt−1 + (1 − β2)g²t (19)
mt and vt are estimates of the first moment (the mean) and the second moment (the uncentered variance) of the gradients respectively, hence the name of the method. As mt and vt are initialized as vectors of 0's, the authors of Adam observe that they are biased towards zero, especially during the initial time steps, and especially when the decay rates are small (i.e. β1 and β2 are close to 1).
They counteract these biases by computing bias-corrected ï¬rst and second moment estimates:
m̂t = mt / (1 − β1^t)
v̂t = vt / (1 − β2^t) (20)
# 12http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
They then use these to update the parameters just as we have seen in Adadelta and RMSprop, which yields the Adam update rule:
θt+1 = θt − η/(√(v̂t) + ε) · m̂t (21)
The authors propose default values of 0.9 for β1, 0.999 for β2, and 10⁻⁸ for ε. They show empirically that Adam works well in practice and compares favorably to other adaptive learning-rate methods.
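A minimal NumPy sketch of one Adam step following Equations 19 to 21 with the proposed defaults; the function and argument names are our own:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Equations 19 to 21); t is the 1-based step counter."""
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```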
# 4.7 AdaMax
The vt factor in the Adam update rule scales the gradient inversely proportionally to the ℓ2 norm of the past gradients (via the vt−1 term) and the current gradient |gt|²:
vt = β2vt−1 + (1 − β2)|gt|² (22)
We can generalize this update to the ℓp norm. Note that Kingma and Ba also parameterize β2 as β2^p:
vt = β2^p vt−1 + (1 − β2^p)|gt|^p (23)
Norms for large p values generally become numerically unstable, which is why ℓ1 and ℓ2 norms are most common in practice. However, ℓ∞ also generally exhibits stable behavior. For this reason, the authors propose AdaMax and show that vt with the ℓ∞ norm converges to the following more stable value. To avoid confusion with Adam, we use ut to denote the infinity-norm-constrained vt:
ut = β2^∞ vt−1 + (1 − β2^∞)|gt|^∞ = max(β2 · vt−1, |gt|) (24)
We can now plug this into the Adam update equation by replacing √(v̂t) + ε with ut to obtain the AdaMax update rule:
θt+1 = θt − (η/ut) · m̂t (25)
Note that as ut relies on the max operation, it is not as susceptible to bias towards zero as mt and vt in Adam, which is why we do not need to compute a bias correction for ut. Good default values are again η = 0.002, β1 = 0.9, and β2 = 0.999.
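A minimal NumPy sketch of one AdaMax step following Equations 24 and 25; the small constant guarding the division by ut is our own addition for the very first steps and does not appear in the update rule above:

```python
import numpy as np

def adamax_step(theta, grad, m, u, t, lr=0.002, beta1=0.9, beta2=0.999):
    """One AdaMax update (Equations 24 and 25); u needs no bias correction."""
    m = beta1 * m + (1 - beta1) * grad
    u = np.maximum(beta2 * u, np.abs(grad))       # infinity-norm-constrained v_t
    m_hat = m / (1 - beta1 ** t)
    return theta - lr * m_hat / (u + 1e-8), m, u  # 1e-8 guard is our addition
```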
# 4.8 Nadam
As we have seen before, Adam can be viewed as a combination of RMSprop and momentum: RM- Sprop contributes the exponentially decaying average of past squared gradients vt, while momentum accounts for the exponentially decaying average of past gradients mt. We have also seen that Nesterov accelerated gradient (NAG) is superior to vanilla momentum.
Nadam (Nesterov-accelerated Adaptive Moment Estimation) [7] thus combines Adam and NAG. In order to incorporate NAG into Adam, we need to modify its momentum term mt.
First, let us recall the momentum update rule using our current notation:
gt = ∇θt J(θt)
mt = γmt−1 + ηgt
θt+1 = θt − mt (26)
where J is our objective function, γ is the momentum decay term, and η is our step size. Expanding the third equation above yields:
θt+1 = θt − (γmt−1 + ηgt) (27)
This demonstrates again that momentum involves taking a step in the direction of the previous momentum vector and a step in the direction of the current gradient.
NAG then allows us to perform a more accurate step in the gradient direction by updating the parameters with the momentum step before computing the gradient. We thus only need to modify the gradient gt to arrive at NAG:
gt = ∇θt J(θt − γmt−1)
mt = γmt−1 + ηgt
θt+1 = θt − mt (28)
Dozat proposes to modify NAG the following way: rather than applying the momentum step twice (once for updating the gradient gt and a second time for updating the parameters θt+1), we now apply the look-ahead momentum vector directly to update the current parameters:
gt = ∇θt J(θt)
mt = γmt−1 + ηgt
θt+1 = θt − (γmt + ηgt) (29)
Notice that rather than utilizing the previous momentum vector mt−1 as in Equation 27, we now use the current momentum vector mt to look ahead. In order to add Nesterov momentum to Adam, we can thus similarly replace the previous momentum vector with the current momentum vector. First, recall that the Adam update rule is the following (note that we do not need to modify v̂t):
mt = β1mt−1 + (1 − β1)gt
m̂t = mt / (1 − β1^t)
θt+1 = θt − η/(√(v̂t) + ε) · m̂t (30)
Expanding the second equation with the definitions of m̂t and mt in turn gives us:
θt+1 = θt − η/(√(v̂t) + ε) · (β1mt−1/(1 − β1^t) + (1 − β1)gt/(1 − β1^t)) (31)
Note that β1mt−1/(1 − β1^t) is just the bias-corrected estimate of the momentum vector of the previous time step. We can thus replace it with m̂t−1:
θt+1 = θt − η/(√(v̂t) + ε) · (β1m̂t−1 + (1 − β1)gt/(1 − β1^t)) (32)
This equation looks very similar to our expanded momentum term in Equation 27. We can now add Nesterov momentum just as we did in Equation 29 by simply replacing this bias-corrected estimate of the momentum vector of the previous time step m̂t−1 with the bias-corrected estimate of the current momentum vector m̂t, which gives us the Nadam update rule:
θt+1 = θt − η/(√(v̂t) + ε) · (β1m̂t + (1 − β1)gt/(1 − β1^t)) (33)
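A minimal NumPy sketch of one Nadam step implementing Equation 33; the names and defaults are our own choices, with the β values carried over from Adam:

```python
import numpy as np

def nadam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Nadam update following Equation 33."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Nesterov look-ahead: current m_hat plus the bias-corrected gradient term.
    update = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1 ** t)
    return theta - lr * update / (np.sqrt(v_hat) + eps), m, v
```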
# 4.9 Visualization of algorithms
The following two figures provide some intuition about the optimization behaviour of the presented algorithms.13
In Figure 4a, we see the path they took on the contours of a loss surface (the Beale function). All started at the same point and took different paths to reach the minimum. Note that Adagrad, Adadelta, and RMSprop headed off immediately in the right direction and converged similarly fast, while Momentum and NAG were led off-track, evoking the image of a ball rolling down the hill. NAG, however, was able to correct its course sooner due to its increased responsiveness by looking ahead and headed to the minimum.
Figure 4b shows the behaviour of the algorithms at a saddle point, i.e. a point where one dimension has a positive slope while the other dimension has a negative slope, which poses a difficulty for SGD as we mentioned before. Notice here that SGD, Momentum, and NAG find it difficult to break symmetry, although the latter two eventually manage to escape the saddle point, while Adagrad, RMSprop, and Adadelta quickly head down the negative slope, with Adadelta leading the charge.
Figure 4: (a) SGD optimization on loss surface contours; (b) SGD optimization on saddle point. Source and full animations: Alec Radford
As we can see, the adaptive learning-rate methods, i.e. Adagrad, Adadelta, RMSprop, and Adam are most suitable and provide the best convergence for these scenarios.
# 4.10 Which optimizer to use?
So, which optimizer should you use? If your input data is sparse, you are likely to achieve the best results using one of the adaptive learning-rate methods. An additional benefit is that you will not need to tune the learning rate, but will likely achieve the best results with the default value.
In summary, RMSprop is an extension of Adagrad that deals with its radically diminishing learning rates. It is identical to Adadelta, except that Adadelta uses the RMS of parameter updates in the numerator of its update rule. Adam, finally, adds bias-correction and momentum to RMSprop. In this respect, RMSprop, Adadelta, and Adam are very similar algorithms that do well in similar circumstances. Kingma et al. [10] show that its bias-correction helps Adam slightly outperform RMSprop towards the end of optimization as gradients become sparser. As such, Adam might be the best overall choice.
Interestingly, many recent papers use vanilla SGD without momentum and a simple learning rate annealing schedule. As has been shown, SGD usually manages to find a minimum, but it might take significantly longer than some of the optimizers above, is much more reliant on a robust initialization and annealing schedule, and may get stuck in saddle points rather than local minima. Consequently, if you care about fast convergence and train a deep or complex neural network, you should choose one of the adaptive learning-rate methods.
13Also have a look at http://cs231n.github.io/neural-networks-3/ for a description of the same images by Karpathy and another concise overview of the algorithms discussed.
# 5 Parallelizing and distributing SGD
Given the ubiquity of large-scale data and the availability of low-cost commodity clusters, distributing SGD to speed it up further is an obvious choice. SGD by itself is inherently sequential: step by step, we progress further towards the minimum. Running it provides good convergence but can be slow, particularly on large datasets. In contrast, running SGD asynchronously is faster, but suboptimal communication between workers can lead to poor convergence. Additionally, we can also parallelize SGD on one machine without the need for a large computing cluster. The following are algorithms and architectures that have been proposed to optimize parallelized and distributed SGD.
# 5.1 Hogwild!
Niu et al. [15] introduce an update scheme called Hogwild! that allows performing SGD updates in parallel on CPUs. Processors are allowed to access shared memory without locking the parameters. This only works if the input data is sparse, as each update will only modify a fraction of all parameters. They show that in this case, the update scheme achieves almost an optimal rate of convergence, as it is unlikely that processors will overwrite useful information.
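The access pattern can be sketched as below. This is only a toy illustration with an invented sparse gradient: real Hogwild! implementations run across CPU cores in a compiled language, whereas Python's global interpreter lock serializes the updates in this sketch.

```python
import threading
import numpy as np

theta = np.zeros(1000)  # shared parameters; no lock is ever taken

def worker(seed, steps=1000, lr=0.01):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        # Toy sparse "gradient": each update touches only a handful of coordinates.
        idx = rng.integers(0, theta.size, size=5)
        theta[idx] -= lr * rng.normal(size=5)  # lock-free read-modify-write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```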
# 5.2 Downpour SGD
Downpour SGD is an asynchronous variant of SGD that was used by Dean et al. [6] in their DistBelief framework (the predecessor to TensorFlow) at Google. It runs multiple replicas of a model in parallel on subsets of the training data. These models send their updates to a parameter server, which is split across many machines. Each machine is responsible for storing and updating a fraction of the model's parameters. However, as replicas don't communicate with each other, e.g. by sharing weights or updates, their parameters are continuously at risk of diverging, hindering convergence.
# 5.3 Delay-tolerant Algorithms for SGD
McMahan and Streeter [12] extend AdaGrad to the parallel setting by developing delay-tolerant algorithms that not only adapt to past gradients, but also to the update delays. This has been shown to work well in practice.
# 5.4 TensorFlow
TensorFlow14 [1] is Google's recently open-sourced framework for the implementation and deployment of large-scale machine learning models. It is based on their experience with DistBelief and is already used internally to perform computations on a large range of mobile devices as well as on large-scale distributed systems. The distributed version, which was released in April 2016,15 relies on a computation graph that is split into a subgraph for every device, while communication takes place using Send/Receive node pairs.
# 5.5 Elastic Averaging SGD
Zhang et al. [23] propose Elastic Averaging SGD (EASGD), which links the parameters of the workers of asynchronous SGD with an elastic force, i.e. a center variable stored by the parameter server. This allows the local variables to ï¬uctuate further from the center variable, which in theory allows for more exploration of the parameter space. They show empirically that this increased capacity for exploration leads to improved performance by ï¬nding new local optima.
# 6 Additional strategies for optimizing SGD
Finally, we introduce additional strategies that can be used alongside any of the previously mentioned algorithms to further improve the performance of SGD. For a great overview of some other common tricks, refer to [11].
# 14https://www.tensorflow.org/ 15http://googleresearch.blogspot.ie/2016/04/announcing-tensorflow-08-now-with.html
# 6.1 Shufï¬ing and Curriculum Learning
Generally, we want to avoid providing the training examples in a meaningful order to our model as this may bias the optimization algorithm. Consequently, it is often a good idea to shufï¬e the training data after every epoch.
On the other hand, for some cases where we aim to solve progressively harder problems, supplying the training examples in a meaningful order may actually lead to improved performance and better convergence. The method for establishing this meaningful order is called Curriculum Learning [3].
Zaremba and Sutskever [21] were only able to train LSTMs to evaluate simple programs using Curriculum Learning and show that a combined or mixed strategy is better than the naive one, which sorts examples by increasing difï¬culty.
# 6.2 Batch normalization
To facilitate learning, we typically normalize the initial values of our parameters by initializing them with zero mean and unit variance. As training progresses and we update parameters to different extents, we lose this normalization, which slows down training and ampliï¬es changes as the network becomes deeper.
Batch normalization [9] reestablishes these normalizations for every mini-batch and changes are back- propagated through the operation as well. By making normalization part of the model architecture, we are able to use higher learning rates and pay less attention to the initialization parameters. Batch normalization additionally acts as a regularizer, reducing (and sometimes even eliminating) the need for Dropout.
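A minimal NumPy sketch of the batch normalization forward pass for a 2-D mini-batch (examples × features); the backward pass and the running statistics used at test time are omitted, and the names are ours:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.
    gamma and beta are the learned parameters of the layer."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```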
# 6.3 Early stopping
According to Geoff Hinton: "Early stopping (is) beautiful free lunch"16. You should thus always monitor the error on a validation set during training and stop (with some patience) if your validation error does not improve enough.
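A minimal sketch of such a patience-based loop; `fit_epoch` and `validate` are hypothetical callables standing in for your training and validation code, and the default values are illustrative:

```python
def train_with_early_stopping(fit_epoch, validate, max_epochs=200,
                              patience=10, min_delta=1e-4):
    """fit_epoch() trains for one epoch; validate() returns the validation error."""
    best, wait = float("inf"), 0
    for _ in range(max_epochs):
        fit_epoch()
        err = validate()
        if err < best - min_delta:
            best, wait = err, 0   # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= patience:  # no improvement for `patience` epochs
                break
    return best
```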
# 6.4 Gradient noise
Neelakantan et al. [13] add noise that follows a Gaussian distribution N(0, σ²t) to each gradient update:
gt,i = gt,i + N(0, σ²t) (34)
They anneal the variance according to the following schedule:
σ²t = η / (1 + t)^γ (35)
They show that adding this noise makes networks more robust to poor initialization and helps training particularly deep and complex networks. They suspect that the added noise gives the model more chances to escape and ï¬nd new local minima, which are more frequent for deeper models.
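A minimal sketch of this scheme; the values of η and γ below are taken from the ranges explored in [13] and should be treated as illustrative defaults:

```python
import numpy as np

def noisy_gradient(grad, t, eta=0.3, gamma=0.55, rng=None):
    """Add annealed Gaussian noise to a gradient at step t (Equations 34-35)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma2 = eta / (1 + t) ** gamma
    return grad + rng.normal(0.0, np.sqrt(sigma2), size=grad.shape)
```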
# 7 Conclusion
In this article, we have initially looked at the three variants of gradient descent, among which mini-batch gradient descent is the most popular. We have then investigated algorithms that are most commonly used for optimizing SGD: Momentum, Nesterov accelerated gradient, Adagrad, Adadelta, RMSprop, Adam, AdaMax, Nadam, as well as different algorithms to optimize asynchronous SGD. Finally, we've considered other strategies to improve SGD such as shuffling and curriculum learning, batch normalization, and early stopping.
16NIPS 2015 Tutorial slides, slide 63, http://www.iro.umontreal.ca/~bengioy/talks/DL-Tutorial-NIPS2015.pdf
# References
[1] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Jon Shlens, Benoit Steiner, Ilya Sutskever, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Oriol Vinyals, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2015.
[2] Yoshua Bengio, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. Advances in Optimiz- ing Recurrent Networks. 2012.
[3] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. Proceedings of the 26th annual international conference on machine learning, pages 41â48, 2009.
[4] C. Darken, J. Chang, and J. Moody. Learning rate schedules for faster stochastic gradient search. Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop, (September):1â11, 1992.
[5] Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non- convex optimization. arXiv, pages 1â14, 2014.
[6] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large Scale Distributed Deep Networks. NIPS 2012: Neural Information Processing Systems, pages 1â11, 2012.
[7] Timothy Dozat. Incorporating Nesterov Momentum into Adam. ICLR Workshop, (1):2013â2016, 2016.
[8] John Duchi, Elad Hazan, and Yoram Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121â2159, 2011.
[9] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint arXiv:1502.03167v3, 2015.
[10] Diederik P. Kingma and Jimmy Lei Ba. Adam: a Method for Stochastic Optimization. Interna- tional Conference on Learning Representations, pages 1â13, 2015.
[11] Yann LeCun, Leon Bottou, Genevieve B. Orr, and Klaus Robert Müller. Efï¬cient BackProp. Neural Networks: Tricks of the Trade, 1524:9â50, 1998.
[12] H. Brendan Mcmahan and Matthew Streeter. Delay-Tolerant Algorithms for Asynchronous Distributed Online Learning. Advances in Neural Information Processing Systems (Proceedings of NIPS), pages 1â9, 2014.
[13] Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding Gradient Noise Improves Learning for Very Deep Networks. pages 1â11, 2015.
[14] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence o(1/k2). Doklady ANSSSR (translated as Soviet.Math.Docl.), 269:543-547, 1983.
[15] Feng Niu, Benjamin Recht, R Christopher, and Stephen J Wright. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. pages 1â22, 2011.
[16] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532â1543, 2014.
[17] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural networks : the ofï¬cial journal of the International Neural Network Society, 12(1):145â151, 1999.
[18] Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400â407, 1951.
[19] Ilya Sutskever. Training Recurrent neural Networks. PhD thesis, page 101, 2013.
[20] Richard S. Sutton. Two problems with backpropagation and other steepest-descent learning procedures for networks, 1986.
[21] Wojciech Zaremba and Ilya Sutskever. Learning to Execute. pages 1-25, 2014.

[22] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. arXiv preprint arXiv:1212.5701, 2012.
[23] Sixin Zhang, Anna Choromanska, and Yann LeCun. Deep learning with Elastic Averaging SGD. Neural Information Processing Systems Conference (NIPS 2015), pages 1â24, 2015.
# WAVENET: A GENERATIVE MODEL FOR RAW AUDIO
Aäron van den Oord, Sander Dieleman, Heiga Zen†, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, Koray Kavukcuoglu

{avdnoord, sedielem, heigazen, simonyan, vinyals, gravesa, nalk, andrewsenior, korayk}@google.com
Google DeepMind, London, UK; † Google, London, UK
# ABSTRACT
This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predic- tive distribution for each audio sample conditioned on all previous ones; nonethe- less we show that it can be efï¬ciently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of- the-art performance, with human listeners rating it as signiï¬cantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal ï¬delity, and can switch between them by conditioning on the speaker identity. When trained to model music, we ï¬nd that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
# 1 INTRODUCTION
This work explores raw audio generation techniques, inspired by recent advances in neural autore- gressive generative models that model complex distributions such as images (van den Oord et al., 2016a;b) and text (J´ozefowicz et al., 2016). Modeling joint probabilities over pixels or words using neural architectures as products of conditional distributions yields state-of-the-art generation.
Remarkably, these architectures are able to model distributions over thousands of random variables (e.g. 64×64 pixels as in PixelRNN (van den Oord et al., 2016a)). The question this paper addresses is whether similar approaches can succeed in generating wideband raw audio waveforms, which are signals with very high temporal resolution, at least 16,000 samples per second (see Fig. 1).
Figure 1: A second of generated speech.
This paper introduces WaveNet, an audio generative model based on the PixelCNN (van den Oord et al., 2016a;b) architecture. The main contributions of this work are as follows:
• We show that WaveNets can generate raw speech signals with subjective naturalness never before reported in the field of text-to-speech (TTS), as assessed by human raters.
• In order to deal with long-range temporal dependencies needed for raw audio generation, we develop new architectures based on dilated causal convolutions, which exhibit very large receptive fields.

• We show that when conditioned on a speaker identity, a single model can be used to generate different voices.

• The same architecture shows strong results when tested on a small speech recognition dataset, and is promising when used to generate other audio modalities such as music.
We believe that WaveNets provide a generic and ï¬exible framework for tackling many applications that rely on audio generation (e.g. TTS, music, speech enhancement, voice conversion, source sep- aration).
# 2 WAVENET
In this paper we introduce a new generative model operating directly on the raw audio waveform. The joint probability of a waveform x = {x1, . . . , xT } is factorised as a product of conditional probabilities as follows:
p(x) = ∏_{t=1}^{T} p(xt | x1, . . . , xt−1) (1)

Each audio sample xt is therefore conditioned on the samples at all previous timesteps.
Similarly to PixelCNNs (van den Oord et al., 2016a;b), the conditional probability distribution is modelled by a stack of convolutional layers. There are no pooling layers in the network, and the output of the model has the same time dimensionality as the input. The model outputs a categorical distribution over the next value xt with a softmax layer and it is optimized to maximize the log- likelihood of the data w.r.t. the parameters. Because log-likelihoods are tractable, we tune hyper- parameters on a validation set and can easily measure if the model is overï¬tting or underï¬tting.
2.1 DILATED CAUSAL CONVOLUTIONS
Figure 2: Visualization of a stack of causal convolutional layers.
The main ingredient of WaveNet is causal convolutions. By using causal convolutions, we make sure the model cannot violate the ordering in which we model the data: the prediction p(xt+1 | x1, ..., xt) emitted by the model at timestep t cannot depend on any of the future timesteps xt+1, xt+2, . . . , xT as shown in Fig. 2. For images, the equivalent of a causal convolution is a masked convolution (van den Oord et al., 2016a), which can be implemented by constructing a mask tensor and doing an elementwise multiplication of this mask with the convolution kernel before applying it. For 1-D data such as audio, one can more easily implement this by shifting the output of a normal convolution by a few timesteps.
At training time, the conditional predictions for all timesteps can be made in parallel because all timesteps of ground truth x are known. When generating with the model, the predictions are se- quential: after each sample is predicted, it is fed back into the network to predict the next sample.
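This sequential procedure can be sketched as follows; `predict` is a hypothetical stand-in for the trained network, mapping the sample history to the 256-way softmax over quantized values described in Section 2.2:

```python
import numpy as np

def generate(predict, seed, n_samples, rng=np.random.default_rng(0)):
    """Sample n_samples values autoregressively; `predict` returns a 256-way
    categorical distribution p(x_t | x_1, ..., x_{t-1})."""
    samples = list(seed)
    for _ in range(n_samples):
        probs = predict(np.asarray(samples))  # softmax output of the network
        nxt = rng.choice(256, p=probs)        # sample the next quantized value
        samples.append(int(nxt))              # feed it back into the model
    return np.asarray(samples)
```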
Because models with causal convolutions do not have recurrent connections, they are typically faster to train than RNNs, especially when applied to very long sequences. One of the problems of causal convolutions is that they require many layers, or large ï¬lters to increase the receptive ï¬eld. For example, in Fig. 2 the receptive ï¬eld is only 5 (= #layers + ï¬lter length - 1). In this paper we use dilated convolutions to increase the receptive ï¬eld by orders of magnitude, without greatly increasing computational cost.
A dilated convolution (also called à trous, or convolution with holes) is a convolution where the filter is applied over an area larger than its length by skipping input values with a certain step. It is equivalent to a convolution with a larger filter derived from the original filter by dilating it with zeros, but is significantly more efficient. A dilated convolution effectively allows the network to operate on a coarser scale than with a normal convolution. This is similar to pooling or strided convolutions, but here the output has the same size as the input. As a special case, dilated convolution with dilation 1 yields the standard convolution. Fig. 3 depicts dilated causal convolutions for dilations 1, 2, 4, and 8. Dilated convolutions have previously been used in various contexts, e.g. signal processing (Holschneider et al., 1989; Dutilleux, 1989), and image segmentation (Chen et al., 2015; Yu & Koltun, 2016).
Figure 3: Visualization of a stack of dilated causal convolutional layers.
Stacked dilated convolutions enable networks to have very large receptive ï¬elds with just a few lay- ers, while preserving the input resolution throughout the network as well as computational efï¬ciency. In this paper, the dilation is doubled for every layer up to a limit and then repeated: e.g.
1, 2, 4, . . . , 512, 1, 2, 4, . . . , 512, 1, 2, 4, . . . , 512. The intuition behind this configuration is two-fold. First, exponentially increasing the dilation factor results in exponential receptive field growth with depth (Yu & Koltun, 2016). For example each 1, 2, 4, . . . , 512 block has a receptive field of size 1024, and can be seen as a more efficient and discriminative (non-linear) counterpart of a 1×1024 convolution. Second, stacking these blocks further increases the model capacity and the receptive field size.
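To make the receptive field arithmetic concrete, the following sketch computes it for stacked dilated causal convolutions; the filter length of 2 and the three repeated blocks are illustrative assumptions, not necessarily the exact configuration used in the experiments:

```python
def receptive_field(filter_length=2, dilations=None, n_blocks=3):
    """Receptive field of stacked dilated causal convolutions: each layer
    adds (filter_length - 1) * dilation timesteps of context."""
    if dilations is None:
        dilations = [2 ** i for i in range(10)]  # 1, 2, 4, ..., 512
    return 1 + n_blocks * sum((filter_length - 1) * d for d in dilations)

print(receptive_field())  # three 1..512 blocks, filter length 2 -> 3070 samples
```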
2.2 SOFTMAX DISTRIBUTIONS
One approach to modeling the conditional distributions p (xt | x1, . . . , xtâ1) over the individual audio samples would be to use a mixture model such as a mixture density network (Bishop, 1994) or mixture of conditional Gaussian scale mixtures (MCGSM) (Theis & Bethge, 2015). However, van den Oord et al. (2016a) showed that a softmax distribution tends to work better, even when the data is implicitly continuous (as is the case for image pixel intensities or audio sample values). One of the reasons is that a categorical distribution is more ï¬exible and can more easily model arbitrary distributions because it makes no assumptions about their shape.
Because raw audio is typically stored as a sequence of 16-bit integer values (one per timestep), a softmax layer would need to output 65,536 probabilities per timestep to model all possible values. To make this more tractable, we ï¬rst apply a µ-law companding transformation (ITU-T, 1988) to the data, and then quantize it to 256 possible values:
f (xt) = sign(xt) ln (1 + µ |xt|) ln (1 + µ) ,
where −1 < xt < 1 and µ = 255. This non-linear quantization produces a significantly better reconstruction than a simple linear quantization scheme. Especially for speech, we found that the reconstructed signal after quantization sounded very similar to the original.
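A minimal NumPy sketch of this companding-plus-quantization step; the function name and the binning scheme are our own:

```python
import numpy as np

def mu_law_encode(x, mu=255, bins=256):
    """Companding transformation of Section 2.2 followed by uniform quantization.
    x is expected in (-1, 1); the result is an integer array in [0, bins - 1]."""
    f = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    edges = np.linspace(-1.0, 1.0, bins + 1)[1:-1]  # 255 interior bin edges
    return np.digitize(f, edges)
```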
2.3 GATED ACTIVATION UNITS
We use the same gated activation unit as used in the gated PixelCNN (van den Oord et al., 2016b):
z = tanh(Wf,k ∗ x) ⊙ σ(Wg,k ∗ x), (2)
where ∗ denotes a convolution operator, ⊙ denotes an element-wise multiplication operator, σ(·) is a sigmoid function, k is the layer index, f and g denote filter and gate, respectively, and W is a learnable convolution filter. In our initial experiments, we observed that this non-linearity worked significantly better than the rectified linear activation function (Nair & Hinton, 2010) for modeling audio signals.
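A minimal 1-D sketch of this gated unit; np.convolve with mode="same" stands in for the causal, dilated convolutions of the actual architecture, which would instead pad and shift so as not to look at future samples:

```python
import numpy as np

def gated_activation(x, w_f, w_g):
    """Gated unit of Equation 2: a tanh filter branch multiplied elementwise
    by a sigmoid gate over the same input."""
    f = np.convolve(x, w_f, mode="same")  # filter branch (non-causal stand-in)
    g = np.convolve(x, w_g, mode="same")  # gate branch
    return np.tanh(f) * (1.0 / (1.0 + np.exp(-g)))
```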
# 2.4 RESIDUAL AND SKIP CONNECTIONS
Figure 4: Overview of the residual block and the entire architecture.
Both residual (He et al., 2015) and parameterised skip connections are used throughout the network, to speed up convergence and enable training of much deeper models. In Fig. 4 we show a residual block of our model, which is stacked many times in the network.
2.5 CONDITIONAL WAVENETS
Given an additional input h, WaveNets can model the conditional distribution p (x | h) of the audio given this input. Eq. (1) now becomes
p(x | h) = ∏_{t=1}^{T} p(xt | x1, . . . , xt−1, h). (3)
By conditioning the model on other input variables, we can guide WaveNetâs generation to produce audio with the required characteristics. For example, in a multi-speaker setting we can choose the speaker by feeding the speaker identity to the model as an extra input. Similarly, for TTS we need to feed information about the text as an extra input.
We condition the model on other inputs in two different ways: global conditioning and local condi- tioning. Global conditioning is characterised by a single latent representation h that inï¬uences the output distribution across all timesteps, e.g. a speaker embedding in a TTS model. The activation function from Eq. (2) now becomes:
z = tanh(Wf,k ∗ x + Vf,k^T h) ⊙ σ(Wg,k ∗ x + Vg,k^T h),
where V∗,k is a learnable linear projection, and the vector V∗,k^T h is broadcast over the time dimension.
For local conditioning we have a second timeseries ht, possibly with a lower sampling frequency than the audio signal, e.g. linguistic features in a TTS model. We ï¬rst transform this time series using a transposed convolutional network (learned upsampling) that maps it to a new time series y = f (h) with the same resolution as the audio signal, which is then used in the activation unit as follows:
z = tanh(Wf,k ∗ x + Vf,k ∗ y) ⊙ σ(Wg,k ∗ x + Vg,k ∗ y),
where Vf,k ∗ y is now a 1×1 convolution. As an alternative to the transposed convolutional network, it is also possible to use Vf,k ∗ h and repeat these values across time. We saw that this worked slightly worse in our experiments.
2.6 CONTEXT STACKS
We have already mentioned several different ways to increase the receptive ï¬eld size of a WaveNet: increasing the number of dilation stages, using more layers, larger ï¬lters, greater dilation factors, or a combination thereof. A complementary approach is to use a separate, smaller context stack that processes a long part of the audio signal and locally conditions a larger WaveNet that processes only a smaller part of the audio signal (cropped at the end). One can use multiple context stacks with varying lengths and numbers of hidden units. Stacks with larger receptive ï¬elds have fewer units per layer. Context stacks can also have pooling layers to run at a lower frequency. This keeps the computational requirements at a reasonable level and is consistent with the intuition that less capacity is required to model temporal correlations at longer timescales.
# 3 EXPERIMENTS
To measure WaveNetâs audio modelling performance, we evaluate it on three different tasks: multi- speaker speech generation (not conditioned on text), TTS, and music audio modelling. We provide samples drawn from WaveNet for these experiments on the accompanying webpage: https://www.deepmind.com/blog/wavenet-generative-model-raw-audio/.
3.1 MULTI-SPEAKER SPEECH GENERATION
For the ï¬rst experiment we looked at free-form speech generation (not conditioned on text). We used the English multi-speaker corpus from CSTR voice cloning toolkit (VCTK) (Yamagishi, 2012) and conditioned WaveNet only on the speaker. The conditioning was applied by feeding the speaker ID to the model in the form of a one-hot vector. The dataset consisted of 44 hours of data from 109 different speakers.
Because the model is not conditioned on text, it generates non-existent but human language-like words in a smooth way with realistic sounding intonations. This is similar to generative models of language or images, where samples look realistic at ï¬rst glance, but are clearly unnatural upon closer inspection. The lack of long range coherence is partly due to the limited size of the modelâs receptive ï¬eld (about 300 milliseconds), which means it can only remember the last 2â3 phonemes it produced.
A single WaveNet was able to model speech from any of the speakers by conditioning it on a one- hot encoding of a speaker. This conï¬rms that it is powerful enough to capture the characteristics of all 109 speakers from the dataset in a single model. We observed that adding speakers resulted in better validation set performance compared to training solely on a single speaker. This suggests that WaveNetâs internal representation was shared among multiple speakers.
Finally, we observed that the model also picked up on other characteristics in the audio apart from the voice itself. For instance, it also mimicked the acoustics and recording quality, as well as the breathing and mouth movements of the speakers.
3.2 TEXT-TO-SPEECH
For the second experiment we looked at TTS. We used the same single-speaker speech databases from which Googleâs North American English and Mandarin Chinese TTS systems are built. The North American English dataset contains 24.6 hours of speech data, and the Mandarin Chinese dataset contains 34.8 hours; both were spoken by professional female speakers.
WaveNets for the TTS task were locally conditioned on linguistic features which were derived from input texts. We also trained WaveNets conditioned on the logarithmic fundamental frequency (log F0) values in addition to the linguistic features. External models predicting log F0 values and phone durations from linguistic features were also trained for each language. The receptive ï¬eld size of the WaveNets was 240 milliseconds. As example-based and model-based speech synthesis base- lines, hidden Markov model (HMM)-driven unit selection concatenative (Gonzalvo et al., 2016) and long short-term memory recurrent neural network (LSTM-RNN)-based statistical parametric (Zen et al., 2016) speech synthesizers were built. Since the same datasets and linguistic features were used to train both the baselines and WaveNets, these speech synthesizers could be fairly compared.
To evaluate the performance of WaveNets for the TTS task, subjective paired comparison tests and mean opinion score (MOS) tests were conducted. In the paired comparison tests, after listening to each pair of samples, the subjects were asked to choose which they preferred, though they could choose âneutralâ if they did not have any preference. In the MOS tests, after listening to each stimulus, the subjects were asked to rate the naturalness of the stimulus in a ï¬ve-point Likert scale score (1: Bad, 2: Poor, 3: Fair, 4: Good, 5: Excellent). Please refer to Appendix B for details.
Fig. 5 shows a selection of the subjective paired comparison test results (see Appendix B for the complete table). It can be seen from the results that WaveNet outperformed the baseline statisti- cal parametric and concatenative speech synthesizers in both languages. We found that WaveNet conditioned on linguistic features could synthesize speech samples with natural segmental quality but sometimes it had unnatural prosody by stressing wrong words in a sentence. This could be due to the long-term dependency of F0 contours: the size of the receptive ï¬eld of the WaveNet, 240 milliseconds, was not long enough to capture such long-term dependency. WaveNet conditioned on both linguistic features and F0 values did not have this problem: the external F0 prediction model runs at a lower frequency (200 Hz) so it can learn long-range dependencies that exist in F0 contours.
Table 1 shows the MOS test results. It can be seen from the table that WaveNets achieved 5-scale MOSs in naturalness above 4.0, which were significantly better than those from the baseline systems. They were the highest ever reported MOS values with these training datasets and test sentences. The gap in the MOSs from the best synthetic speech to the natural ones decreased from 0.69 to 0.34 (51%) in US English and 0.42 to 0.13 (69%) in Mandarin Chinese.
Subjective 5-scale MOS in naturalness

Speech samples               | North American English | Mandarin Chinese
LSTM-RNN parametric          | 3.67 ± 0.098           | 3.79 ± 0.084
HMM-driven concatenative     | 3.86 ± 0.137           | 3.47 ± 0.108
WaveNet (L+F)                | 4.21 ± 0.081           | 4.08 ± 0.085
Natural (8-bit µ-law)        | 4.46 ± 0.067           | 4.25 ± 0.082
Natural (16-bit linear PCM)  | 4.55 ± 0.075           | 4.21 ± 0.071
Table 1: Subjective 5-scale mean opinion scores of speech samples from LSTM-RNN-based sta- tistical parametric, HMM-driven unit selection concatenative, and proposed WaveNet-based speech synthesizers, 8-bit µ-law encoded natural speech, and 16-bit linear pulse-code modulation (PCM) natural speech. WaveNet improved the previous state of the art signiï¬cantly, reducing the gap be- tween natural speech and best previous model by more than 50%.
3.3 MUSIC
For our third set of experiments we trained WaveNets to model two music datasets:
Figure 5: Subjective preference scores (%) of speech samples between (top) two baselines, (middle) two WaveNets, and (bottom) the best baseline and WaveNet. Note that LSTM and Concat cor- respond to LSTM-RNN-based statistical parametric and HMM-driven unit selection concatenative baseline synthesizers, and WaveNet (L) and WaveNet (L+F) correspond to the WaveNet condi- tioned on linguistic features only and that conditioned on both linguistic features and log F0 values.
• the MagnaTagATune dataset (Law & Von Ahn, 2009), which consists of about 200 hours of music audio. Each 29-second clip is annotated with tags from a set of 188, which describe the genre, instrumentation, tempo, volume and mood of the music.

• the YouTube piano dataset, which consists of about 60 hours of solo piano music obtained from YouTube videos. Because it is constrained to a single instrument, it is considerably easier to model.
Although it is difï¬cult to quantitatively evaluate these models, a subjective evaluation is possible by listening to the samples they produce. We found that enlarging the receptive ï¬eld was crucial to ob- tain samples that sounded musical. Even with a receptive ï¬eld of several seconds, the models did not enforce long-range consistency which resulted in second-to-second variations in genre, instrumen- tation, volume and sound quality. Nevertheless, the samples were often harmonic and aesthetically pleasing, even when produced by unconditional models.
Of particular interest are conditional music models, which can generate music given a set of tags specifying e.g. genre or instruments. Similarly to conditional speech models, we insert biases that depend on a binary vector representation of the tags associated with each training clip. This makes it possible to control various aspects of the output of the model when sampling, by feeding in a binary vector that encodes the desired properties of the samples. We have trained such models on the MagnaTagATune dataset; although the tag data bundled with the dataset was relatively noisy and had many omissions, after cleaning it up by merging similar tags and removing those with too few associated clips, we found this approach to work reasonably well.
3.4 SPEECH RECOGNITION
Although WaveNet was designed as a generative model, it can straightforwardly be adapted to dis- criminative audio tasks such as speech recognition.
Traditionally, speech recognition research has largely focused on using log mel-ï¬lterbank energies or mel-frequency cepstral coefï¬cients (MFCCs), but has been moving to raw audio recently (Palaz et al., 2013; T¨uske et al., 2014; Hoshen et al., 2015; Sainath et al., 2015). Recurrent neural networks such as LSTM-RNNs (Hochreiter & Schmidhuber, 1997) have been a key component in these new speech classiï¬cation pipelines, because they allow for building models with long range contexts. With WaveNets we have shown that layers of dilated convolutions allow the receptive ï¬eld to grow longer in a much cheaper way than using LSTM units.
As a last experiment we looked at speech recognition with WaveNets on the TIMIT (Garofolo et al., 1993) dataset. For this task we added a mean-pooling layer after the dilated convolutions that aggregated the activations to coarser frames spanning 10 milliseconds (160× downsampling). The pooling layer was followed by a few non-causal convolutions. We trained WaveNet with two loss terms, one to predict the next sample and one to classify the frame; the model generalized better than with a single loss and achieved 18.8 PER on the test set, which is to our knowledge the best score obtained from a model trained directly on raw audio on TIMIT.
# 4 CONCLUSION
This paper has presented WaveNet, a deep generative model of audio data that operates directly at the waveform level. WaveNets are autoregressive and combine causal ï¬lters with dilated convolu- tions to allow their receptive ï¬elds to grow exponentially with depth, which is important to model the long-range temporal dependencies in audio signals. We have shown how WaveNets can be con- ditioned on other inputs in a global (e.g. speaker identity) or local way (e.g. linguistic features). When applied to TTS, WaveNets produced samples that outperform the current best TTS systems in subjective naturalness. Finally, WaveNets showed very promising results when applied to music audio modeling and speech recognition.
# ACKNOWLEDGEMENTS
The authors would like to thank Lasse Espeholt, Jeffrey De Fauw and Grzegorz Swirszcz for their inputs, Adam Cain, Max Cant and Adrian Bolton for their help with artwork, Helen King, Steven
Gaffney and Steve Crossan for helping to manage the project, Faith Mackinder for help with prepar- ing the blogpost, James Besley for legal support and Demis Hassabis for managing the project and his inputs.
# REFERENCES
Agiomyrgiannakis, Yannis. Vocaine the vocoder and applications in speech synthesis. In ICASSP, pp. 4230-4234, 2015.
Bishop, Christopher M. Mixture density networks. Technical Report NCRG/94/004, Neural Com- puting Research Group, Aston University, 1994.
Chen, Liang-Chieh, Papandreou, George, Kokkinos, Iasonas, Murphy, Kevin, and Yuille, Alan L. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015. URL http://arxiv.org/abs/1412.7062.
Chiba, Tsutomu and Kajiyama, Masato. The Vowel: Its Nature and Structure. Tokyo-Kaiseikan, 1942.
Dudley, Homer. Remaking speech. The Journal of the Acoustical Society of America, 11(2):169â 177, 1939.
Dutilleux, Pierre. An implementation of the âalgorithme `a trousâ to compute the wavelet transform. In Combes, Jean-Michel, Grossmann, Alexander, and Tchamitchian, Philippe (eds.), Wavelets: Time-Frequency Methods and Phase Space, pp. 298â304. Springer Berlin Heidelberg, 1989.
Fan, Yuchen, Qian, Yao, Xie, Feng-Long, and Soong, Frank K. TTS synthesis with bidirectional LSTM based recurrent neural networks. In Interspeech, pp. 1964-1968, 2014.
Fant, Gunnar. Acoustic Theory of Speech Production. Mouton De Gruyter, 1970.
Garofolo, John S., Lamel, Lori F., Fisher, William M., Fiscus, Jonathon G., and Pallett, David S. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report, 93, 1993.
Gonzalvo, Xavi, Tazari, Siamak, Chan, Chun-an, Becker, Markus, Gutkin, Alexander, and Silen, Hanna. Recent advances in Google real-time HMM-driven unit selection synthesizer. In Inter- speech, 2016. URL http://research.google.com/pubs/pub45564.html.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Comput., 9(8):1735â1780, 1997.
Holschneider, Matthias, Kronland-Martinet, Richard, Morlet, Jean, and Tchamitchian, Philippe. A real-time algorithm for signal analysis with the help of the wavelet transform. In Combes, Jean- Michel, Grossmann, Alexander, and Tchamitchian, Philippe (eds.), Wavelets: Time-Frequency Methods and Phase Space, pp. 286â297. Springer Berlin Heidelberg, 1989.
Hoshen, Yedid, Weiss, Ron J., and Wilson, Kevin W. Speech acoustic modeling from raw multi- channel waveforms. In ICASSP, pp. 4624â4628. IEEE, 2015.
Hunt, Andrew J. and Black, Alan W. Unit selection in a concatenative speech synthesis system using a large speech database. In ICASSP, pp. 373â376, 1996.
Imai, Satoshi and Furuichi, Chieko. Unbiased estimation of log spectrum. In EURASIP, pp. 203â 206, 1988.
Itakura, Fumitada. Line spectrum representation of linear predictor coefï¬cients of speech signals. The Journal of the Acoust. Society of America, 57(S1):S35âS35, 1975.
Itakura, Fumitada and Saito, Shuzo. A statistical method for estimation of speech spectral density and formant frequencies. Trans. IEICE, J53A:35â42, 1970.
ITU-T. Recommendation G. 711. Pulse Code Modulation (PCM) of voice frequencies, 1988.
J´ozefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016. URL http://arxiv.org/abs/ 1602.02410.
Juang, Biing-Hwang and Rabiner, Lawrence. Mixture autoregressive hidden Markov models for speech signals. IEEE Trans. Acoust. Speech Signal Process., pp. 1404â1413, 1985.
Kameoka, Hirokazu, Ohishi, Yasunori, Mochihashi, Daichi, and Le Roux, Jonathan. Speech anal- ysis with multi-kernel linear prediction. In Spring Conference of ASJ, pp. 499â502, 2010. (in Japanese).
Karaali, Orhan, Corrigan, Gerald, Gerson, Ira, and Massey, Noel. Text-to-speech conversion with neural networks: A recurrent TDNN approach. In Eurospeech, pp. 561â564, 1997.
Kawahara, Hideki, Masuda-Katsuse, Ikuyo, and de Cheveign´e, Alain. Restructuring speech rep- resentations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency- based f0 extraction: possible role of a repetitive structure in sounds. Speech Commn., 27:187â 207, 1999.
Kawahara, Hideki, Estill, Jo, and Fujimura, Osamu. Aperiodicity extraction and control using mixed mode excitation and group delay manipulation for a high quality speech analysis, modiï¬cation and synthesis system STRAIGHT. In MAVEBA, pp. 13â15, 2001.
Law, Edith and Von Ahn, Luis. Input-agreement: a new mechanism for collecting data using human computation games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1197â1206. ACM, 2009.
Maia, Ranniery, Zen, Heiga, and Gales, Mark J. F. Statistical parametric speech synthesis with joint estimation of acoustic and excitation model parameters. In ISCA SSW7, pp. 88â93, 2010.
Morise, Masanori, Yokomori, Fumiya, and Ozawa, Kenji. WORLD: A vocoder-based high-quality speech synthesis system for real-time applications. IEICE Trans. Inf. Syst., E99-D(7):1877â1884, 2016.
Moulines, Eric and Charpentier, Francis. Pitch synchronous waveform processing techniques for text-to-speech synthesis using diphones. Speech Commn., 9:453â467, 1990.
Muthukumar, P. and Black, Alan W. A deep learning approach to data-driven parameterizations for statistical parametric speech synthesis. arXiv:1409.8558, 2014.
Nair, Vinod and Hinton, Geoffrey E. Rectiï¬ed linear units improve restricted Boltzmann machines. In ICML, pp. 807â814, 2010.
Nakamura, Kazuhiro, Hashimoto, Kei, Nankaku, Yoshihiko, and Tokuda, Keiichi. Integration of IEICE Trans. Inf. spectral feature extraction and modeling for HMM-based speech synthesis. Syst., E97-D(6):1438â1448, 2014.
Palaz, Dimitri, Collobert, Ronan, and Magimai-Doss, Mathew. Estimating phoneme class condi- tional probabilities from raw speech signal using convolutional neural networks. In Interspeech, pp. 1766â1770, 2013.
Peltonen, Sari, Gabbouj, Moncef, and Astola, Jaakko. Nonlinear ï¬lter design: methodologies and challenges. In IEEE ISPA, pp. 102â107, 2001.
Poritz, Alan B. Linear predictive hidden Markov models and the speech signal. In ICASSP, pp. 1291â1294, 1982.
Rabiner, Lawrence and Juang, Biing-Hwang. Fundamentals of Speech Recognition. PrenticeHall, 1993.
Sagisaka, Yoshinori, Kaiki, Nobuyoshi, Iwahashi, Naoto, and Mimura, Katsuhiko. ATR ν-talk speech synthesis system. In ICSLP, pp. 483â486, 1992.
Sainath, Tara N., Weiss, Ron J., Senior, Andrew, Wilson, Kevin W., and Vinyals, Oriol. Learning the speech front-end with raw waveform CLDNNs. In Interspeech, pp. 1â5, 2015.
Takaki, Shinji and Yamagishi, Junichi. A deep auto-encoder based low-dimensional feature ex- traction from FFT spectral envelopes for statistical parametric speech synthesis. In ICASSP, pp. 5535â5539, 2016.
Takamichi, Shinnosuke, Toda, Tomoki, Black, Alan W., Neubig, Graham, Sakriani, Sakti, and Naka- mura, Satoshi. Postï¬lters to modify the modulation spectrum for statistical parametric speech synthesis. IEEE/ACM Trans. Audio Speech Lang. Process., 24(4):755â767, 2016.
Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial LSTMs. In NIPS, pp. 1927â1935, 2015.
Toda, Tomoki and Tokuda, Keiichi. A speech parameter generation algorithm considering global variance for HMM-based speech synthesis. IEICE Trans. Inf. Syst., E90-D(5):816â824, 2007.
Toda, Tomoki and Tokuda, Keiichi. Statistical approach to vocal tract transfer function estimation based on factor analyzed trajectory hmm. In ICASSP, pp. 3925â3928, 2008.
Tokuda, Keiichi. Speech synthesis as a statistical machine learning problem. http://www.sp. nitech.ac.jp/Ëtokuda/tokuda_asru2011_for_pdf.pdf, 2011. Invited talk given at ASRU.
Tokuda, Keiichi and Zen, Heiga. Directly modeling speech waveforms by neural networks for statistical parametric speech synthesis. In ICASSP, pp. 4215â4219, 2015.
Tokuda, Keiichi and Zen, Heiga. Directly modeling voiced and unvoiced components in speech waveforms by neural networks. In ICASSP, pp. 5640â5644, 2016.
Tuerk, Christine and Robinson, Tony. Speech synthesis using artiï¬cial neural networks trained on cepstral coefï¬cients. In Proc. Eurospeech, pp. 1713â1716, 1993.
T¨uske, Zolt´an, Golik, Pavel, Schl¨uter, Ralf, and Ney, Hermann. Acoustic modeling with deep neural networks using raw time signal for LVCSR. In Interspeech, pp. 890â894, 2014.
Uria, Benigno, Murray, Iain, Renals, Steve, Valentini-Botinhao, Cassia, and Bridle, John. Modelling acoustic feature dependencies with artiï¬cial neural networks: Trajectory-RNADE. In ICASSP, pp. 4465â4469, 2015.
van den Oord, A¨aron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a.
van den Oord, A¨aron, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with PixelCNN decoders. CoRR, abs/1606.05328, 2016b. URL http://arxiv.org/abs/1606.05328.
Wu, Yi-Jian and Tokuda, Keiichi. Minimum generation error training with direct log spectral distor- tion on LSPs for HMM-based speech synthesis. In Interspeech, pp. 577â580, 2008.
Yamagishi, Junichi. English multi-speaker corpus for CSTR voice cloning toolkit, 2012. URL http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html.
Yoshimura, Takayoshi. Simultaneous modeling of phonetic and prosodic parameters, and char- acteristic conversion for HMM-based text-to-speech systems. PhD thesis, Nagoya Institute of Technology, 2002.
Yu, Fisher and Koltun, Vladlen. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016. URL http://arxiv.org/abs/1511.07122.
Zen, Heiga. An example of context-dependent label format for HMM-based speech synthesis in English, 2006. URL http://hts.sp.nitech.ac.jp/?Download.
# Figure 6: Outline of statistical parametric speech synthesis.
Zen, Heiga, Tokuda, Keiichi, and Kitamura, Tadashi. Reformulating the HMM as a trajectory model by imposing explicit relationships between static and dynamic features. Comput. Speech Lang., 21(1):153â173, 2007.
Zen, Heiga, Tokuda, Keiichi, and Black, Alan W. Statistical parametric speech synthesis. Speech Commn., 51(11):1039â1064, 2009.
Zen, Heiga, Senior, Andrew, and Schuster, Mike. Statistical parametric speech synthesis using deep neural networks. In Proc. ICASSP, pp. 7962â7966, 2013.
Zen, Heiga, Agiomyrgiannakis, Yannis, Egberts, Niels, Henderson, Fergus, and Szczepaniak, Prze- mysÅaw. Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthe- sizers for mobile devices. In Interspeech, 2016. URL https://arxiv.org/abs/1606. 06061.
# A TEXT-TO-SPEECH BACKGROUND
The goal of TTS synthesis is to render naturally sounding speech signals given a text to be synthesized. The human speech production process first translates a text (or concept) into movements of the muscles associated with articulators and speech production-related organs. Then, using air flow from the lungs, vocal source excitation signals, which contain both periodic (by vocal cord vibration) and aperiodic (by turbulent noise) components, are generated. By filtering the vocal source excitation signals with time-varying vocal tract transfer functions controlled by the articulators, their frequency characteristics are modulated. Finally, the generated speech signals are emitted. The aim of TTS is to mimic this process by computers in some way.
TTS can be viewed as a sequence-to-sequence mapping problem; from a sequence of discrete sym- bols (text) to a real-valued time series (speech signals). A typical TTS pipeline has two parts; 1) text analysis and 2) speech synthesis. The text analysis part typically includes a number of natural language processing (NLP) steps, such as sentence segmentation, word segmentation, text normal- ization, part-of-speech (POS) tagging, and grapheme-to-phoneme (G2P) conversion. It takes a word sequence as input and outputs a phoneme sequence with a variety of linguistic contexts. The speech synthesis part takes the context-dependent phoneme sequence as its input and outputs a synthesized speech waveform. This part typically includes prosody prediction and speech waveform generation.
There are two main approaches to realize the speech synthesis part; non-parametric, example-based approach known as concatenative speech synthesis (Moulines & Charpentier, 1990; Sagisaka et al., 1992; Hunt & Black, 1996), and parametric, model-based approach known as statistical parametric speech synthesis (Yoshimura, 2002; Zen et al., 2009). The concatenative approach builds up the utterance from units of recorded speech, whereas the statistical parametric approach uses a gener- ative model to synthesize the speech. The statistical parametric approach ï¬rst extracts a sequence of vocoder parameters (Dudley, 1939) o = {o1, . . . , oN } from speech signals x = {x1, . . . , xT } and linguistic features l from the text W , where N and T correspond to the numbers of vocoder parameter vectors and speech signals. Typically a vocoder parameter vector on is extracted at ev- ery 5 milliseconds. It often includes cepstra (Imai & Furuichi, 1988) or line spectral pairs (Itakura, 1975), which represent vocal tract transfer function, and fundamental frequency (F0) and aperiodic- ity (Kawahara et al., 2001), which represent characteristics of vocal source excitation signals. Then a set of generative models, such as hidden Markov models (HMMs) (Yoshimura, 2002), feed-forward neural networks (Zen et al., 2013), and recurrent neural networks (Tuerk & Robinson, 1993; Karaali et al., 1997; Fan et al., 2014), is trained from the extracted vocoder parameters and linguistic features
as

$$\hat{\Lambda} = \arg\max_{\Lambda}\; p\left(o \mid l, \Lambda\right), \qquad (4)$$
where $\Lambda$ denotes the set of parameters of the generative model. At the synthesis stage, the most probable vocoder parameters are generated given linguistic features extracted from a text to be synthesized as
$$\hat{o} = \arg\max_{o}\; p(o \mid l, \hat{\Lambda}). \qquad (5)$$
Then a speech waveform is reconstructed from $\hat{o}$ using a vocoder. The statistical parametric approach offers various advantages over the concatenative one, such as a small footprint and the flexibility to change its voice characteristics. However, its subjective naturalness is often significantly worse than that of the concatenative approach; synthesized speech often sounds muffled and has artifacts. Zen et al. (2009) reported three major factors that can degrade the subjective naturalness: the quality of vocoders, the accuracy of generative models, and the effect of oversmoothing. The first factor causes the artifacts, and the second and third factors lead to the muffledness in the synthesized speech. There have been a number of attempts to address these issues individually, such as developing high-quality vocoders (Kawahara et al., 1999; Agiomyrgiannakis, 2015; Morise et al., 2016), improving the accuracy of generative models (Zen et al., 2007; 2013; Fan et al., 2014; Uria et al., 2015), and compensating for the oversmoothing effect (Toda & Tokuda, 2007; Takamichi et al., 2016). Zen et al. (2016) showed that state-of-the-art statistical parametric speech synthesizers matched state-of-the-art concatenative ones in some languages. However, the vocoded sound quality is still a major issue.
Extracting vocoder parameters can be viewed as estimating the parameters of a generative model given speech signals (Itakura & Saito, 1970; Imai & Furuichi, 1988). For example, linear predictive analysis (Itakura & Saito, 1970), which has been used in speech coding, assumes that the generative model of speech signals is a linear autoregressive (AR) zero-mean Gaussian process:

$$x_t = \sum_{p=1}^{P} a_p x_{t-p} + \epsilon_t, \qquad (6)$$

$$\epsilon_t \sim \mathcal{N}(0, G^2), \qquad (7)$$
where $a_p$ is a $p$-th order linear predictive coefficient (LPC) and $G^2$ is the variance of the modeling error. These parameters are estimated under the maximum likelihood (ML) criterion. In this sense, the training part of the statistical parametric approach can be viewed as a two-step, sub-optimal optimization: first extract vocoder parameters by fitting a generative model of speech signals, then model the trajectories of the extracted vocoder parameters by a separate generative model for time series (Tokuda, 2011). There have been attempts to integrate these two steps into a single one (Toda & Tokuda, 2008; Wu & Tokuda, 2008; Maia et al., 2010; Nakamura et al., 2014; Muthukumar & Black, 2014; Tokuda & Zen, 2015; 2016; Takaki & Yamagishi, 2016). For example, Tokuda & Zen (2016) integrated a non-stationary, nonzero-mean Gaussian process generative model of speech signals and an LSTM-RNN-based sequence generative model into a single one and jointly optimized them by back-propagation. Although they showed that this model could approximate natural speech signals, its segmental naturalness was significantly worse than that of the non-integrated model, due to over-generalization and over-estimation of noise components in speech signals.
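As an illustration of fitting the AR model above, the following sketch estimates LPC coefficients with the Levinson-Durbin recursion on the frame autocorrelation (the standard ML solution under the Gaussian AR assumption); the frame length and model order here are arbitrary choices.

```python
import numpy as np

def lpc(x, order):
    """Fit the AR model of Eqs. (6)-(7) by ML (autocorrelation method),
    via the Levinson-Durbin recursion. Returns a_1..a_P and G^2."""
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)]) / n
    a = np.zeros(order)
    err = r[0]                                    # prediction error variance
    for i in range(order):
        # reflection coefficient for extending the order-i model to i+1
        k = (r[i + 1] - np.dot(a[:i], r[i:0:-1])) / err
        a_new = a.copy()
        a_new[i] = k
        a_new[:i] = a[:i] - k * a[i - 1::-1][:i]
        a, err = a_new, err * (1.0 - k * k)
    return a, err

frame = np.random.randn(400)                      # one 25 ms frame at 16 kHz
coeffs, g2 = lpc(frame, order=12)
```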
The conventional generative models of raw audio signals make a number of assumptions inspired by speech production, such as
⢠Use of ï¬xed-length analysis window; They are typically based on a stationary stochas- tic process (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010). To model time-varying speech signals by a stationary stochas- tic process, parameters of these generative models are estimated within a ï¬xed-length, over- lapping and shifting analysis window (typically its length is 20 to 30 milliseconds, and shift is 5 to 10 milliseconds). However, some phones such as stops are time-limited by less than 20 milliseconds (Rabiner & Juang, 1993). Therefore, using such ï¬xed-size analysis win- dow has limitations.
⢠Linear ï¬lter; These generative models are typically realized as a linear time-invariant ï¬l- ter (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010) within a windowed frame. However, the relationship between suc- cessive audio samples can be highly non-linear.
⢠Gaussian process assumption; The conventional generative models are based on Gaussian process (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010; Tokuda & Zen, 2015; 2016). From the source-ï¬lter model of speech production (Chiba & Kajiyama, 1942; Fant, 1970) point of view, this is equivalent to assuming that a vocal source excitation signal is a sample from a Gaussian distribu- tion (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Tokuda & Zen, 2015; Kameoka et al., 2010; Tokuda & Zen, 2016). Together with the lin- ear assumption above, it results in assuming that speech signals are normally distributed. However, distributions of real speech signals can be signiï¬cantly different from Gaussian.
Although these assumptions are convenient, samples from these generative models tend to be noisy and lose the important details that make audio signals sound natural.
WaveNet, which was described in Section 2, makes none of the above-mentioned assumptions. It incorporates almost no prior knowledge about audio signals, except for the choice of the receptive field and the µ-law encoding of the signal. It can also be viewed as a non-linear causal filter for quantized signals. Although such non-linear filters can represent complicated signals while preserving the details, designing them is usually difficult (Peltonen et al., 2001). WaveNets give a way to train them from data.
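For concreteness, this is the standard 8-bit µ-law companding mentioned above, in NumPy; it is the only signal-level prior knowledge WaveNet retains.

```python
import numpy as np

def mulaw_encode(x, mu=255):
    """Compand x in [-1, 1] and quantize to 256 discrete classes."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.clip(((y + 1) / 2 * mu + 0.5).astype(np.int64), 0, mu)

def mulaw_decode(codes, mu=255):
    y = 2.0 * codes / mu - 1.0
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

signal = np.sin(np.linspace(0.0, 20.0, 16000))
reconstructed = mulaw_decode(mulaw_encode(signal))
```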
# B DETAILS OF TTS EXPERIMENT
The HMM-driven unit selection and WaveNet TTS systems were built from speech at 16 kHz sampling. Although LSTM-RNNs were trained from speech at 22.05 kHz sampling, speech at 16 kHz sampling was synthesized at runtime using a resampling functionality in the Vocaine vocoder (Agiomyrgiannakis, 2015). Both the LSTM-RNN-based statistical parametric and HMM-driven unit selection speech synthesizers were built from the speech datasets in 16-bit linear PCM, whereas the WaveNet-based ones were trained from the same speech datasets in the 8-bit µ-law encoding.
The linguistic features include phone, syllable, word, phrase, and utterance-level features (Zen, 2006) (e.g. phone identities, syllable stress, the number of syllables in a word, and the position of the current syllable in a phrase), with additional frame position and phone duration features (Zen et al., 2013). These features were derived and associated with speech every 5 milliseconds by phone-level forced alignment at the training stage. We used an LSTM-RNN-based phone duration model and an autoregressive CNN-based log F0 prediction model. They were trained to minimize the mean squared error (MSE). It is important to note that no post-processing was applied to the audio signals generated from the WaveNets.
The subjective listening tests were blind and crowdsourced. 100 sentences not included in the training data were used for evaluation. Each subject could evaluate up to 8 and 63 stimuli for North American English and Mandarin Chinese, respectively. Test stimuli were randomly chosen and presented to each subject. In the paired comparison test, each pair of speech samples was the same text synthesized by different models. In the MOS test, each stimulus was presented to subjects in isolation. Each pair was evaluated by eight subjects in the paired comparison test, and each stimulus was evaluated by eight subjects in the MOS test. The subjects were paid native speakers performing the task. Ratings (about 40%) where headphones were not used were excluded when computing the preference and mean opinion scores. Table 2 shows the full details of the paired comparison test shown in Fig. 5.
Subjective preference (%) in naturalness

Language                  LSTM   Concat   WaveNet (L)   WaveNet (L+F)   No preference   p value
North American            23.3   63.6     -             -               13.1            < 10^-9
English                   18.7   -        69.3          -               12.0            < 10^-9
                          7.6    -        -             82.0            10.4            < 10^-9
                          -      32.4     41.2          -               26.4            0.003
                          -      20.1     -             49.3            30.6            < 10^-9
                          -      -        17.8          37.9            44.3            < 10^-9
Mandarin                  50.6   15.6     -             -               33.8            < 10^-9
Chinese                   25.0   -        23.3          -               51.8            0.476
                          12.5   -        -             29.3            58.2            < 10^-9
                          -      17.6     43.1          -               39.3            < 10^-9
                          -      7.6      -             55.9            36.5            < 10^-9
                          -      -        10.0          25.5            64.5            < 10^-9
Table 2: Subjective preference scores of speech samples between LSTM-RNN-based statistical parametric (LSTM), HMM-driven unit selection concatenative (Concat), and the proposed WaveNet-based speech synthesizers. Each row of the table gives the scores of a paired comparison test between two synthesizers. Scores of synthesizers that were significantly better than their competitors at the p < 0.01 level are shown in bold. Note that WaveNet (L) and WaveNet (L+F) correspond to WaveNet conditioned on linguistic features only and WaveNet conditioned on both linguistic features and F0 values, respectively.
| {
"id": "1601.06759"
} |
1609.03193 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 |
# Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
# Ronan Collobert Facebook AI Research, Menlo Park locronan@fb.com
# Christian Puhrsch Facebook AI Research, Menlo Park cpuhrsch@fb.com
# Gabriel Synnaeve Facebook AI Research, New York gab@fb.com
# Abstract
This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC [6] while being simpler. We show competitive results in word error rate on the Librispeech corpus [18] with MFCC features, and promising results from raw waveform.
# Introduction
We present an end-to-end system for speech recognition, going from the speech signal (e.g. Mel-Frequency Cepstral Coefficients (MFCC), power spectrum, or raw waveform) to the transcription. The acoustic model is trained using letters (graphemes) directly, which removes the need for an intermediate (human or automatic) phonetic transcription. Indeed, the classical pipeline for building state-of-the-art speech recognition systems consists of first training an HMM/GMM model to force-align the units on which the final acoustic model operates (most often context-dependent phone states). This approach takes its roots in HMM/GMM training [27]. The improvements brought by deep neural networks (DNNs) [14, 10] and convolutional neural networks (CNNs) [24, 25] for acoustic modeling only extend this training pipeline.
The current state of the art on Librispeech (the dataset that we used for our evaluations) uses this approach too [18, 20], with an additional step of speaker adaptation [22, 19]. Recently, [23] proposed GMM-free training, but the approach still requires generating a force alignment. An approach that cut ties with the HMM/GMM pipeline (and with force alignment) was to train with a recurrent neural network (RNN) [7] for phoneme transcription. There are now competitive end-to-end approaches with acoustic models topped with RNN layers, as in [8, 13, 21, 1], trained with a sequence criterion [6]. However these models are computationally expensive, and thus take a long time to train.
Compared to classical approaches that need phonetic annotation (often derived from a phonetic dictionary, rules, and generative training), we propose to train the model end-to-end, using graphemes directly. Compared to sequence criterion based approaches that train directly from speech signal to graphemes [13], we propose a simple(r) architecture (23 million parameters for our best model, vs. 100 million parameters in [1]) based on convolutional networks for the acoustic model, topped with a graph transformer network [4], trained with a simpler sequence criterion. Our word error rate on clean speech is slightly better than [8], and slightly worse than [1], in particular factoring in that they train on 12,000 hours while we only train on the 960h available in LibriSpeech's train set. Finally, some of our models are also trained on the raw waveform, as in [15, 16]. The rest of the paper is
structured as follows: the next section presents the convolutional networks used for acoustic modeling, along with the automatic segmentation criterion. The following section shows experimental results comparing different features, the criterion, and our current best word error rates on LibriSpeech.
# 2 Architecture
Our speech recognition system is a standard convolutional neural network [12] fed with various different features, trained through an alternative to the Connectionist Temporal Classiï¬cation (CTC) [6], and coupled with a simple beam search decoder. In the following sub-sections, we detail each of these components.
# 2.1 Features
We consider three types of input features for our model: MFCCs, power spectrum, and raw wave. MFCCs are carefully designed speech-specific features, often found in classical HMM/GMM speech systems [27] because of their dimensionality compression (13 coefficients are often enough to span speech frequencies). Power-spectrum features are found in most recent deep learning acoustic modeling features [1]. Raw wave has been somewhat explored in a few recent works [15, 16]. ConvNets have the advantage of being flexible enough to be used with any of these input feature types. Our acoustic models output letter scores (one score per letter, given a dictionary L).
# 2.2 ConvNet Acoustic Model
The acoustic models we considered in this paper are all based on standard 1D convolutional neural networks (ConvNets). ConvNets interleave convolution operations with pointwise non-linearity oper- ations. Often ConvNets also embark pooling layers: these type of layers allow the network to âseeâ a larger context, without increas- ing the number of parameters, by locally aggregating the previous convolution operation output. Instead, our networks leverage striding convolutions. Given (xt)t=1...Tx an input sequence with Tx frames of dx dimensional vectors, a convolution with kernel width kw, stride dw and dy frame size output computes the following:
$$y_t^i = b_i + \sum_{j=1}^{d_x} \sum_{k=1}^{k_w} w_{i,j,k}\, x_{d_w \times (t-1)+k}^{j}\,, \qquad \forall\, 1 \le i \le d_y\,, \qquad (1)$$

where $b \in \mathbb{R}^{d_y}$ and $w \in \mathbb{R}^{d_y \times d_x \times k_w}$ are the parameters of the convolution (to be learned).
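A direct NumPy transcription of Equation (1), written for clarity rather than speed; the toy shapes at the bottom are arbitrary.

```python
import numpy as np

def conv1d(x, w, b, dw):
    """x: (Tx, dx); w: (dy, dx, kw); b: (dy,); dw: stride. Returns (Ty, dy)."""
    Tx, dx = x.shape
    dy, _, kw = w.shape
    Ty = (Tx - kw) // dw + 1
    y = np.zeros((Ty, dy))
    for t in range(Ty):
        window = x[dw * t : dw * t + kw]      # x_{dw*(t-1)+k} in 0-indexing
        # y_t^i = b_i + sum_{j,k} w[i, j, k] * window[k, j]
        y[t] = b + np.einsum("ijk,kj->i", w, window)
    return y

out = conv1d(np.random.randn(100, 3), np.random.randn(8, 3, 5), np.zeros(8), dw=2)
```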
[Figure 1 layer stack, from output to input: CONV kw=1 (2000 → 40); CONV kw=1 (2000 → 2000); CONV kw=32 (250 → 2000); CONV kw=7 (250 → 250); CONV kw=7 (250 → 250); CONV kw=48, dw=2 (250 → 250); CONV kw=250, dw=160 (1 → 250).]
Pointwise non-linear layers are added after convolutional layers. In our experience, we surprisingly found that using hyperbolic tangents, their piecewise linear counterpart HardTanh (as in [16]) or ReLU units lead to similar results.
There are some slight variations between the architectures, depending on the input features. MFCC-based networks need less striding, as standard MFCC filters are applied with large strides on the input raw sequence. With power spectrum-based and raw wave-based networks, we observed that the overall stride of the network was more important than where the strided convolutions were placed. We thus found it preferable to set the strided convolutions near the first input layers of the network, as this leads to the fastest architectures: with power spectrum features or raw wave, the input sequences are very long and the first convolutions are thus the most expensive ones.
Figure 1: Our neural network architecture for raw wave. The first two layers are convolutions with strides. The last two layers are convolutions with kw = 1, which are equivalent to fully connected layers. Power spectrum and MFCC based networks do not have the first layer.
The last layer of our convolutional network outputs one score per letter in the letter dictionary (dy = |L|). Our architecture for raw wave is shown in Figure 1 and is inspired by [16]. The architectures for both power spectrum and MFCC features do not include the first layer. The full network can be seen as a non-linear convolution, with a kernel width of size 31280 and stride equal to 320; given that the sample rate of our data is 16KHz, label scores are produced using a window of 1955 ms, with steps of 20 ms.
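A hedged PyTorch sketch of the Figure 1 stack: the kernel sizes and strides follow the figure, but the number of kw = 7 blocks (seven here) is an illustrative choice, since only the overall kernel width (31280) and stride (320) are stated in the text.

```python
import torch.nn as nn

def wav2letter_raw(n_letters=40):
    layers = [nn.Conv1d(1, 250, kernel_size=250, stride=160), nn.Hardtanh(),
              nn.Conv1d(250, 250, kernel_size=48, stride=2), nn.Hardtanh()]
    for _ in range(7):                           # number of kw=7 blocks assumed
        layers += [nn.Conv1d(250, 250, kernel_size=7), nn.Hardtanh()]
    layers += [nn.Conv1d(250, 2000, kernel_size=32), nn.Hardtanh(),
               nn.Conv1d(2000, 2000, kernel_size=1), nn.Hardtanh(),
               nn.Conv1d(2000, n_letters, kernel_size=1)]
    return nn.Sequential(*layers)

model = wav2letter_raw()                         # input shape: (batch, 1, samples)
```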
# Inferring Segmentation with AutoSegCriterion
Most large labeled speech databases provide only a text transcription for each audio file. In a classification framework (and given that our acoustic model produces letter predictions), one would need the segmentation of each letter in the transcription to train the model properly. Unfortunately, manually labeling the segmentation of each letter would be tedious. Several solutions have been explored in the speech community to alleviate this issue: HMM/GMM models use an iterative EM procedure: (i) during the Estimation step, the best segmentation is inferred according to the current model by maximizing the joint probability of the letter (or any sub-word unit) transcription and the input sequence; (ii) during the Maximization step, the model is optimized by minimizing a frame-level criterion based on the (now fixed) inferred segmentation. This approach is also often used to bootstrap the training of neural network-based acoustic models.
Other alternatives have been explored in the context of hybrid HMM/NN systems, such as the MMI criterion [2], which maximizes the mutual information between the acoustic sequence and word sequences, or the Minimum Bayes Risk (MBR) criterion [5].
More recently, standalone neural network architectures have been trained using criteria which jointly infer the segmentation of the transcription while increasing the overall score of the right transcription [6, 17]. The most popular one is certainly the Connectionist Temporal Classification (CTC) criterion, which is at the core of Baidu's Deep Speech architecture [1]. CTC assumes that the network outputs probability scores, normalized at the frame level. It considers all possible sequences of letters (or any sub-word units) which can lead to a given transcription. CTC also allows a special "blank" state to be optionally inserted between letters. The rationale behind the blank state is two-fold: (i) modeling "garbage" frames which might occur between letters, and (ii) identifying the separation between two identical consecutive letters in a transcription. Figure 2a shows an example of the sequences accepted by CTC for a given transcription. In practice, this graph is unfolded, as shown in Figure 2b, over the available frames output by the acoustic model. We denote $\mathcal{G}_{ctc}(\theta, T)$ an unfolded graph over $T$ frames for a given transcription $\theta$, and $\pi = \pi_1, \ldots, \pi_T \in \mathcal{G}_{ctc}(\theta, T)$ a path in this graph representing a (valid) sequence of letters for this transcription. At each time step $t$, each node of the graph is assigned the corresponding log-probability letter (which we denote $f_t(\cdot)$) output by the acoustic model. CTC aims at maximizing the "overall" score of paths in $\mathcal{G}_{ctc}(\theta, T)$; for that purpose, it minimizes the Forward score:
$$\mathrm{CTC}(\theta, T) = -\operatorname{logadd}_{\pi \in \mathcal{G}_{ctc}(\theta, T)} \sum_{t=1}^{T} f_{\pi_t}(x)\,, \qquad (2)$$
where the "logadd" operation, also often called "log-sum-exp", is defined as logadd(a, b) = log(exp(a) + exp(b)). This overall score can be efficiently computed with the Forward algorithm. To put things in perspective, if one were to replace the logadd(·) by a max(·) in (2) (which can then be efficiently computed by the Viterbi algorithm, the counterpart of the Forward algorithm), one would then maximize the score of the best path, according to the model belief. The logadd(·) can be seen as a smooth version of the max(·): paths with similar scores will be attributed the same weight in the overall score (and hence receive the same gradient), and paths with much larger scores will have much more overall weight than paths with low scores. In practice, using the logadd(·) works much better than the max(·). It is also worth noting that maximizing (2) does not diverge, as the acoustic model is assumed to output normalized scores (log-probabilities) f_i(·).
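The following toy sketch shows the two ingredients just described: a numerically stable logadd and a Forward pass that accumulates the overall path score of Equation (2) over an unfolded graph; the three-state graph is a made-up example.

```python
import numpy as np

def logadd(a, b):
    # log(exp(a) + exp(b)), computed stably
    return np.logaddexp(a, b)

def forward_score(f, predecessors):
    """f: (T, S) per-frame log-scores for S graph states; predecessors[s]
    lists the states that may transition into s. Returns the logadd over
    all T-frame paths, i.e. the overall score in Equation (2)."""
    T, S = f.shape
    alpha = f[0].copy()
    for t in range(1, T):
        new = np.full(S, -np.inf)
        for s in range(S):
            for p in predecessors[s]:
                new[s] = logadd(new[s], alpha[p])
        alpha = new + f[t]
    return np.logaddexp.reduce(alpha)

scores = np.log(np.random.rand(5, 3))      # 5 frames, 3 states ('c', 'a', 't')
preds = {0: [0], 1: [0, 1], 2: [1, 2]}     # simple left-to-right graph
print(forward_score(scores, preds))
```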
In this paper, we explore an alternative to CTC, with three differences: (i) there are no blank labels, (ii) un-normalized scores on the nodes (and possibly un-normalized transition scores on the edges) (iii) global normalization instead of per-frame normalization:
• The advantage of (i) is that it produces a much simpler graph (see Figure 3a and Figure 3b). We found that in practice there was no advantage of having a blank class to model the
Figure 2: The CTC criterion graph. (a) Graph which represents all the acceptable sequences of letters (with the blank state denoted "∅") for the transcription "cat". (b) The same graph unfolded over 5 frames. There are no transition scores. At each time step, nodes are assigned a conditional probability output by the neural network acoustic model.
possible "garbage" frames between letters. Modeling letter repetitions (which is also an important quality of the blank label in CTC) can easily be replaced by repetition character labels (we used two extra labels for two and three repetitions). For example "caterpillar" could be written as "caterpil2ar", where "2" is a label representing the repetition of the previous letter. Not having blank labels also simplifies the decoder.

• With (ii) one can easily plug in an external language model, which would insert transition scores on the edges of the graph. This could be particularly useful in future work, if one wanted to model representations more high-level than letters. In that respect, avoiding normalized transitions is important to alleviate the problem of "label bias" [3, 11]. In this work, we limited ourselves to transition scalars, which are learned together with the acoustic model.

• The normalization evoked in (iii) is necessary when using un-normalized scores on nodes or edges; it ensures incorrect transcriptions will have a low confidence.
In the following, we name our criterion the "Auto Segmentation Criterion" (ASG). Considering the same notations as for CTC in (2), an unfolded graph $\mathcal{G}_{asg}(\theta, T)$ over $T$ frames for a given transcription $\theta$ (as in Figure 3b), as well as a fully connected graph $\mathcal{G}_{full}(\theta, T)$ over $T$ frames (representing all possible sequences of letters, as in Figure 3c), ASG aims at minimizing:
$$\mathrm{ASG}(\theta, T) = -\operatorname{logadd}_{\pi \in \mathcal{G}_{asg}(\theta, T)} \sum_{t=1}^{T} \left( f_{\pi_t}(x) + g_{\pi_{t-1}, \pi_t}(x) \right) + \operatorname{logadd}_{\pi \in \mathcal{G}_{full}(\theta, T)} \sum_{t=1}^{T} \left( f_{\pi_t}(x) + g_{\pi_{t-1}, \pi_t}(x) \right)\,, \qquad (3)$$

where $g_{i,j}(\cdot)$ is a transition score model for jumping from label $i$ to label $j$. The left-hand part of (3) promotes sequences of letters leading to the right transcription, and the right-hand part demotes all sequences of letters. As for CTC, these two parts can be efficiently computed with the Forward algorithm. Derivatives with respect to $f_i(\cdot)$ and $g_{i,j}(\cdot)$ can be obtained (the maths are a bit tedious) by applying the chain rule through the Forward recursion.
# 2.4 Beam-Search Decoder
We wrote our own one-pass decoder, which performs a simple beam search with beam thresholding, histogram pruning and language model smearing [26]. We kept the decoder as simple as possible (under 1000 lines of C code). We did not implement any sort of model adaptation before decoding, nor any word graph rescoring. Our decoder relies on KenLM [9] for the language modeling part. It also accepts un-normalized acoustic scores (transitions and emissions from the acoustic model) as input. The decoder attempts to maximize the following:
$$\mathcal{L}(\theta) = \operatorname{logadd}_{\pi \in \mathcal{G}_{asg}(\theta, T)} \sum_{t=1}^{T} \left( f_{\pi_t}(x) + g_{\pi_{t-1}, \pi_t}(x) \right) + \alpha \log P_{lm}(\theta) + \beta |\theta|\,, \qquad (4)$$
Figure 3: The ASG criterion graph. (a) Graph which represents all the acceptable sequences of letters for the transcription "cat". (b) The same graph unfolded over 5 frames. (c) The corresponding fully connected graph, which describes all possible sequences of letters; this graph is used for normalization purposes. Un-normalized transition scores are possible on the edges. At each time step, nodes are assigned a conditional un-normalized score, output by the neural network acoustic model.
where Plm(θ) is the probability of the language model given a transcription θ, α and β are two hyper-parameters which control the weight of the language model and the word insertion penalty respectively.
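A toy illustration of how Equation (4) combines the three terms when scoring a decoder hypothesis; the numeric scores and the α, β values below are made up.

```python
def hypothesis_score(acoustic_logadd, lm_logprob, n_words, alpha=0.8, beta=1.5):
    # acoustic path score + alpha * log P_lm(theta) + beta * |theta|
    return acoustic_logadd + alpha * lm_logprob + beta * n_words

print(hypothesis_score(acoustic_logadd=-120.3, lm_logprob=-14.2, n_words=4))
```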
# 3 Experiments
We implemented everything using Torch71. The ASG criterion as well as the decoder were imple- mented in C (and then interfaced into Torch).
We consider as benchmark LibriSpeech, a large speech database freely available for download [18]. LibriSpeech comes with its own train, validation and test sets. Except when speciï¬ed, we used all the available data (about 1000h of audio ï¬les) for training and validating our models. We use the original 16 KHz sampling rate. The vocabulary contains 30 graphemes: the standard English alphabet plus the apostrophe, silence, and two special ârepetitionâ graphemes which encode the duplication (once or twice) of the previous letter (see Section 2.3).
The architecture hyper-parameters, as well the decoder ones were tuned using the validation set. In the following, we either report letter-error-rates (LERs) or word-error-rates (WERs). WERs have been obtained by using our own decoder (see Section 2.4), with the standard 4-gram language model provided with LibriSpeech2.
MFCC features are computed with 13 coefï¬cients, a 25 ms sliding window and 10 ms stride. We included ï¬rst and second order derivatives. Power spectrum features are computed with a 25 ms window, 10 ms stride, and have 257 components. All features are normalized (mean 0, std 1) per input sequence.
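One plausible way to reproduce this feature extraction in Python, assuming the librosa library (the paper does not specify its extraction code); normalizing per coefficient over the sequence is an assumption here.

```python
import numpy as np
import librosa

def mfcc_features(wav, sr=16000):
    m = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=13,
                             n_fft=int(0.025 * sr),        # 25 ms window
                             hop_length=int(0.010 * sr))   # 10 ms stride
    feats = np.vstack([m,
                       librosa.feature.delta(m),           # first derivative
                       librosa.feature.delta(m, order=2)]) # second derivative
    mean = feats.mean(axis=1, keepdims=True)
    std = feats.std(axis=1, keepdims=True) + 1e-8
    return ((feats - mean) / std).T                        # (frames, 39)
```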
# 3.1 Results
Table 1 reports a comparison between CTC and ASG, in terms of LER and speed. Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions when possible. Our batching is done with an OpenMP parallel for. We picked the CTC criterion implementation provided by Baidu3. Both criteria lead to the same LER. For comparing the speed, we report performance for sequence sizes as reported initially by Baidu, but also for longer sequence sizes, which corresponds to our average use
1http://www.torch.ch. 2http://www.openslr.org/11. 3https://github.com/baidu-research/warp-ctc.
Table 1: CTC vs ASG. CTC is Baidu's implementation. ASG is implemented on CPU (core in C, threading in Lua). (a) reports performance in LER. Timings (in ms) for small sequences (input frames: 150, letter vocabulary size: 28, transcription size: 40) and long sequences (input frames: 700, letter vocabulary size: 28, transcription size: 200) are reported in (b) and (c) respectively. Timings include both forward and backward passes. CPU implementations use 8 threads.
(a) Letter error rate (LER, %):

         dev-clean   test-clean
CTC      10.7        10.5
ASG      10.4        10.1

(b) Timings (ms), small sequences:

batch size   CTC CPU   CTC GPU   ASG CPU
1            1.9       5.9       2.5
4            2.0       6.0       2.8
8            2.0       6.1       2.8

(c) Timings (ms), long sequences:

batch size   CTC CPU   CTC GPU   ASG CPU
1            40.9      97.9      16.0
4            41.6      99.6      17.7
8            41.7      100.3     19.2
Figure 4: Valid LER (a) and WER (b) vs. training set size (10h, 100h, 200h, 1000h). This compares MFCC-based and power spectrum-based (POW) architectures. AUG experiments include data augmentation. In (b) we provide Baidu Deep Speech 1 and 2 numbers on LibriSpeech as a comparison [8, 1].
case. ASG appears faster on long sequences, even though it is running on CPU only. Baidu's GPU CTC implementation seems more aimed at larger vocabularies (e.g. 5000 Chinese characters).
We also investigated the impact of the training set size, as well as the effect of a simple data augmentation procedure, where shifts were introduced in the input frames, as well as stretching. For that purpose, we tuned the size of our architectures (given a particular size of the dataset) to avoid over-fitting. Figure 4a shows that augmentation helps for small training set sizes. However, with enough training data, the effect of data augmentation vanishes, and both types of features appear to perform similarly. Figure 4b reports the WER with respect to the available training data size. We observe that we compare very well against Deep Speech 1 & 2, which were trained with much more data [8, 1].
Finally, we report in Table 2 the best results of our system so far, trained on 1000h of speech, for each type of features. The overall stride of architectures is 320 (see Figure 1), which produces a label every 20 ms. We found that one could squeeze out about 1% in performance by reï¬ning the precision of the output. This is efï¬ciently achieved by shifting the input sequence, and feeding it to the network
Table 2: LER/WER of the best sets of hyper-parameters for each feature type.
Features          dev-clean LER   test-clean LER   test-clean WER
MFCC              6.9             6.9              7.2
Power spectrum    9.3             9.1              9.4
Raw               10.3            10.6             10.1
several times. Results in Table 2 were obtained with a single extra shift of 10 ms. Both power spectrum and raw features perform slightly worse than MFCCs. One could expect, however, that with enough data (see Figure 4) the gap would vanish.
# 4 Conclusion
We have introduced a simple end-to-end automatic speech recognition system, which combines a standard 1D convolutional neural network, a sequence criterion which can infer the segmentation, and a simple beam-search decoder. The decoding results are competitive on the LibriSpeech corpus with MFCC features (7.2% WER), and promising with power spectrum and raw speech (9.4% WER and 10.1% WER respectively). We showed that our AutoSegCriterion can be faster than CTC [6], and as accurate (table 1). Our approach breaks free from HMM/GMM pre-training and force-alignment, as well as not being as computationally intensive as RNN-based approaches [1] (on average, one LibriSpeech sentence is processed in less than 60ms by our ConvNet, and the decoder runs at 8.6x on a single thread).
# References
[1] AMODEI, D., ANUBHAI, R., BATTENBERG, E., CASE, C., CASPER, J., CATANZARO, B., CHEN, J., CHRZANOWSKI, M., COATES, A., DIAMOS, G., ET AL. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595 (2015).
[2] BAHL, L. R., BROWN, P. F., DE SOUZA, P. V., AND MERCER, R. L. Maximum mutual information estimation of hidden markov model parameters for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 1986 IEEE International Conference on (1986), IEEE, pp. 49â52.
[3] BOTTOU, L. Une approche theorique de lâapprentissage connexionniste et applications a la reconnaissance de la parole. PhD thesis, 1991.
[4] BOTTOU, L., BENGIO, Y., AND LE CUN, Y. Global training of document processing sys- tems using graph transformer networks. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on (1997), IEEE, pp. 489â494.
[5] GIBSON, M., AND HAIN, T. Hypothesis spaces for minimum bayes risk training in large vocabulary speech recognition. In Proceedings of INTERSPEECH (2006), IEEE, pp. 2406–2409.
[6] GRAVES, A., FERNÁNDEZ, S., GOMEZ, F., AND SCHMIDHUBER, J. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning (2006), ACM, pp. 369–376.
[7] GRAVES, A., MOHAMED, A.-R., AND HINTON, G. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on (2013), IEEE, pp. 6645–6649.
[8] HANNUN, A., CASE, C., CASPER, J., CATANZARO, B., DIAMOS, G., ELSEN, E., PRENGER, R., SATHEESH, S., SENGUPTA, S., COATES, A., ET AL. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567 (2014).
[9] HEAFIELD, K., POUZYREVSKY, I., CLARK, J. H., AND KOEHN, P. Scalable modiï¬ed kneser-ney language model estimation. In ACL (2) (2013), pp. 690â696.
[10] HINTON, G., DENG, L., YU, D., DAHL, G. E., MOHAMED, A.-R., JAITLY, N., SENIOR, A., VANHOUCKE, V., NGUYEN, P., SAINATH, T. N., ET AL. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE 29, 6 (2012), 82â97.
[11] LAFFERTY, J., MCCALLUM, A., AND PEREIRA, F. Conditional random ï¬elds: Probabilistic models for segmenting and labeling sequence data. In Eighteenth International Conference on Machine Learning, ICML (2001).
[12] LECUN, Y., AND BENGIO, Y. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361, 10 (1995), 1995.
[13] MIAO, Y., GOWAYYED, M., AND METZE, F. Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding. arXiv preprint arXiv:1507.08240 (2015).
[14] MOHAMED, A.-R., DAHL, G. E., AND HINTON, G. Acoustic modeling using deep belief networks. Audio, Speech, and Language Processing, IEEE Transactions on 20, 1 (2012), 14â22.
[15] PALAZ, D., COLLOBERT, R., AND DOSS, M. M. Estimating phoneme class conditional probabilities from raw speech signal using convolutional neural networks. arXiv preprint arXiv:1304.1018 (2013).
[16] PALAZ, D., COLLOBERT, R., ET AL. Analysis of cnn-based speech recognition system using raw speech as input. In Proceedings of Interspeech (2015), no. EPFL-CONF-210029.
[17] PALAZ, D., MAGIMAI-DOSS, M., AND COLLOBERT, R. Joint phoneme segmentation inference and classification using CRFs. In Signal and Information Processing (GlobalSIP), 2014 IEEE Global Conference on (2014), IEEE, pp. 587–591.
[18] PANAYOTOV, V., CHEN, G., POVEY, D., AND KHUDANPUR, S. Librispeech: an asr corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (2015), IEEE, pp. 5206â5210.
[19] PEDDINTI, V., CHEN, G., MANOHAR, V., KO, T., POVEY, D., AND KHUDANPUR, S. Jhu aspire system: Robust lvcsr with tdnns, i-vector adaptation, and rnn-lms. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (2015).
[20] PEDDINTI, V., POVEY, D., AND KHUDANPUR, S. A time delay neural network architecture for efï¬cient modeling of long temporal contexts. In Proceedings of INTERSPEECH (2015).
[21] SAON, G., KUO, H.-K. J., RENNIE, S., AND PICHENY, M. The ibm 2015 english conversa- tional telephone speech recognition system. arXiv preprint arXiv:1505.05899 (2015).
[22] SAON, G., SOLTAU, H., NAHAMOO, D., AND PICHENY, M. Speaker adaptation of neural network acoustic models using i-vectors. In ASRU (2013), pp. 55â59.
[23] SENIOR, A., HEIGOLD, G., BACCHIANI, M., AND LIAO, H. Gmm-free dnn training. In Proceedings of ICASSP (2014), pp. 5639â5643.
[24] SERCU, T., PUHRSCH, C., KINGSBURY, B., AND LECUN, Y. Very deep multilingual convolutional neural networks for lvcsr. arXiv preprint arXiv:1509.08967 (2015).
[25] SOLTAU, H., SAON, G., AND SAINATH, T. N. Joint training of convolutional and non- convolutional neural networks. In ICASSP (2014), pp. 5572â5576.
[26] STEINBISS, V., TRAN, B.-H., AND NEY, H. Improvements in beam search. In ICSLP (1994), vol. 94, pp. 2143â2146.
[27] WOODLAND, P. C., AND YOUNG, S. J. The htk tied-state continuous speech recogniser. In Eurospeech (1993).
| {
"id": "1509.08967"
} |
1609.02200 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 |
Published as a conference paper at ICLR 2017
# DISCRETE VARIATIONAL AUTOENCODERS
# Jason Tyler Rolfe D-Wave Systems Burnaby, BC V5G-4M9, Canada jrolfe@dwavesys.com
# ABSTRACT
Probabilistic models with discrete latent variables naturally capture datasets com- posed of discrete classes. However, they are difï¬cult to train efï¬ciently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models com- prises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the discon- nected smooth manifolds induced by the continuous component. As a result, this class of models efï¬ciently learns both the class of objects in an image, and their speciï¬c realization in pixels, from unsupervised data; and outperforms state-of- the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
# INTRODUCTION
Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classiï¬cation (Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting. At the same time, it is extremely difï¬cult to directly transform the image of a person to one of a car while remaining on the manifold of natural images.
It would be natural to represent the space within each disconnected component with continuous vari- ables, and the selection amongst these components with discrete variables. In contrast, most state- of-the-art probabilistic models use exclusively discrete variables â as do DBMs (Salakhutdinov & Hinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lau- ritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014) â or exclusively continuous variables â as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow et al., 2014).1 Moreover, it would be desirable to apply the efï¬cient variational autoencoder frame- work to models with discrete values, but this has proven difï¬cult, since backpropagation through discrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015).
We introduce a novel class of probabilistic models, comprising an undirected graphical model de- ï¬ned over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its speciï¬c con- tinuously deformable realization. Moreover, we show how these models can be trained efï¬ciently using the variational autoencoder framework, including backpropagation through the binary latent variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong corre- lations. Since these models efï¬ciently marry the variational autoencoder framework with discrete latent variables, we call them discrete variational autoencoders (discrete VAEs).
1Spike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables.
1.1 VARIATIONAL AUTOENCODERS ARE INCOMPATIBLE WITH DISCRETE DISTRIBUTIONS
Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log- likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling from the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby, 1993).
In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound $\mathcal{L}(x, \theta, \phi)$ on the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO; Hinton & Zemel, 1994):

$$\mathcal{L}(x, \theta, \phi) = \log p(x \mid \theta) - \mathrm{KL}\left[ q(z \mid x, \phi) \,\|\, p(z \mid x, \theta) \right]\,, \qquad (1)$$

where $q(z \mid x, \phi)$ is a computationally tractable approximation to the posterior distribution $p(z \mid x, \theta)$. We denote the observed random variables by $x$, the latent random variables by $z$, the parameters of the generative model by $\theta$, and the parameters of the approximating posterior by $\phi$. The variational autoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups the evidence lower bound of Equation 1 as:
$$\mathcal{L}(x, \theta, \phi) = \underbrace{-\mathrm{KL}\left[ q(z \mid x, \phi) \,\|\, p(z \mid \theta) \right]}_{\text{KL term}} + \underbrace{\mathbb{E}_{q}\left[ \log p(x \mid z, \theta) \right]}_{\text{autoencoding term}}\,. \qquad (2)$$
In many cases of practical interest, such as Gaussian $q(z \mid x)$ and $p(z)$, the KL term of Equation 2 can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior $q(z \mid x)$ can be drawn using a differentiable, deterministic function $f(x, \phi, \rho)$ of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables $\rho \sim \mathcal{D}$. For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, $\mathcal{N}(m(x, \phi), v(x, \phi))$, using $f(x, \phi, \rho) = m(x, \phi) + \sqrt{v(x, \phi)} \cdot \rho$, where $\rho \sim \mathcal{N}(0, 1)$:
$$\frac{\partial}{\partial \theta} \mathbb{E}_{q(z \mid x, \phi)}\left[ \log p(x \mid z, \theta) \right] \approx \frac{1}{N} \sum_{\rho \sim \mathcal{D}} \frac{\partial}{\partial \theta} \log p\left(x \mid f(x, \rho, \phi), \theta\right)\,. \qquad (3)$$
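A minimal PyTorch sketch of Equation 3 for the Gaussian example above: the sample of z is a differentiable function of the parameter-free noise ρ, so the gradient flows through f(x, ρ, φ); the tiny encoder and decoder are illustrative.

```python
import torch

x = torch.randn(16, 10)                  # a batch of observations
enc = torch.nn.Linear(10, 2 * 4)         # produces m(x, phi) and log v(x, phi)
dec = torch.nn.Linear(4, 10)             # parameterizes p(x | z, theta)

m, log_v = enc(x).chunk(2, dim=1)
rho = torch.randn_like(m)                # rho ~ N(0, 1), parameter-independent
z = m + torch.exp(0.5 * log_v) * rho     # f(x, rho, phi): differentiable in phi
log_px = -0.5 * ((x - dec(z)) ** 2).sum(dim=1)  # Gaussian log p(x|z) up to const.
log_px.mean().backward()                 # low-variance gradient estimate, as in (3)
```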
The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix A, where we find that an analog of Equation 3 holds. Specifically,

$$\frac{\partial}{\partial \theta} \mathbb{E}_{q(z \mid x, \phi)}\left[ \log p(x \mid z, \theta) \right] \approx \frac{1}{N} \sum_{\rho \sim \mathcal{D}} \frac{\partial}{\partial \theta} \log p\left(x \mid F^{-1}(\rho), \theta\right)\,, \qquad (4)$$

where each $\mathcal{D}_i$ is the uniform distribution between 0 and 1, and $f(x) = F^{-1}(x)$,
where F is the conditional-marginal cumulative distribution function (CDF) deï¬ned by:
$$F_i(x) = \int_{x_i' = -\infty}^{x} p\left(x_i' \mid x_1, \ldots, x_{i-1}\right)\,. \qquad (5)$$

However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable.
A formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs) (Smolensky, 1986):
$$p(z) = \frac{e^{-E_p(z)}}{\mathcal{Z}_p} = \frac{e^{z^\top W z + b^\top z}}{\mathcal{Z}_p}\,, \qquad (6)$$
where $z \in \{0, 1\}^n$, $\mathcal{Z}_p$ is the partition function of $p(z)$, and the lateral connection matrix $W$ is triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval [0, 1]. The domain of the inverse CDF is thus also a proper subset of [0, 1], and its derivative is not defined, as required in Equations 3 and 4.²
2This problem remains even if we use the quantile function, $F_i^{-1}(\rho) = \inf\left\{ z \in \mathbb{R} : \sum_{z' \le z} p(z') \ge \rho \right\}$, the derivative of which is either zero or infinite if $p$ is a discrete distribution.
In the following sections, we present the discrete variational autoencoder (discrete VAE), a hierarchical probabilistic model consisting of an RBM,3 followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in Equation 3, including backpropagation through its discrete latent variables.
1.2 RELATED WORK
Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016), Hamiltonian variational inference (Salimans et al., 2015), normalizing ï¬ows (Rezende & Mohamed, 2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the pos- terior distribution. Ladder variational autoencoders (Sønderby et al., 2016) increase the power of the architecture of both approximating posterior and prior. Neural adaptive importance sampling (Du et al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approxi- mations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distri- butions (Johnson et al., 2016).
It is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Un- fortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in Appendix B.
Prior efforts by Makhzani et al. (2015) to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. Rectiï¬ed Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior factorial and thus unimodal (Salimans, 2016). Graves (2016) computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units.
The generative model underlying the discrete variational autoencoder resembles a deep belief net- work (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltz- mann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer j receives connections from all previous layers i < j, with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound.
2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDING CONTINUOUS LATENT VARIABLES
When working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-marginal CDF (deï¬ned by Equation 5 and Appendix A) by augmenting the latent representation with a set of continous random variables. The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations 3 and 4. We redeï¬ne the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter
3Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to the rest of the model. In contrast to a traditional RBM, there is no distinction between the âvisibleâ units and the âhiddenâ units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome âfully hidden bipartite Boltzmann machine.â
(a) Approximating posterior q(ζ, z|x) (b) Prior p(x, ζ, z) (c) Autoencoding term
Figure 1: Graphical models of the smoothed approximating posterior (a) and prior (b), and the network realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent variables $\zeta_i$ are smoothed analogs of discrete latent variables $z_i$, and insulate $z$ from the observed variables $x$ in the prior (b). This facilitates the marginalization of the discrete $z$ in the autoencoding term of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable given independent stochastic input $\rho \sim U[0, 1]$.
the fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted as adding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a small minibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and the prior. The conceptual motivation for this approach is discussed in Appendix C.
Speciï¬cally, as shown in Figure 1a, we augment the latent representation in the approximating pos- terior with continuous random variables ζ,4 conditioned on the discrete latent variables z of the RBM:
$$q(\zeta, z \mid x, \phi) = r(\zeta \mid z) \cdot q(z \mid x, \phi)\,, \qquad \text{where} \quad r(\zeta \mid z) = \prod_i r(\zeta_i \mid z_i)\,.$$
The support of $r(\zeta \mid z)$ for all values of $z$ must be connected, so the marginal distribution $q(\zeta \mid x, \phi) = \sum_z r(\zeta \mid z) \cdot q(z \mid x, \phi)$ has a constant, connected support so long as $0 < q(z \mid x, \phi) < 1$. We further require that $r(\zeta \mid z)$ is continuous and differentiable except at the endpoints of its support, so the inverse conditional-marginal CDF of $q(\zeta \mid x, \phi)$ is differentiable in Equations 3 and 4, as we discuss in Appendix A.
As shown in Figure 1b, we correspondingly augment the prior with ζ:
$$p(\zeta, z \mid \theta) = r(\zeta \mid z) \cdot p(z \mid \theta)\,,$$
where $r(\zeta \mid z)$ is the same as for the approximating posterior. Finally, we require that the conditional distribution over the observed variables only depends on $\zeta$:

$$p(x \mid \zeta, z, \theta) = p(x \mid \zeta, \theta)\,. \qquad (7)$$

The smoothing distribution $r(\zeta \mid z)$ transforms the model into a continuous function of the distribution over $z$, and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient.
Given this expansion, we can simplify Equations 3 and 4 by dropping the dependence on $z$ and applying Equation 16 of Appendix A, which generalizes Equation 3:

$$\frac{\partial}{\partial \theta} \mathbb{E}_{q(\zeta \mid x, \phi)}\left[ \log p(x \mid \zeta, \theta) \right] \approx \frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \theta} \log p\left(x \mid F_{q(\zeta \mid x, \phi)}^{-1}(\rho), \theta\right)\,. \qquad (8)$$
4We always use a variant of z for latent variables. This is zeta, or Greek z. The discrete latent variables z
can conveniently be thought of as English z.
If the approximating posterior is factorial, then each Fi is an independent CDF, without conditioning or marginalization.
As we shall demonstrate in Section 2.1, $F_{q(\zeta \mid x, \phi)}^{-1}(\rho)$ is a function of $q(z = 1 \mid x, \phi)$, where $q(z = 1 \mid x, \phi)$ is a deterministic probability value calculated by a parameterized function, such as a neural network. The autoencoder implicit in Equation 8 is shown in Figure 1c. Initially, input $x$ is passed into a deterministic feedforward network $q(z = 1 \mid x, \phi)$, for which the final nonlinearity is the logistic function. Its output $q$, along with an independent random variable $\rho \sim U[0, 1]$, is passed into the deterministic function $F_{q(\zeta \mid x, \phi)}^{-1}(\rho)$ to produce a sample of $\zeta$. This $\zeta$, along with the original input $x$, is finally passed to $\log p(x \mid \zeta, \theta)$. The expectation of this log probability with respect to $\rho$ is the autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the input and the independent $\rho$, this autoencoder is deterministic and differentiable, so backpropagation can be used to produce a low-variance, computationally-efficient approximation to the gradient.
# 2.1 SPIKE-AND-EXPONENTIAL SMOOTHING TRANSFORMATION
As a concrete example consistent with sparse coding, consider the spike-and-exponential transfor- mation from binary z to continuous ζ:
$$r(\zeta_i \mid z_i = 0) = \begin{cases} \infty, & \text{if } \zeta_i = 0 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i \mid z_i = 0)}(\zeta') = 1$$

$$r(\zeta_i \mid z_i = 1) = \begin{cases} \dfrac{\beta e^{\beta \zeta_i}}{e^{\beta} - 1}, & \text{if } 0 \le \zeta_i \le 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i \mid z_i = 1)}(\zeta') = \dfrac{e^{\beta \zeta'} - 1}{e^{\beta} - 1}\,,$$
where $F_p(\zeta') = \int_0^{\zeta'} p(\zeta)\, d\zeta$ is the CDF of probability distribution $p$ in the domain [0, 1]. This transformation from $z_i$ to $\zeta_i$ is invertible: $\zeta_i = 0 \Leftrightarrow z_i = 0$, and $\zeta_i > 0 \Leftrightarrow z_i = 1$ almost surely.5 We can now find the CDF for $q(\zeta \mid x, \phi)$ as a function of $q(z = 1 \mid x, \phi)$ in the domain (0, 1], marginalizing out the discrete $z$:
$$F_{q(\zeta \mid x, \phi)}(\zeta') = \left(1 - q(z = 1 \mid x, \phi)\right) \cdot F_{r(\zeta \mid z = 0)}(\zeta') + q(z = 1 \mid x, \phi) \cdot F_{r(\zeta \mid z = 1)}(\zeta') = q(z = 1 \mid x, \phi) \cdot \left( \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} - 1 \right) + 1\,.$$
To evaluate the autoencoder of Figure 1c, and through it the gradient approximation of Equation 8, we must invert the conditional-marginal CDF $F_{q(\zeta \mid x, \phi)}$:

$$F_{q(\zeta \mid x, \phi)}^{-1}(\rho) = \begin{cases} \dfrac{1}{\beta} \cdot \log\left[ \left( \dfrac{\rho + q - 1}{q} \right) \cdot \left( e^{\beta} - 1 \right) + 1 \right], & \text{if } \rho \ge 1 - q \\ 0, & \text{otherwise} \end{cases}$$
where we use the substitution $q(z = 1 \mid x, \phi) \rightarrow q$ to simplify notation. For all values of the independent random variable $\rho \sim U[0, 1]$, the function $F_{q(\zeta \mid x, \phi)}^{-1}(\rho)$ rectifies the input $q(z = 1 \mid x, \phi)$ if $\rho \le 1 - q$, in a manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is also quasi-sigmoidal, in that $F^{-1}$ is increasing but concave-down if $\rho > 1 - q$. The effect of $\rho$ on $F^{-1}$ is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the noise injected by batch normalization (Ioffe & Szegedy, 2015) using small minibatches, shown in Figure 2c.
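The inverse CDF above is straightforward to implement; the following NumPy sketch clips the argument so that values of ρ below 1 − q land exactly on the spike at ζ = 0.

```python
import numpy as np

def spike_and_exp_icdf(rho, q, beta=3.0):
    # (rho + q - 1)/q is negative on the spike; clipping it to 0 makes
    # log1p(0) = 0, i.e. zeta = 0, reproducing the "otherwise" branch.
    scaled = np.clip((rho + q - 1.0) / q, 0.0, 1.0)
    return np.log1p(scaled * np.expm1(beta)) / beta

q = np.full(5, 0.7)                       # q(z=1|x, phi) for five units
rho = np.random.rand(5)                   # rho ~ U[0, 1]
print(spike_and_exp_icdf(rho, q))         # samples of zeta in [0, 1]
```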
Other expansions to the continuous space are possible. In Appendix D.1, we consider the case where both $r(\zeta_i \mid z_i = 0)$ and $r(\zeta_i \mid z_i = 1)$ are linear functions of $\zeta$; in Appendix D.2, we develop a spike-and-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where the continuous $\zeta$ is directly dependent on the input $x$ in addition to the discrete $z$.
5In the limit $\beta \rightarrow \infty$, $\zeta_i = z_i$ almost surely, and the continuous variables $\zeta$ can effectively be removed from the model. This trick can be used after training with finite $\beta$ to produce a model without smoothing variables $\zeta$.
(a) Spike-and-exp, β ∈ {1, 3, 5} (b) ReLU with dropout (c) ReLU with batch norm
Figure 2: Inverse CDF of the spike-and-exponential smoothing transformation for several values of ρ; β = 1 (dotted), β = 3 (solid), and β = 5 (dashed) (a). Rectified linear unit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization, with magnitude 0.3 (dashed), −0.3 (dotted), or 0 (solid blue), before a rectified linear unit (c). In all cases, the abscissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity $F_{q(\zeta \mid x, \phi)}^{-1}(\rho)$ from Figure 1c, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c).
# 3 ACCOMMODATING EXPLAINING-AWAY WITH A HIERARCHICAL APPROXIMATING POSTERIOR
When a probabilistic model is defined in terms of a prior distribution $p(z)$ and a conditional distribution $p(x \mid z)$, the observation of $x$ often induces strong correlations in the posterior $p(z \mid x)$ due to phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the prior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014)).
To accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior $q(z \mid x)$ over the discrete latent variables. Specifically, we divide the latent variables $z$ of the RBM into disjoint groups, $z_1, \ldots, z_k$,6 and define the approximating posterior via a directed acyclic graphical model over these groups:
$$q(\zeta_1, z_1, \ldots, \zeta_k, z_k \mid x, \phi) = \prod_{1 \le j \le k} r(\zeta_j \mid z_j) \cdot q(z_j \mid \zeta_{i<j}, x, \phi), \quad \text{where}$$
$$q(z_j \mid \zeta_{i<j}, x, \phi) = \frac{e^{g_j(\zeta_{i<j}, x, \phi)^{\top} \cdot z_j}}{\prod_{z_\iota \in z_j}\left(1 + e^{g_\iota(\zeta_{i<j}, x, \phi)}\right)}, \tag{10}$$
where zj ∈ {0, 1}ⁿ, and gj(ζi<j, x, φ) is a parameterized function of the inputs and the preceding ζi, such as a neural network. The corresponding graphical model is depicted in Figure 3a, and the integration of such hierarchical approximating posteriors into the reparameterization trick is discussed in Appendix A. If each group zj contains a single variable, this dependence structure is analogous to that of a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution. However, the dependence of zj on the preceding discrete variables zi<j is always mediated by the continuous variables ζi<j.
This hierarchical approximating posterior does not affect the form of the autoencoding term in Equation 8, except to increase the depth of the autoencoder, as shown in Figure 3b. The deterministic probability value q(zj = 1|ζi<j, x, φ) of Equation 10 is parameterized, generally by a neural network, in a manner analogous to Section 2. However, the final logistic function is made explicit in Equation 10 to simplify Equation 12. For each successive layer j of the autoencoder, the input x and all previous ζi<j are passed into the network computing q(zj = 1|ζi<j, x, φ). Its output qj, along with an
6The continuous latent variables ζ are divided into complementary disjoint groups ζ1, . . . , ζk.
(a) Hierarch approx post q(ζ, z|x) (b) Hierarchical ELBO autoencoding term
Figure 3: Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables zj only depend on the previous zi<j through their smoothed analogs ζi<j. The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input Ï.
independent random variable ρ ∼ U[0, 1], is passed into the deterministic function F⁻¹_{q(ζj|ζi<j,x,φ)}(ρ) to produce a sample of ζj. Once all ζj have been recursively computed, the full ζ along with the original input x is finally passed to log p(x|ζ, θ). The expectation of this log probability with respect to ρ is again the autoencoding term of the VAE formalism, as in Equation 2.
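A minimal sketch of this recursion, assuming hypothetical logit networks g_j in place of the neural networks of Equation 10 and reusing the inverse-CDF helper sketched above:

```python
import numpy as np

def sample_hierarchical_posterior(x, logit_nets, rng, beta=5.0):
    # logit_nets[j](inputs) -> logits g_j of q(z_j = 1 | zeta_{i<j}, x);
    # these callables are placeholders, not the paper's networks.
    zetas = []
    for g_j in logit_nets:
        inputs = np.concatenate([x] + zetas, axis=-1)
        q_j = 1.0 / (1.0 + np.exp(-g_j(inputs)))   # explicit final logistic
        rho = rng.random(q_j.shape)                # independent U[0, 1] noise
        zetas.append(spike_and_exp_inverse_cdf(q_j, rho, beta))
    return np.concatenate(zetas, axis=-1)          # full zeta, fed to log p(x | zeta)
```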
In Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can be estimated stochastically using:
$$\frac{\partial}{\partial\theta}\,\mathrm{KL}\left[q \,\|\, p\right] = \mathbb{E}_{q(\zeta|x,\phi)}\left[\mathbb{E}_{q(z|\zeta,x,\phi)}\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right]\right] - \mathbb{E}_{p(z|\theta)}\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right] \quad \text{and} \tag{11}$$
$$\frac{\partial}{\partial\phi}\,\mathrm{KL}\left[q \,\|\, p\right] = \mathbb{E}_{\rho}\left[\left(g(x,\zeta) - b\right)^{\top}\cdot\frac{\partial q}{\partial\phi} - \left(\frac{1-z}{1-q}\odot\frac{\partial q}{\partial\phi}\right)^{\top}\cdot W\cdot z\right]. \tag{12}$$
In particular, Equation 12 is substantially lower variance than the naive approach to calculating ∂/∂φ KL[q‖p] described in Appendix F.3.
# 4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF CONTINUOUS LATENT VARIABLES
We can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus on continuous variables, which have proven to be powerful in generative adversarial networks (Goodfellow et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014). When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. This complements the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects.

Specifically, we augment the latent representation with continuous random variables z,7 and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphical models. We use the same autoregressive variable order for the approximating posterior as for the
7We always use a variant of z for latent variables. This is Fraktur z, or German z.
(a) Approx post w/ cont latent vars q(z, ζ, z|x) (b) Prior w/ cont latent vars p(x, z, ζ, z)
Figure 4: Graphical models of the approximating posterior (a) and prior (b) with a hierarchy of continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and 1b respectively. The continuous latent variables z build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables z, which can represent the discrete types of objects in the image.
prior, as in DRAW (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015), the deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Sønderby et al., 2016). We discuss the motivation for this ordering in Appendix G.
The directed graphical models of the approximating posterior and the prior are defined by:
$$q(\mathfrak{z}_0, \ldots, \mathfrak{z}_n \mid x, \phi) = \prod_{0 \le m \le n} q\left(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, x, \phi\right) \quad \text{and} \quad p(\mathfrak{z}_0, \ldots, \mathfrak{z}_n \mid \theta) = \prod_{0 \le m \le n} p\left(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, \theta\right). \tag{13}$$
The full set of latent variables associated with the RBM is now denoted by z0 = {z1, ζ1, . . . , zk, ζk}. However, the conditional distributions in Equation 13 only depend on the continuous ζj. Each zm≥1 denotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model.
The ELBO decomposes as:
$$\mathcal{L}(x, \theta, \phi) = \mathbb{E}_{q(\mathfrak{z}|x,\phi)}\left[\log p(x \mid \mathfrak{z}, \theta)\right] - \sum_{m} \mathbb{E}_{q(\mathfrak{z}_{l<m}|x,\phi)}\left[\mathrm{KL}\left[q(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, x, \phi) \,\|\, p(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, \theta)\right]\right]. \tag{14}$$
If both q(zm|zl<m, x, φ) and p(zm|zl<m, θ) are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. Gradients can be passed through the q(zl<m|x, φ) using the traditional reparameterization trick, described in Section 1.1.
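For reference, a sketch of the closed-form diagonal-Gaussian KL divergence used in Equation 14; the parameterization by log-variances is our own convention:

```python
import numpy as np

def diag_gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL[N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p)))],
    # summed over the last (latent) axis.
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0,
        axis=-1,
    )
```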
# 5 RESULTS
Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approximating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). We parameterize all distributions with neural networks, except the smoothing distribution r(ζ|z) discussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014; Rezende et al., 2014), we define all approximating posteriors q to be explicit functions of x, with parameters φ shared between all inputs x. For distributions over discrete variables, the neural networks output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation 10; for the continuous z, the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer. Each layer of the neural networks parameterizing the distributions over z, z, and x consists of a linear transformation,
batch normalization (Ioffe & Szegedy, 2015) (but see Appendix H.2), and a rectified-linear pointwise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM prior p(z|θ) in Equation 11 using block Gibbs sampling on persistent Markov chains, analogous to persistent contrastive divergence (Tieleman, 2008). We maximize the ELBO using ADAM (Kingma & Ba, 2015) with a decaying step size.
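A sketch of the persistent block Gibbs sampling described above, assuming the bipartite RBM energy −z_a⊤·W·z_b − b_a⊤·z_a − b_b⊤·z_b; the function is illustrative, not the paper's implementation:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def persistent_block_gibbs(z_a, W, b_a, b_b, rng, n_steps=100):
    # Alternately resample each side of the bipartite RBM given the other,
    # starting from the persistent chain state z_a (cf. persistent CD).
    for _ in range(n_steps):
        p_b = sigmoid(z_a @ W + b_b)
        z_b = (rng.random(p_b.shape) < p_b).astype(z_a.dtype)
        p_a = sigmoid(z_b @ W.T + b_a)
        z_a = (rng.random(p_a.shape) < p_a).astype(z_a.dtype)
    return z_a, z_b
```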
The hierarchical structure of Section 4 is very powerful, and overfits without strong regularization of the prior, as shown in Appendix H. In contrast, powerful approximating posteriors do not induce significant overfitting. To address this problem, we use conditional distributions over the input p(x|ζ, θ) without any deterministic hidden layers, except on Omniglot. Moreover, all other neural networks in the prior have only one hidden layer, the size of which is carefully controlled. On statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers of the hierarchy over z. We present the details of the architecture in Appendix H.
We train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Omniglot8 (Lake et al., 2013), and Caltech-101 Silhouettes datasets (Marlin et al., 2010). For MNIST, we use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization. Estimates of the log-likelihood9 of these models, computed using the method of Burda et al. (2016) with 10⁴ importance-weighted samples, are listed in Table 1. The reported log-likelihoods for discrete VAEs are the average of 16 runs; the standard deviations of these log-likelihoods are 0.08, 0.04, 0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and 0.66.
MNIST (dynamic binarization)        LL
DBN                              −84.55
IWAE                             −82.90
Ladder VAE                       −81.74
Discrete VAE                     −80.15

MNIST (static binarization)     ELBO        LL
HVI                            −88.30    −85.51
DRAW                           −87.40
NF                             −85.10
Discrete VAE                              −83.67

Omniglot                            LL
IWAE                            −103.38
Ladder VAE                      −102.11
RBM                             −100.46
DBN                             −100.45
Discrete VAE                     −97.43

Caltech-101 Silhouettes             LL
IWAE                             −117.2
RWS SBN                          −113.3
RBM                              −107.8
NAIS NADE                        −100.0
Discrete VAE                      −97.6
Table 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated with 104 importance-weighted samples (Burda et al., 2016). For comparison, we also report perfor- mance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I.
We further analyze the performance of discrete VAEs on dynamically binarized MNIST: the largest of the datasets, requiring the least regularization. Figure 5 shows the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM prior has well-separated modes. The RBM learns distinct, separated modes corresponding to the different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens of
8We use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT.
9The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1.
Figure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single (or occasionally two) digit IDs, despite being trained in a wholly unsupervised manner.
(a) Block Gibbs iterations (b) Num RBM units (c) RBM approx post layers
Figure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a), the number of units in the RBM (b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better per- formance, but the network is robust to the size of the RBM (b).
thousands of iterations of single-temperature block Gibbs sampling are required to mix between the modes. We present corresponding figures for the other datasets, and results on simplified architectures, in Appendix J.
The large mixing time of block Gibbs sampling on the RBM suggests that training may be constrained by sample quality. Figure 6a shows that performance10 improves as we increase the number of iterations of block Gibbs sampling performed per minibatch on the RBM prior p(z|θ) in Equation 11. This suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as parallel tempering (Swendsen & Wang, 1986).
10All models in Figure 6 use only 10 layers of continuous latent variables, for computational efï¬ciency.
Commensurate with the small number of intrinsic classes, a moderately sized RBM yields the best performance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of units in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a dataset like Imagenet, which has many classes and complicated relationships between the elements of various classes.
The benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is apparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the approximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. As a result, expanding the number of RBM approximating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting.
# 6 CONCLUSION
Datasets consisting of a discrete set of classes are naturally modeled using discrete latent variables. However, it is difï¬cult to train probabilistic models over discrete latent variables using efï¬cient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable (Bengio et al., 2013).
We avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusively in the continuous space, marginalizing out the original discrete latent representation. At the same time, we evaluate the KL divergence between the approximating posterior and the true prior in the original discrete space; due to the symmetry of the projection into the continuous space, it does not contribute to the KL term. To increase representational power, we make the approximating posterior over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variables below them. The resulting discrete variational autoencoder achieves state-of-the-art performance on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
# ACKNOWLEDGEMENTS
Zhengbing Bian, Fabian Chudak, and Arash Vahdat helped run experiments. Jack Raymond provided the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and one of our anonymous reviewers for identifying the problem addressed in Appendix D.3.
# REFERENCES
Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pp. 3084â3092, 2013.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Charles H. Bennett. Efï¬cient estimation of free energy differences from Monte Carlo data. Journal of Computational Physics, 22(2):245â268, 1976.
Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. In Proceedings of the International Conference on Learning Representations, arXiv:1406.2751, 2015.
Jörg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional Helmholtz machines. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2511–2519, 2016.
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 10–21, 2016.
Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In Proceedings of the 18th International Conference on Artiï¬cial Intelligence and Statistics, 2015.
Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Proceedings of the International Conference on Learning Representations, arXiv:1509.00519, 2016.
Steve Cheng. Differentiation under the integral sign with weak derivatives. Technical report, Work- ing paper, 2006.
KyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted Boltz- mann machines. Neural Computation, 25(3):805â831, 2013.
Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980–2988, 2015.
Aaron C. Courville, James S. Bergstra, and Yoshua Bengio. Unsupervised models of images by spike-and-slab RBMs. In Proceedings of the 28th International Conference on Machine Learning, pp. 1145–1152, 2011.
Paul Dagum and Michael Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artiï¬cial Intelligence, 60(1):141â153, 1993.
Chao Du, Jun Zhu, and Bo Zhang. Learning deep generative models with doubly stochastic MCMC. arXiv preprint arXiv:1506.04557, 2015.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Alex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint arXiv:1607.05690, 2016.
Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, pp. 1242–1250, 2014.
Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1462â1471, 2015.
Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527â1554, 2006.
Geoffrey E. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In J. D. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information Processing Systems 6, pp. 3â10. Morgan Kaufmann Publishers, Inc., 1994.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pp. 448â456, 2015.
Matthew Johnson, David K Duvenaud, Alexander B Wiltschko, Sandeep R Datta, and Ryan P Adams. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pp. 2946â2954, 2016.
Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183â233, 1999.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, arXiv:1412.6980, 2015.
Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581â3589, 2014.
Durk P. Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the Interna- tional Conference on Learning Representations, arXiv:1312.6114, 2014.
Brenden M. Lake, Ruslan R. Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems, pp. 2526â 2534, 2013.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In Proceedings of the 14th International Conference on Artiï¬cial Intelligence and Statistics, 2011.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Yingzhen Li and Richard E. Turner. Variational inference with Rényi divergence. arXiv preprint arXiv:1602.02311, 2016.
Philip M. Long and Rocco Servedio. Restricted Boltzmann machines are hard to approximately evaluate or simulate. In Proceedings of the 27th International Conference on Machine Learning, pp. 703â710, 2010.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
Benjamin M. Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted Boltzmann machine learning. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 509–516, 2010.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. Proceedings of the 31st International Conference on Machine Learning, pp. 1791–1799, 2014.
Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. In Proceed- ings of the 33rd International Conference on Machine Learning, pp. 2188â2196, 2016.
Iain Murray and Ruslan R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems, pp. 1137â1144, 2009.
Radford M. Neal. Connectionist learning of belief networks. Artiï¬cial Intelligence, 56(1):71â113, 1992.
Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive ï¬eld properties by learning a sparse code for natural images. Nature, 381(6583):607â609, 1996.
John Paisley, David M. Blei, and Michael I. Jordan. Variational Baysian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning, 2012.
Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
Tapani Raiko, Harri Valpola, Markus Harva, and Juha Karhunen. Building blocks for variational Bayesian learning of latent variable models. Journal of Machine Learning Research, 8:155â201, 2007.
Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. In Proceedings of the International Conference on Learning Representations, arXiv:1406.2989, 2015.
Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546–3554, 2015.
Danilo Rezende and Shakir Mohamed. Variational inference with normalizing ï¬ows. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1530â1538, 2015.
Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278–1286, 2014.
Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of the 12th International Conference on Artiï¬cial Intelligence and Statistics, pp. 448â455, 2009.
Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pp. 872â879. ACM, 2008.
Tim Salimans. A structured variational auto-encoder for learning deep hierarchies of sparse features. arXiv preprint arXiv:1602.08734, 2016.
Tim Salimans, Diederik P. Kingma, Max Welling, et al. Markov chain Monte Carlo and variational inference: Bridging the gap. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1218â1226, 2015.
Michael R. Shirts and John D. Chodera. Statistically optimal analysis of samples from multiple equilibrium states. The Journal of Chemical Physics, 129(12), 2008.
Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing, volume 1, chapter 6, pp. 194â281. MIT Press, Cambridge, 1986.
Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in Neural Information Processing Systems, pp. 3738â3746, 2016.
David J. Spiegelhalter and Steffen L. Lauritzen. Sequential updating of conditional probabilities on directed graphical structures. Networks, 20(5):579â605, 1990.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Research, 15(1):1929â1958, 2014.
Robert H. Swendsen and Jian-Sheng Wang. Replica Monte Carlo simulation of spin-glasses. Phys- ical Review Letters, 57(21):2607, 1986.
Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064â 1071. ACM, 2008.
Dustin Tran, Rajesh Ranganath, and David M. Blei. The variational Gaussian process. Proceedings of the International Conference on Learning Representations, arXiv:1511.06499, 2016.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992.
A MULTIVARIATE VAES BASED ON THE CUMULATIVE DISTRIBUTION FUNCTION
The reparameterization trick is always possible if the cumulative distribution function (CDF) of q(z|x, φ) is invertible, and the inverse CDF is differentiable, as noted in Kingma & Welling (2014). However, for multivariate distributions, the CDF is defined by:
$$F(x) = \int_{x'_1=-\infty}^{x_1} \cdots \int_{x'_n=-\infty}^{x_n} p\left(x'_1, \ldots, x'_n\right).$$
The multivariate CDF maps ℝⁿ → [0, 1], and is generally not invertible.11 In place of the multivariate CDF, consider the set of conditional-marginal CDFs defined by:12
$$F_j(x) = \int_{x'_j=-\infty}^{x_j} p\left(x'_j \mid x_1, \ldots, x_{j-1}\right). \tag{15}$$
That is, Fj(x) is the CDF of xj, conditioned on all xi such that i < j, and marginalized over all xk such that j < k. The range of each Fj is [0, 1], so F maps the domain of the original distribution to ρ ∈ [0, 1]ⁿ. To invert F, we need only invert each conditional-marginal CDF in turn, conditioning on xi<j = F⁻¹i<j(ρ). These inverses exist so long as the conditional-marginal probabilities are everywhere nonzero. It is not problematic to effectively define F⁻¹j(ρ) based upon xi<j, rather than ρi<j, since by induction we can uniquely determine xi<j given ρi<j.
Using integration-by-substitution, we can compute the gradient of the ELBO by taking the expectation of a uniform random variable ρ on [0, 1]ⁿ, and using F⁻¹_{q(z|x,φ)} to transform ρ back to the element of z on which p(x|z, θ) is conditioned. To perform integration-by-substitution, we will require the determinant of the Jacobian of F⁻¹.
The derivative of a CDF is the probability density function at the selected point, and Fj is a simple CDF when we hold ï¬xed the variables xi<j on which it is conditioned, so using the inverse function theorem we ï¬nd:
$$\left(\frac{\partial F^{-1}(\rho)}{\partial \rho}\right)_{jj} = \frac{1}{p\left(x_j = F_j^{-1}(\rho) \mid x_{i<j}\right)},$$
where the Jacobian matrix ∂F/∂x is triangular, since the earlier conditional-marginal CDFs Fj are independent of the values of the later xk, j < k, over which they are marginalized. Moreover, the inverse conditional-marginal CDFs have the same dependence structure as F, so the Jacobian of F⁻¹ is also triangular. The determinant of a triangular matrix is the product of its diagonal elements.
11For instance, for the bivariate uniform distribution on the interval [0, 1]², the CDF is F(x, y) = x · y for 0 ≤ x, y ≤ 1, so for any 0 ≤ c ≤ 1 and c ≤ x ≤ 1, y = c/x yields F(x, y) = c. Clearly, many different pairs (x, y) yield each possible value c of F(x, y).
12The set of marginal CDFs, used to define copulas, is invertible. However, it does not generally map the original distribution to a simple joint distribution, such as a multivariate uniform distribution, as required for variational autoencoders. In Equation 16, det(∂F⁻¹_{q(z|x,φ)}(ρ)/∂ρ) does not cancel out q(F⁻¹_{q(z|x,φ)}(ρ)|x, φ). The determinant of the inverse Jacobian is instead ∏j q(zj = F⁻¹j(ρ)|zi<j)⁻¹, which differs from q⁻¹ if q is not factorial. As a result, we do not recover the variational autoencoder formulation of Equation 16.
Using these facts to perform a multivariate integration-by-substitution, we obtain:
$$\mathbb{E}_{q(z|x,\phi)}\left[\log p(x \mid z, \theta)\right] = \int_z q(z \mid x, \phi) \cdot \log p(x \mid z, \theta)$$
$$= \int_{\rho=0}^{1} q\left(F^{-1}_{q(z|x,\phi)}(\rho) \mid x, \phi\right) \cdot \log p\left(x \mid F^{-1}_{q(z|x,\phi)}(\rho), \theta\right) \cdot \prod_j q\left(z_j = F_j^{-1}(\rho) \mid z_{i<j}\right)^{-1}$$
$$= \int_{\rho=0}^{1} \log p\left(x \mid F^{-1}_{q(z|x,\phi)}(\rho), \theta\right). \tag{16}$$
The variable Ï has dimensionality equal to that of z; 0 is the vector of all 0s; 1 is the vector of all 1s.
The gradient with respect to Ï is then easy to approximate stochastically:
$$\frac{\partial}{\partial\phi}\,\mathbb{E}_{q(z|x,\phi)}\left[\log p(x \mid z, \theta)\right] \approx \frac{1}{N}\sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial\phi}\log p\left(x \mid F^{-1}_{q(z|x,\phi)}(\rho), \theta\right). \tag{17}$$
Note that if q(z|x, φ) is factorial (i.e., the product of independent distributions in each dimension zj), then the conditional-marginal CDFs Fj are just the marginal CDFs in each direction. However, even if q(z|x, φ) is not factorial, Equation 17 still holds so long as F is nevertheless defined to be the set of conditional-marginal CDFs of Equation 15.
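A minimal sketch of the estimator of Equation 17; the callables q_fn, inverse_cdf, and log_p_fn stand in for the encoder, the inverse conditional-marginal CDF, and the decoder, and are assumptions of this illustration:

```python
import numpy as np

def autoencoding_term_estimate(x, q_fn, inverse_cdf, log_p_fn, rng, n_samples=10):
    # Monte Carlo estimate of E_{q(z|x,phi)}[log p(x|z,theta)]:
    # draw rho ~ U(0,1)^n and evaluate the decoder at F^{-1}(rho).
    q = q_fn(x)
    total = 0.0
    for _ in range(n_samples):
        rho = rng.random(q.shape)
        z = inverse_cdf(q, rho)   # differentiable in phi through q under autodiff
        total += log_p_fn(x, z)
    return total / n_samples
```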
# B THE DIFFICULTY OF ESTIMATING GRADIENTS OF THE ELBO WITH REINFORCE
It is easy to construct a stochastic approximation to the gradient of the ELBO that only requires computationally tractable samples, and admits both discrete and continuous latent variables. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Bengio et al., 2013; Mnih & Rezende, 2016):
$$\frac{\partial}{\partial\phi}\,\mathbb{E}_{q(z|x,\phi)}\left[\log p(x \mid z, \theta)\right] = \mathbb{E}_{q(z|x,\phi)}\left[\left(\log p(x \mid z, \theta) - B(x)\right)\cdot\frac{\partial}{\partial\phi}\log q(z \mid x, \phi)\right]$$
$$\approx \frac{1}{N}\sum_{z \sim q(z|x,\phi)}\left(\log p(x \mid z, \theta) - B(x)\right)\cdot\frac{\partial}{\partial\phi}\log q(z \mid x, \phi), \tag{18}$$
where B(x) is a (possibly input-dependent) baseline, which does not affect the gradient, but can reduce the variance of a stochastic estimate of the expectation. In REINFORCE, ∂/∂φ E_{q(z|x,φ)}[log p(x|z, θ)] is effectively estimated by something akin to a finite difference approximation to the derivative. The autoencoding term is a function of the conditional log-likelihood log p(x|z, θ), composed with the approximating posterior q(z|x, φ), which determines the value of z at which p(x|z, θ) is evaluated. However, the conditional log-likelihood is never differentiated directly in REINFORCE, even in the context of the chain rule. Rather, the conditional log-likelihood is evaluated at many different points z ∼ q(z|x, φ), and a weighted sum of these values is used to approximate the gradient, just like in the finite difference approximation.
Equation 18 of REINFORCE captures much less information about p(x|z, θ) per sample than Equation 3 of the variational autoencoder, which actively makes use of the gradient. In particular, the change of p(x|z, θ) in some direction d can only affect the REINFORCE gradient estimate if a sample is taken with a component in direction d. In a D-dimensional latent space, at least D samples are
required to capture the variation of p(x|z, θ) in all directions; fewer samples span a smaller subspace. Since the latent representation commonly consists of dozens of variables, the REINFORCE gradient estimate can be much less efficient than one that makes direct use of the gradient of p(x|z, θ). Moreover, we will show in Section 5 that, when the gradient is calculated efficiently, hundreds of latent variables can be used effectively.
C AUGMENTING DISCRETE LATENT VARIABLES WITH CONTINUOUS LATENT VARIABLES
Intuitively, variational autoencoders break the encoder13 distribution into "packets" of probability of infinitesimal but equal mass, within which the value of the latent variables is approximately constant. These packets correspond to a region ri < ρi < ri + δ for all i in Equation 16, and the expectation is taken over these packets. There are more packets in regions of high probability, so high-probability values are more likely to be selected. More rigorously, F_{q(z|x,φ)}(ζ) maps intervals of high probability to larger spans of 0 ≤ ρ ≤ 1, so a randomly selected ρ ∼ U[0, 1] is more likely to be mapped to a high-probability point by F⁻¹_{q(z|x,φ)}(ρ).

As the parameters of the encoder are changed, the location of a packet can move, while its mass is held constant. That is, ζ = F⁻¹_{q(z|x,φ)}(ρ) is a function of φ, whereas the probability mass associated with a region of ρ-space is constant by definition. So long as F⁻¹_{q(z|x,φ)} exists and is differentiable, a small change in φ will correspond to a small change in the location of each packet. This allows us to use the gradient of the decoder to estimate the change in the loss function, since the gradient of the decoder captures the effect of small changes in the location of a selected packet in the latent space.
In contrast, REINFORCE (Equation 18) breaks the latent representation into segments of infinitesimal but equal volume; e.g., zi ≤ z'i ≤ zi + δ for all i (Williams, 1992; Mnih & Gregor, 2014; Bengio et al., 2013). The latent variables are also approximately constant within these segments, but the probability mass varies between them. Specifically, the probability mass of the segment z ≤ z' ≤ z + δ is proportional to q(z|x, φ).
Once a segment is selected in the latent space, its location is independent of the encoder and decoder. In particular, the gradient of the loss function does not depend on the gradient of the decoder with respect to position in the latent space, since this position is ï¬xed. Only the probability mass assigned to the segment is relevant.
Although variational autoencoders can make use of the additional gradient information from the decoder, the gradient estimate is only low-variance so long as the motion of most probability packets has a similar effect on the loss. This is likely to be the case if the packets are tightly clustered (e.g., the encoder produces a Gaussian with low variance, or the spike-and-exponential distribution of Section 2.1), or if the movements of far-separated packets have a similar effect on the total loss (e.g., the decoder is roughly linear).
Nevertheless, Equation 17 of the VAE can be understood in analogy to dropout (Srivastava et al., 2014) or standout (Ba & Frey, 2013) regularization. Like dropout and standout, F⁻¹_{q(z|x,φ)}(ρ) is an element-wise stochastic nonlinearity applied to a hidden layer. Since F⁻¹_{q(z|x,φ)}(ρ) selects a point in the probability distribution, it rarely selects an improbable point. Like standout, the distribution of the hidden layer is learned. Indeed, we recover the encoder of standout if we use the spike-and-Gaussian distribution of Section E.1 and let the standard deviation σ go to zero.
However, variational autoencoders cannot be used directly with discrete latent representations, since changing the parameters of a discrete encoder can only move probability mass between the allowed discrete values, which are far apart. If we follow a probability packet as we change the encoder parameters, it either remains in place, or jumps a large distance. As a result, the vast majority of probability packets are unaffected by small changes to the parameters of the encoder. Even if we are lucky enough to select a packet that jumps between the discrete values of the latent representation,
13Since the approximating posterior q(z|x, Ï) maps each input to a distribution over the latent space, it is sometimes called the encoder. Correspondingly, since the conditional likelihood p(x|z, θ) maps each conï¬gu- ration of the latent variables to a distribution over the input space, it is called the decoder.
the gradient of the decoder cannot be used to accurately estimate the change in the loss function, since the gradient only captures the effect of very small movements of the probability packet.
To use discrete latent representations in the variational autoencoder framework, we must first transform to a continuous latent space, within which probability packets move smoothly. That is, we must compute Equation 17 over a different distribution than the original posterior distribution. Surprisingly, we need not sacrifice the original discrete latent space, with its associated approximating posterior. Rather, we extend the encoder q(z|x, φ) and the prior p(z|θ) with a transformation to a continuous, auxiliary latent representation ζ, and correspondingly make the decoder a function of this new continuous representation. By extending both the encoder and the prior in the same way, we avoid affecting the remaining KL divergence in Equation 2.14
The gradient is deï¬ned everywhere if we require that each point in the original latent space map to nonzero probability over the entire auxiliary continuous space. This ensures that, if the probability of some point in the original latent space increases from zero to a nonzero value, no probability packet needs to jump a large distance to cover the resulting new region in the auxiliary continuous space. Moreover, it ensures that the conditional-marginal CDFs are strictly increasing as a function of their main argument, and thus are invertible.
If we ignore the cases where some discrete latent variable has probability 0 or 1, we need only require that, for every pair of points in the original latent space, the associated regions of nonzero probability in the auxiliary continuous space overlap. This ensures that probability packets can move continuously as the parameters φ of the encoder, q(z|x, φ), change, redistributing weight amongst the associated regions of the auxiliary continuous space.
# D ALTERNATIVE TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS
The spike-and-exponential transformation from discrete latent variables z to continuous latent vari- ables ζ presented in Section 2.1 is by no means the only one possible. Here, we develop a collection of alternative transformations.
# D.1 MIXTURE OF RAMPS
As another concrete example, we consider a case where both r(ζi|zi = 0) and r(ζi|zi = 1) are linear functions of ζi:
$$r(\zeta_i \mid z_i = 0) = \begin{cases} 2\cdot(1-\zeta_i), & \text{if } 0 \le \zeta_i \le 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=0)}(\zeta') = 2\zeta' - \zeta'^2$$
$$r(\zeta_i \mid z_i = 1) = \begin{cases} 2\cdot\zeta_i, & \text{if } 0 \le \zeta_i \le 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=1)}(\zeta') = \zeta'^2,$$
where F_p(ζ') = ∫_{−∞}^{ζ'} p(ζ) dζ is the CDF of probability distribution p in the domain [0, 1]. The CDF for q(ζ|x, φ) as a function of q(z = 1|x, φ) is:
$$F_{q(\zeta|x,\phi)}(\zeta') = \left(1 - q(z=1|x,\phi)\right)\cdot\left(2\zeta' - \zeta'^2\right) + q(z=1|x,\phi)\cdot\zeta'^2 = 2\cdot q(z=1|x,\phi)\cdot\left(\zeta'^2 - \zeta'\right) + 2\zeta' - \zeta'^2. \tag{19}$$
14Rather than extend the encoder and the prior, we cannot simply prepend the transformation to continuous space to the decoder, since this does not change the space of the probability packets.
We can calculate F⁻¹_{q(ζ|x,φ)} explicitly, using the substitutions F_{q(ζ|x,φ)} → ρ, q(z = 1|x, φ) → q, and ζ' → ζ in Equation 19 to simplify notation:

$$\rho = 2q\cdot(\zeta^2 - \zeta) + 2\zeta - \zeta^2$$
$$0 = (2q - 1)\cdot\zeta^2 + 2(1-q)\cdot\zeta - \rho$$
$$\zeta = \frac{2(q-1) \pm \sqrt{4\left(1 - 2q + q^2\right) + 4(2q-1)\rho}}{2(2q-1)} = \frac{(q-1) + \sqrt{(q-1)^2 + (2q-1)\cdot\rho}}{2q-1}.$$

F⁻¹_{q(ζ|x,φ)} has the desired range [0, 1] if we choose the positive square root:
$$F^{-1}_{q(\zeta|x,\phi)}(\rho) = \frac{(q-1) + \sqrt{(q-1)^2 + (2q-1)\cdot\rho}}{2q-1} \tag{20}$$

if q ≠ ½, and F⁻¹(ρ) = ρ if q = ½. We plot F⁻¹_{q(ζ|x,φ)}(ρ) as a function of q for various values of ρ in Figure 7.
Figure 7: Inverse CDF of the mixture of ramps transformation for ρ ∈ {0.2, 0.5, 0.8}.
In Equation 20, F⁻¹_{q(ζ|x,φ)}(ρ) is quasi-sigmoidal as a function of q(z = 1|x, φ). If ρ < 0.5, F⁻¹ is concave-up; if ρ > 0.5, F⁻¹ is concave-down; if ρ ≈ 0.5, F⁻¹ is sigmoidal. In no case is F⁻¹ extremely flat, so it does not kill gradients. In contrast, the sigmoid probability of z inevitably flattens.
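A sketch of Equation 20 in NumPy, handling the removable singularity at q = ½ explicitly; the function name and the numerical guard are our own:

```python
import numpy as np

def ramps_inverse_cdf(q, rho):
    # F^{-1}_{q(zeta|x,phi)}(rho) for the mixture-of-ramps transformation.
    q = np.asarray(q, dtype=float)
    rho = np.asarray(rho, dtype=float)
    denom = 2.0 * q - 1.0
    near_half = np.abs(denom) < 1e-8
    safe_denom = np.where(near_half, 1.0, denom)   # avoid division by ~0
    branch = ((q - 1.0) + np.sqrt((q - 1.0) ** 2 + denom * rho)) / safe_denom
    return np.where(near_half, rho, branch)        # F^{-1}(rho) = rho at q = 1/2
```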
# D.2 SPIKE-AND-SLAB
We can also use the spike-and-slab transformation, which is consistent with sparse coding and has proven effective in other successful generative models (Courville et al., 2011):
$$r(\zeta_i \mid z_i = 0) = \delta(\zeta_i) \qquad\qquad F_{r(\zeta_i|z_i=0)}(\zeta') = 1$$
$$r(\zeta_i \mid z_i = 1) = \begin{cases} 1, & \text{if } 0 \le \zeta_i \le 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=1)}(\zeta') = \zeta',$$
where F_p(ζ') = ∫_{−∞}^{ζ'} p(ζ) dζ is the cumulative distribution function (CDF) of probability distribution p in the domain [0, 1]. The CDF for q(ζ|x, φ) as a function of q(z = 1|x, φ) is:
$$F_{q(\zeta|x,\phi)}(\zeta') = \left(1 - q(z=1|x,\phi)\right)\cdot F_{r(\zeta|z=0)}(\zeta') + q(z=1|x,\phi)\cdot F_{r(\zeta|z=1)}(\zeta') = q(z=1|x,\phi)\cdot(\zeta' - 1) + 1.$$
We can calculate F⁻¹_{q(ζ|x,φ)} explicitly, using the substitution q(z = 1|x, φ) → q to simplify notation:

$$F^{-1}_{q(\zeta|x,\phi)}(\rho) = \begin{cases} \frac{\rho - 1}{q} + 1, & \text{if } \rho \ge 1 - q \\ 0, & \text{otherwise.} \end{cases}$$
We plot F⁻¹_{q(ζ|x,φ)}(ρ) as a function of q for various values of ρ in Figure 8.
Figure 8: Inverse CDF of the spike-and-slab transformation for ρ ∈ {0.2, 0.5, 0.8}.
# D.3 ENGINEERING EFFECTIVE SMOOTHING TRANSFORMATIONS
If the smoothing transformation is not chosen appropriately, the contribution of low-probability regions to the expected gradient of the inverse CDF may be large. Using a variant of the inverse function theorem, we ï¬nd:
$$F\left(F^{-1}(\rho)\right) = \rho \;\Rightarrow\; \left.\frac{\partial F}{\partial\phi}\right|_{F^{-1}(\rho)} + \left.\frac{\partial F}{\partial z}\right|_{F^{-1}(\rho)}\cdot\frac{\partial F^{-1}(\rho)}{\partial\phi} = 0 \;\Rightarrow\; \frac{\partial F^{-1}(\rho)}{\partial\phi} = -\left.\frac{1}{r(z)}\cdot\frac{\partial F}{\partial\phi}\right|_{z=F^{-1}(\rho)}, \tag{21}$$
where z = F⁻¹(ρ). Consider the case where r(ζi|zi = 0) and r(ζi|zi = 1) are unimodal, but have little overlap. For instance, both distributions might be Gaussian, with means that are many standard deviations apart. For values of ζi between the two modes, F(ζi) ≈ q(zi = 0|x, φ), assuming without loss of generality that the mode corresponding to zi = 0 occurs at a smaller value of ζi than that corresponding to zi = 1. As a result, ∂F/∂φ need not be small between the two modes, and by Equation 21, ∂F⁻¹(ρ)/∂φ can be very large, since r(ζi) ≈ 0 there. In this case, the stochastic estimates of the gradient in Equation 8, which depend upon ∂F⁻¹/∂φ, have large variance.

These high-variance gradient estimates arise because r(ζi|zi = 0) and r(ζi|zi = 1) are too well separated, and the resulting smoothing transformation is too sharp. Such disjoint smoothing transformations are analogous to a sigmoid transfer function σ(c · x), where σ is the logistic function and c → ∞. The smoothing provided by the continuous random variables ζ is only effective if there is a region of meaningful overlap between r(ζ|z = 0) and r(ζ|z = 1). In particular, we require r(ζi|zi = 0) + r(ζi|zi = 1) > 0 for all ζi between the modes of r(ζi|zi = 0) and r(ζi|zi = 1), so that r(z) remains moderate in Equation 21. In the spike-and-exponential distribution described in Section 2.1, this overlap can be ensured by fixing or bounding β.
# E TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS THAT DEPEND UPON THE INPUT
It is not necessary to define the transformation from discrete to continuous latent variables in the approximating posterior, r(ζ|z), to be independent of the input x. In the true posterior distribution, p(ζ|z, x) ≈ p(ζ|z) only if z already captures most of the information about x and p(ζ|z, x) changes little as a function of x, since
$$p(\zeta \mid z) = \sum_x p(\zeta, x \mid z) = \sum_x p(\zeta \mid z, x)\cdot p(x \mid z).$$
This is implausible if the number of discrete latent variables is much smaller than the entropy of the input data distribution. To address this, we can deï¬ne:
$$q(\zeta, z \mid x, \phi) = q(\zeta \mid z, x, \phi)\cdot q(z \mid x, \phi)$$
$$p(\zeta, z \mid \theta) = p(\zeta \mid z)\cdot p(z \mid \theta).$$
This leads to an evidence lower bound that resembles that of Equation 2, but adds an extra term:
$$\mathcal{L}_{\mathrm{VAE}}(x,\theta,\phi) = \log p(x\mid\theta) - \mathrm{KL}\left[q(z,\zeta\mid x,\phi)\,\|\,p(z,\zeta\mid x,\theta)\right]$$
$$= \log p(x\mid\theta) - \mathrm{KL}\left[q(\zeta\mid z,x,\phi)\cdot q(z\mid x,\phi)\,\|\,p(\zeta\mid z,x,\theta)\cdot p(z\mid x,\theta)\right]$$
$$= \sum_z \int_\zeta q(\zeta\mid z,x,\phi)\cdot q(z\mid x,\phi)\cdot\log\left[\frac{p(x\mid\zeta,\theta)\cdot p(\zeta\mid z,\theta)\cdot p(z\mid\theta)}{q(\zeta\mid z,x,\phi)\cdot q(z\mid x,\phi)}\right]$$
$$= \mathbb{E}_{q(\zeta\mid z,x,\phi)\cdot q(z\mid x,\phi)}\left[\log p(x\mid\zeta,\theta)\right] - \mathrm{KL}\left[q(z\mid x,\phi)\,\|\,p(z\mid\theta)\right] - \sum_z q(z\mid x,\phi)\cdot\mathrm{KL}\left[q(\zeta\mid z,x,\phi)\,\|\,p(\zeta\mid z)\right]. \tag{22}$$
The extension to hierarchical approximating posteriors proceeds as in Sections 3 and 4.
If both q(ζ|z, x, φ) and p(ζ|z) are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. However, while the gradients of this KL divergence are easy to calculate when conditioned on z, the gradients with respect to φ of q(z|x, φ) in the new term seem to force us into a REINFORCE-like approach (c.f. Equation 18):

$$\frac{\partial}{\partial\phi}\sum_z q(z \mid x, \phi)\cdot\mathrm{KL}\left[q(\zeta \mid z, x, \phi) \,\|\, p(\zeta \mid z)\right] = \mathbb{E}_{q(z|x,\phi)}\left[\mathrm{KL}\left[q(\zeta \mid z, x, \phi) \,\|\, p(\zeta \mid z)\right]\cdot\frac{\partial \log q(z \mid x, \phi)}{\partial\phi}\right]. \tag{23}$$
The reward signal is now KL[q(ζ|z, x, φ) ‖ p(ζ|z)] rather than log p(x|z, θ), but the effect on the variance is the same, likely negating the advantages of the variational autoencoder in the rest of the loss function.
However, whereas REINFORCE is high-variance because it samples over the expectation, we can perform the expectation in Equation 23 analytically, without injecting any additional variance. Specifically, if q(z|x, φ) and q(ζ|z, x, φ) are factorial, with q(ζi|zi, x, φ) only dependent on zi, then KL[q(ζ|z, x, φ) ‖ p(ζ|z)] decomposes into a sum of the KL divergences over each variable, as does ∂ log q(z|x, φ)/∂φ. The expectation of all terms in the resulting product of sums is zero except those of the form E[KL[qi ‖ pi] · ∂ log qi/∂φ], due to the identity explained in Equation 27. We then use the reparameterization trick to eliminate all hierarchical layers before the current one, and marginalize over each zi. As a result, we can compute the term of Equation 23 by backpropagating KL[q(ζ|z = 1, x, φ) ‖ p(ζ|z = 1)] − KL[q(ζ|z = 0, x, φ) ‖ p(ζ|z = 0)] into q(z|x, φ). This is especially simple if q(ζi|zi, x, φ) = p(ζi|zi) when zi = 0, since then KL[q(ζ|z = 0, x, φ) ‖ p(ζ|z = 0)] = 0.
# E.1 SPIKE-AND-GAUSSIAN
zi, x, Ï) to be a separate Gaussian for both values of the binary zi. However, it We might wish q(ζi| is difï¬cult to invert the CDF of the resulting mixture of Gaussians. It is much easier to use a mixture of a delta spike and a Gaussian, for which the CDF can inverted piecewise:
$$q(\zeta_i \mid z_i = 0, x, \phi) = \delta(\zeta_i) \qquad F_{q(\zeta_i|z_i=0,x,\phi)}(\zeta_i) = H(\zeta_i) = \begin{cases} 0, & \text{if } \zeta_i < 0 \\ 1, & \text{otherwise} \end{cases}$$
$$q(\zeta_i \mid z_i = 1, x, \phi) = \mathcal{N}\left(\mu_{q,i}(x,\phi),\, \sigma^2_{q,i}(x,\phi)\right) \qquad F_{q(\zeta_i|z_i=1,x,\phi)}(\zeta_i) = \frac{1}{2}\left[1 + \mathrm{erf}\left(\frac{\zeta_i - \mu_{q,i}}{\sqrt{2}\,\sigma_{q,i}}\right)\right],$$
where µq,i(x, φ) and σq,i(x, φ) are functions of x and φ. We use the substitution q(zi = 1|x, φ) → qi to simplify notation; µq,i(x, φ) and σq,i(x, φ) are similarly abbreviated to µq,i and σq,i.
We can now find the CDF for q(ζ|x, φ) as a function of q(z = 1|x, φ) → q:
$$F_{q(\zeta|x,\phi)}(\zeta_i) = (1 - q_i)\cdot H(\zeta_i) + \frac{q_i}{2}\cdot\left[1 + \mathrm{erf}\left(\frac{\zeta_i - \mu_{q,i}}{\sqrt{2}\,\sigma_{q,i}}\right)\right].$$
Since zi = 0 makes no contribution to the CDF until ζi = 0, the value of ρ at which ζi = 0 is

$$\rho_i^{step} = \frac{q_i}{2}\cdot\left[1 + \mathrm{erf}\left(\frac{-\mu_{q,i}}{\sqrt{2}\,\sigma_{q,i}}\right)\right],$$

so:

$$F^{-1}_{q(\zeta_i|x,\phi)}(\rho_i) = \begin{cases} \mu_{q,i} + \sqrt{2}\,\sigma_{q,i}\cdot\mathrm{erf}^{-1}\left(\frac{2\rho_i}{q_i} - 1\right), & \text{if } \rho_i < \rho_i^{step} \\ 0, & \text{if } \rho_i^{step} \le \rho_i \le \rho_i^{step} + (1 - q_i) \\ \mu_{q,i} + \sqrt{2}\,\sigma_{q,i}\cdot\mathrm{erf}^{-1}\left(\frac{2(\rho_i - 1)}{q_i} + 1\right), & \text{otherwise.} \end{cases}$$
Gradients are always evaluated for ï¬xed choices of Ï, and gradients are never taken with respect to Ï. As a result, expectations with respect to Ï are invariant to permutations of Ï. Furthermore,
$$\frac{2\rho'_i}{q_i} - 1 = \frac{2(\rho_i - 1)}{q_i} + 1,$$

where ρ'i = ρi − (1 − qi). We can thus shift the delta spike to the beginning of the range of ρi, and use:

$$\zeta_i = \begin{cases} 0, & \text{if } \rho_i \le 1 - q_i \\ \mu_{q,i} + \sqrt{2}\,\sigma_{q,i}\cdot\mathrm{erf}^{-1}\left(\frac{2(\rho_i - 1)}{q_i} + 1\right), & \text{otherwise.} \end{cases}$$
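A sketch of this shifted piecewise inverse using SciPy's erfinv; the names and array conventions are illustrative:

```python
import numpy as np
from scipy.special import erfinv

def spike_and_gaussian_inverse_cdf(q, mu, sigma, rho):
    # zeta = 0 on the spike (rho <= 1 - q); otherwise invert the Gaussian branch.
    zeta = np.zeros_like(rho)
    mask = rho > 1.0 - q
    # The argument lies in (-1, 1] for rho in (1 - q, 1]; rho = 1 maps to +inf,
    # which occurs with probability zero for continuous rho.
    zeta[mask] = mu[mask] + np.sqrt(2.0) * sigma[mask] * erfinv(
        2.0 * (rho[mask] - 1.0) / q[mask] + 1.0
    )
    return zeta
```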
All parameters of the multivariate Gaussians should be trainable functions of x, and independent of q. The new term in Equation 22 is:
$$\sum_z q(z \mid x, \phi)\cdot\mathrm{KL}\left[q(\zeta \mid z, x, \phi) \,\|\, p(\zeta \mid z)\right] = \sum_i \Big( q(z_i = 1 \mid x, \phi)\cdot\mathrm{KL}\left[q(\zeta_i \mid z_i = 1, x, \phi) \,\|\, p(\zeta_i \mid z_i = 1)\right]$$
$$+ \left(1 - q(z_i = 1 \mid x, \phi)\right)\cdot\mathrm{KL}\left[q(\zeta_i \mid z_i = 0, x, \phi) \,\|\, p(\zeta_i \mid z_i = 0)\right] \Big).$$
If zi = 0, then q(ζi|zi = 0, x, φ) = p(ζi|zi = 0), and KL[q(ζi|zi = 0, x, φ) ‖ p(ζi|zi = 0)] = 0 as in Section 2. The KL divergence between two multivariate Gaussians with diagonal covariance matrices, with means µp,i, µq,i and covariances σ²p,i and σ²q,i, is

$$\mathrm{KL}\left[q \,\|\, p\right] = \sum_i \left[\log\frac{\sigma_{p,i}}{\sigma_{q,i}} + \frac{\sigma^2_{q,i} + (\mu_{q,i} - \mu_{p,i})^2}{2\sigma^2_{p,i}} - \frac{1}{2}\right].$$

To train q(zi = 1|x, φ), we thus need to backpropagate KL[q(ζi|zi = 1, x, φ) ‖ p(ζi|zi = 1)] into it. Finally,
$$\frac{\partial\,\mathrm{KL}\left[q \,\|\, p\right]}{\partial\mu_{q,i}} = \frac{\mu_{q,i} - \mu_{p,i}}{\sigma^2_{p,i}} \qquad\qquad \frac{\partial\,\mathrm{KL}\left[q \,\|\, p\right]}{\partial\sigma_{q,i}} = -\frac{1}{\sigma_{q,i}} + \frac{\sigma_{q,i}}{\sigma^2_{p,i}},$$
so

$$\frac{\partial}{\partial\mu_{q,i}}\sum_z q(z \mid x, \phi)\cdot\mathrm{KL}\left[q \,\|\, p\right] = q(z_i = 1 \mid x, \phi)\cdot\frac{\mu_{q,i} - \mu_{p,i}}{\sigma^2_{p,i}}$$
$$\frac{\partial}{\partial\sigma_{q,i}}\sum_z q(z \mid x, \phi)\cdot\mathrm{KL}\left[q \,\|\, p\right] = q(z_i = 1 \mid x, \phi)\cdot\left(-\frac{1}{\sigma_{q,i}} + \frac{\sigma_{q,i}}{\sigma^2_{p,i}}\right).$$
For p, it is not useful to make the mean values of ζ adjustable for each value of z, since this is redundant with the parameterization of the decoder. With ï¬xed means, we could still parameterize the variance, but to maintain correspondence with the standard VAE, we choose the variance to be one.
# F COMPUTING THE GRADIENT OF KL[q(ζ, z|x, φ) ‖ p(ζ, z|θ)]
The KL term of the ELBO (Equation 2) is not significantly affected by the introduction of additional continuous latent variables ζ, so long as we use the same expansion r(ζ|z) for both the approximating posterior and the prior:
$$\mathrm{KL}\left[q \,\|\, p\right] = \int_\zeta \sum_z \left(\prod_{1 \le j \le k} r(\zeta_j \mid z_j)\cdot q(z_j \mid \zeta_{i<j}, x)\right)\cdot\log\left[\frac{\prod_{1 \le j \le k} r(\zeta_j \mid z_j)\cdot q(z_j \mid \zeta_{i<j}, x)}{\prod_{1 \le j \le k} r(\zeta_j \mid z_j)\cdot p(z \mid \theta)}\right]$$
$$= \int_\zeta \sum_z \left(\prod_{1 \le j \le k} r(\zeta_j \mid z_j)\cdot q(z_j \mid \zeta_{i<j}, x)\right)\cdot\log\left[\frac{\prod_{1 \le j \le k} q(z_j \mid \zeta_{i<j}, x)}{p(z \mid \theta)}\right]. \tag{24}$$
The gradient of Equation 24 with respect to the parameters θ of the prior, p(z|θ), can be estimated stochastically using samples from the approximating posterior, q(ζ, z|x, φ), and the prior, p(z|θ):
$$\frac{\partial}{\partial\theta}\,\mathrm{KL}\left[q \,\|\, p\right] = \mathbb{E}_{q(\zeta,z|x,\phi)}\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right] - \mathbb{E}_{p(z|\theta)}\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right] = \mathbb{E}_{q(\zeta_{i<k}|x,\phi)}\left[\mathbb{E}_{q(z_k|\zeta_{i<k},x,\phi)}\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right]\right] - \mathbb{E}_{p(z|\theta)}\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right]. \tag{25}$$
The final expectation with respect to q(zk|ζi<k, x, φ) can be performed analytically; all other expectations require samples from the approximating posterior. Similarly, for the prior, we must sample from the RBM, although Rao-Blackwellization can be used to marginalize half of the units.
# F.1 GRADIENT OF THE ENTROPY WITH RESPECT TO Ï
In contrast, the gradient of the KL term with respect to the parameters of the approximating posterior is severely complicated by a nonfactorial approximating posterior. We break KL[q ‖ p] into two terms, the negative entropy Σ_{ζ,z} q log q, and the cross-entropy −Σ_{ζ,z} q log p, and compute their gradients separately.
We can regroup the negative entropy term of the KL divergence so as to use the reparameterization trick to backpropagate through ∏_{i<j} q(zj|ζi<j, x):
$$-H(q) = \int_\zeta \sum_z \left(\prod_{1 \le j \le k} r(\zeta_j \mid z_j)\cdot q(z_j \mid \zeta_{i<j}, x)\right)\cdot\log\left[\prod_{1 \le j \le k} q(z_j \mid \zeta_{i<j}, x)\right]$$
$$= \sum_j \int_\zeta \sum_z \left(\prod_{i \le j} r(\zeta_i \mid z_i)\cdot q(z_i \mid \zeta_{h<i}, x)\right)\cdot\log q(z_j \mid \zeta_{i<j}, x)$$
$$= \sum_j \mathbb{E}_{\rho_{i<j}}\left[\sum_{z_j} q(z_j \mid \rho_{i<j}, x)\cdot\log q(z_j \mid \rho_{i<j}, x)\right], \tag{26}$$
where indices i and j denote hierarchical groups of variables. The probability q(zj|ρi<j, x) is evaluated analytically, whereas all variables zi<j and ζi<j are implicitly sampled stochastically via ρi<j.
We wish to take the gradient of −H(q) in Equation 26. Using the identity:

$$\sum_z \left(c\cdot\frac{\partial q(z)}{\partial\phi}\right) = c\cdot\frac{\partial}{\partial\phi}\left(\sum_z q(z)\right) = c\cdot\frac{\partial 1}{\partial\phi} = 0 \tag{27}$$

for any constant c, we can eliminate the gradient of log q(zj|ρi<j, x) in ∂(−H(q))/∂φ, and obtain:
$$-\frac{\partial}{\partial\phi}H(q) = \sum_j \mathbb{E}_{\rho_{i<j}}\left[\sum_{z_j} \frac{\partial q(z_j \mid \rho_{i<j}, x)}{\partial\phi}\cdot\log q(z_j \mid \rho_{i<j}, x)\right].$$
Moreover, we can eliminate any log-partition function in log q(zj|ρi<j, x) by appealing again to Equation 27.15 By repeating this argument one more time, we can break −∂H(q)/∂φ into its factorial components.16 If zι ∈ {0, 1}, the gradient of the entropy reduces to:
$$-\frac{\partial}{\partial\phi}H(q) = \sum_j \mathbb{E}_{\rho_{i<j}}\left[\sum_{\iota}\left(\frac{\partial q_\iota(z_\iota = 1)}{\partial\phi}\cdot\log q_\iota(z_\iota = 1) + \frac{\partial q_\iota(z_\iota = 0)}{\partial\phi}\cdot\log q_\iota(z_\iota = 0)\right)\right] = \sum_j \mathbb{E}_{\rho_{i<j}}\left[\sum_{\iota}\frac{\partial q_\iota(z_\iota = 1)}{\partial\phi}\cdot g_\iota\right],$$
where ι and zι correspond to single variables within the hierarchical groups denoted by j. In Ten- sorFlow, it might be simpler to write:
$$\frac{\partial}{\partial\phi}H(q) = -\mathbb{E}_{\rho}\left[\sum_j g_j^{\top}\cdot\frac{\partial q_j(z_j = 1)}{\partial\phi}\right].$$
15Σ_{zj} c · ∂q(zj|ρi<j, x)/∂φ = c · ∂/∂φ Σ_{zj} q = 0, where c is the log partition function of q(zj|ρi<j, x).

16The qj = ∏ι qι, so the qι'≠ι marginalize out of ∂qj/∂φ when multiplied by log qι. When ∂qι/∂φ is multiplied by one of the log qι'≠ι, the sum over zι can be taken inside the product, and again Σ_{zι} ∂qι/∂φ = 0.
# F.2 GRADIENT OF THE CROSS-ENTROPY
The gradient of the cross-entropy with respect to the parameters φ of the approximating posterior does not depend on the partition function of the prior, Zp, since:
$$\sum_{\zeta,z}\frac{\partial q}{\partial\phi}\cdot\log Z_p = \log Z_p\cdot\frac{\partial}{\partial\phi}\left(\sum_{\zeta,z} q\right) = 0$$

by Equations 6 and 27, so we are left with the gradient of the average energy Ep.
The remaining cross-entropy term is
$$\mathbb{E}_q\left[E_p\right] = -\mathbb{E}_q\left[z^{\top}\cdot W\cdot z + b^{\top}\cdot z\right].$$
We can handle the term b⊤ · z analytically, since zi ∈ {0, 1}, and

$$\mathbb{E}_q\left[b^{\top}\cdot z\right] = b^{\top}\cdot\mathbb{E}_{\rho}\left[q(z = 1)\right].$$
The approximating posterior q is continuous, with nonzero derivative, so the reparameterization trick can be applied to backpropagate gradients:
$$\frac{\partial}{\partial\phi}\mathbb{E}_q\left[b^{\top}\cdot z\right] = b^{\top}\cdot\mathbb{E}_{\rho}\left[\frac{\partial q(z = 1)}{\partial\phi}\right].$$
In contrast, each element of the sum
$$z^{\top}\cdot W\cdot z = \sum_{ij} W_{ij}\cdot z_i\cdot z_j$$
depends upon variables that are not usually in the same hierarchical level, so in general
E_q[Wij zi zj] ≠ Wij · E_q[zi] · E_q[zj]. We might decompose this term into E_q[Wij zi zj] = Wij · E_{ρk≤i}[zi · E_{ρk>i}[zj]], where without loss of generality zi is in an earlier hierarchical layer than zj; however, it is not clear how to take the derivative of zi, since it is a discontinuous function of ρk≤i.
F.3 NAIVE APPROACH
The naive approach would be to take the gradient of the expectation using the gradient of log-probabilities over all variables:
$$\frac{\partial}{\partial\phi}\mathbb{E}\left[W_{ij} z_i z_j\right] = \mathbb{E}_q\left[W_{ij} z_i z_j\cdot\frac{\partial}{\partial\phi}\log q\right] = \mathbb{E}_q\left[W_{ij} z_i z_j\cdot\sum_k \frac{\partial}{\partial\phi}\log q_{k|l<k}\right] = \mathbb{E}_q\left[W_{ij} z_i z_j\cdot\sum_k \frac{1}{q_{k|l<k}}\cdot\frac{\partial q_{k|l<k}}{\partial\phi}\right]. \tag{28}$$
For each term in the sum over k, we can drop out factors involving only zi<k and zj<k that occur hierarchically before k, since those terms can be pulled out of the expectation over qk, and we can apply Equation 27. However, for terms involving zi>k or zj>k that occur hierarchically after k, the expected value of zi or zj depends upon the chosen value of zk.
The gradient calculation in Equation 28 is an instance of the REINFORCE algorithm (Equation 18). Moreover, the variance of the estimate is proportional to the number of terms (to the extent that the terms are independent). The number of terms contributing to each gradient grows quadratically with the number of units in the RBM. We can introduce a baseline, as in NVIL (Mnih & Gregor, 2014):
$$\mathbb{E}_q\left[\left(W_{ij} z_i z_j - c(x)\right)\cdot\frac{\partial}{\partial\phi}\log q\right],$$
but this approximation is still high-variance.
F.4 DECOMPOSITION OF ∂/∂φ E[Wij zi zj] VIA THE CHAIN RULE
When using the spike-and-exponential, spike-and-slab, or spike-and-Gaussian distributions of Sections 2.1, D.2, and E.1, we can decompose the gradient of E[Wij zi zj] using the chain rule. Previously, we have considered z to be a function of ρ and φ. We can instead formulate z as a function of q(z = 1) and ρ, where q(z = 1) is itself a function of ρ and φ. Specifically,
0 ifp;<l-qg(a=)=a(a=0 alla.) ={) Menge, MDH) 09)
âqj (zj =1) Using the chain rule, â =j ï¬xed, even âÏ though they all depend on the common variables Ï and parameters Ï. We use the chain rule to differentiate with respect to q(z = 1) since it allows us to pull part of the integral over Ï inside the derivative with respect to Ï. In the sequel, we sometimes write q in place of q(z = 1) to minimize notational clutter.
Expanding the desired gradient using the reparameterization trick and the chain rule, we find:

$$\frac{\partial}{\partial \phi}\,\mathbb{E}_q\left[W_{ij} z_i z_j\right] = \frac{\partial}{\partial \phi}\,\mathbb{E}_\rho\left[W_{ij} z_i z_j\right] = \mathbb{E}_\rho\left[\sum_k \frac{\partial \left(W_{ij} z_i z_j\right)}{\partial q_k(z_k = 1)} \cdot \frac{\partial q_k(z_k = 1)}{\partial \phi}\right] \tag{30}$$
We can change the order of integration (via the expectation) and differentiation since
$$\left|W_{ij} z_i z_j\right| \le \left|W_{ij}\right| < \infty$$

for all ρ and bounded φ (Cheng, 2006). Although z(q, ρ) is a step function, and its derivative is a delta function, the integral (corresponding to the expectation with respect to ρ) of its derivative is finite. Rather than dealing with generalized functions directly, we apply the definition of the derivative, and push through the matching integral to recover a finite quantity.
For simplicity, we pull the sum over k out of the expectation in Equation 30, and consider each summand independently. From Equation 29, we see that zk is only a function of qk, so all terms in the sum over k in Equation 30 vanish except k = i and k = j. Without loss of generality, we consider the term k = i; the term k = j is symmetric. Applying the definition of the gradient to one of the summands, and then analytically taking the expectation with respect to ρi, we obtain:

$$\begin{aligned}
\mathbb{E}_\rho\left[\frac{\partial \left(W_{ij}\, z_i(q, \rho)\, z_j(q, \rho)\right)}{\partial q_i(z_i = 1)} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi}\right]
&= \mathbb{E}_\rho\left[\lim_{\delta q_i \to 0} \frac{W_{ij}\, z_i(q + \delta q_i, \rho)\, z_j(q + \delta q_i, \rho) - W_{ij}\, z_i(q, \rho)\, z_j(q, \rho)}{\delta q_i(z_i = 1)} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi}\right] \\
&= \mathbb{E}_{\rho_{k \ne i}}\left[\lim_{\delta q_i \to 0} \frac{\delta q_i \cdot \left(W_{ij} \cdot 1 \cdot z_j(q, \rho) - W_{ij} \cdot 0 \cdot z_j(q, \rho)\right)}{\delta q_i} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi}\,\bigg|_{\rho_i = q_i(z_i = 0)}\right] \\
&= \mathbb{E}_{\rho_{k \ne i}}\left[W_{ij} \cdot z_j(q, \rho) \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi}\,\bigg|_{\rho_i = q_i(z_i = 0)}\right]
\end{aligned}$$

The third line follows from Equation 29, since zi(q + δqi, ρ) differs from zi(q, ρ) only in the region of ρ of size δqi around qi(zi = 0) = 1 − qi(zi = 1). Regardless of the choice of ρ, zj(q + δqi, ρ) = zj(q, ρ).

The third line fixes ρi to the transition between zi = 0 and zi = 1 at qi(zi = 0). Since zi = 0 implies ζi = 0,17 and ζ is a continuous function of ρ, the third line implies that ζi = 0. At the same time, since qi is only a function of ρk<i from earlier in the hierarchy, the term ∂qi/∂φ is not affected by the choice of ρi.18 As noted above, due to the chain rule, the perturbation δqi has no effect on other

17We chose the conditional distribution r(ζi|zi = 0) to be a delta spike at zero.
18In contrast, zi is a function of ρi.
qj by definition; the gradient is evaluated with those values held constant. On the other hand, ∂qi/∂φ is generally nonzero for all parameters governing hierarchical levels k < i.

Since ρi is fixed such that ζi = 0, all units further down the hierarchy must be sampled consistent with this restriction. A sample from ρ has ζi = 0 if zi = 0, which occurs with probability qi(zi = 0).19 We can compute the gradient with a stochastic approximation by multiplying each sample by 1 − zi, so that terms with ζi ≠ 0 are ignored,20 and scaling up the gradient when zi = 0 by 1/qi(zi = 0):
$$\frac{\partial}{\partial \phi}\,\mathbb{E}\left[W_{ij} z_i z_j\right] = \mathbb{E}_\rho\left[W_{ij} \cdot z_j \cdot \frac{1 - z_i}{1 - q_i(z_i = 1)} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi}\right] \tag{31}$$

The term (1 − zi)/(1 − qi(zi = 1)) is not necessary if j comes before i in the hierarchy.

While Equation 31 appears similar to REINFORCE, it is better understood as an importance-weighted estimate of an efficient gradient calculation. Just as a ReLU only has a nonzero gradient in the linear regime, ∂zi/∂φ effectively only has a nonzero gradient when zi = 0, in which case ∂zi/∂φ ∼ ∂qi(zi = 1)/∂φ. Unlike in REINFORCE, we do effectively differentiate the reward, Wijzizj. Moreover, the number of terms contributing to each gradient ∂qi(zi = 1)/∂φ grows only linearly with the number of units in an RBM, whereas it grows quadratically in the method of Section F.3.
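A toy NumPy sketch (ours, under the simplifying assumption of two independent Bernoulli units, where the expectation factorizes and the exact gradient is known) of the estimator in Equation 31:

```python
import numpy as np

rng = np.random.default_rng(0)

def eq31_grad(phi, w, n_samples):
    """Importance-weighted estimate of d/dphi_0 E[w * z_0 * z_1] in the
    style of Equation 31, for two independent Bernoulli units with
    q_k = sigmoid(phi_k), so dq_0/dphi_0 = q_0 * (1 - q_0)."""
    q = 1.0 / (1.0 + np.exp(-phi))
    z = (rng.random((n_samples, 2)) < q).astype(float)
    dq_dphi = q[0] * (1.0 - q[0])
    # only samples with z_0 = 0 contribute, scaled by 1 / q_0(z_0 = 0)
    est = w * z[:, 1] * (1.0 - z[:, 0]) / (1.0 - q[0]) * dq_dphi
    return est.mean()

phi = np.array([0.3, -0.5])
q = 1.0 / (1.0 + np.exp(-phi))
exact = q[1] * q[0] * (1.0 - q[0])   # d/dphi_0 of q_0 * q_1
print(eq31_grad(phi, w=1.0, n_samples=100_000), exact)
```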
# G MOTIVATION FOR BUILDING APPROXIMATING POSTERIOR AND PRIOR HIERARCHIES IN THE SAME ORDER

Intuition regarding the difficulty of approximating the posterior distribution over the latent variables given the data can be developed by considering sparse coding, an approach that uses a basis set of spatially localized filters (Olshausen & Field, 1996). The basis set is overcomplete, and there are generally many basis elements similar to any selected basis element. However, the sparsity prior pushes the posterior distribution to use only one amongst each set of similar basis elements.

As a result, there is a large set of sparse representations of roughly equivalent quality for any single input. Each basis element individually can be replaced with a similar basis element. However, having changed one basis element, the optimal choice for the adjacent elements also changes so the filters mesh properly, avoiding redundancy or gaps. The true posterior is thus highly correlated, since even after conditioning on the input, the probability of a given basis element depends strongly on the selection of the adjacent basis elements.

These equivalent representations can easily be disambiguated by the successive layers of the representation. In the simplest case, the previous layer could directly specify which correlated set of basis elements to use amongst the applicable sets. We can therefore achieve greater efficiency by inferring the approximating posterior over the top-most latent layer first. Only then do we compute the conditional approximating posteriors of lower layers given a sample from the approximating posterior of the higher layers, breaking the symmetry between representations of similar quality.
# H ARCHITECTURE
The stochastic approximation to the ELBO is computed via one pass down the approximating posterior (Figure 4a), sampling from each continuous latent layer ζi and zm>1 in turn; and another pass down the prior (Figure 4b), conditioned on the sample from the approximating posterior. In the pass down the prior, signals do not flow from layer to layer through the entire model. Rather, the input to each layer is determined by the approximating posterior of the previous layers, as follows from Equation 14. The gradient is computed by backpropagating the reconstruction log-likelihood, and the KL divergence between the approximating posterior and true prior at each layer, through this differentiable structure.
19It might also be the case that ζi = 0 when zi = 1, but with our choice of r(ζ|z), this has vanishingly small probability.
20This takes advantage of the fact that zi ∈ {0, 1}.
All hyperparameters were tuned via manual experimentation. Except in Figure 6, RBMs have 128 units (64 units per side, with full bipartite connections between the two sides), with 4 layers of hierarchy in the approximating posterior. We use 100 iterations of block Gibbs sampling, with 20 persistent chains per element of the minibatch, to sample from the prior in the stochastic approximation to Equation 11.

When using the hierarchy of continuous latent variables described in Section 4, discrete VAEs overfit if any component of the prior is overparameterized, as shown in Figure 9a. In contrast, a larger and more powerful approximating posterior generally did not reduce performance within the range examined, as in Figure 9b. In response, we manually tuned the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. We list the selected values in Table 2. All neural networks implementing components of the approximating posterior contain two hidden layers of 2000 units.
(a) Prior (b) Approximating posterior
Figure 9: Log likelihood on statically binarized MNIST versus the number of hidden units per neural network layer, in the prior (a) and approximating posterior (b). The number of deterministic hidden layers in the networks parameterizing the prior/approximating posterior is 1 (blue), 2 (red), or 3 (green) in (a/b), respectively. The number of deterministic hidden layers in the final network parameterizing p(x | z) is 0 (solid) or 1 (dashed). All models use only 10 layers of continuous latent variables, with no parameter sharing.

Dataset               Num layers   Vars per layer   Hids per prior layer   Param sharing
MNIST (dyn bin)       18           64               1000                   none
MNIST (static bin)    20           256              2000                   2 groups
Omniglot              16           256              800                    2 groups
Caltech-101 Sil       12           80               100                    complete

Table 2: Architectural hyperparameters used for each dataset. Successive columns list the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. Smaller datasets require more regularization, and achieve optimal performance with a smaller prior.

On statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, we further regularize using recurrent parameter sharing. In the simplest case, each p(zm|zl<m, θ) and p(x|z, θ) is a function of Σl<m zl, rather than a function of the concatenation [z0, z1, . . . , zm−1]. Moreover, all p(zm>1|zl<m, θ) share parameters. The RBM layer z0 is rendered compatible with this parameterization by using a trainable linear transformation of ζ, M · ζ, where the number of rows in M is
equal to the number of variables in each zm>0. We refer to this architecture as complete recurrent parameter sharing.
On datasets of intermediate size, a degree of recurrent parameter sharing somewhere between full independence and complete sharing is beneficial. We define the n group architecture by dividing the continuous latent layers zm≥1 into n equally sized groups of consecutive layers. Each such group is independently subject to recurrent parameter sharing analogous to the complete sharing architecture, and the RBM layer z0 is independently parameterized.

We use the spike-and-exponential transformation described in Section 2.1. The exponent is a trainable parameter, but it is bounded above by a value that increases linearly with the number of training epochs. We use warm-up with strength 20 for 5 epochs, and additional warm-up of strength 2 on the RBM alone for 20 epochs (Raiko et al., 2007; Bowman et al., 2016; Sønderby et al., 2016).

When p(x | z) is linear, all nonlinear transformations are part of the prior over the latent variables. In contrast, it is also possible to define the prior distribution over the continuous latent variables to be a simple factorial distribution, and push the nonlinearity into the final decoder p(x | z), as in traditional VAEs. The former case can be reduced to something analogous to the latter case using the reparameterization trick.

However, a VAE with a completely independent prior does not regularize the nonlinearity of the prior; whereas a hierarchical prior requires that the nonlinearity of the prior (via its effect on the true posterior) be well-represented by the approximating posterior. Viewed another way, a completely independent prior requires the model to consist of many independent sources of variance, so the data manifold must be fully unfolded into an isotropic ball. A hierarchical prior allows the data manifold to remain curled within a higher-dimensional ambient space, with the approximating posterior merely tracking its contortions. A higher-dimensional ambient space makes sense when modeling multiple classes of objects. For instance, the parameters characterizing limb positions and orientations for people have no analog for houses.
# H.1 ESTIMATING THE LOG PARTITION FUNCTION
We estimate the log-likelihood by subtracting an estimate of the log partition function of the RBM (log Zp from Equation 6) from an importance-weighted computation analogous to that of Burda et al. (2016). For this purpose, we estimate the log partition function using bridge sampling, a variant of Bennett's acceptance ratio method (Bennett, 1976; Shirts & Chodera, 2008), which produces unbiased estimates of the partition function. Interpolating distributions were of the form p(x)^β, and sampled with a parallel tempering routine (Swendsen & Wang, 1986). The set of smoothing parameters β in [0, 1] was chosen to approximately equalize replica exchange rates at 0.5. This standard criterion simultaneously keeps mixing times small, and allows for robust inference. We make a conservative estimate for burn-in (0.5 of total run time), and choose the total length of run, and number of repeated experiments, to achieve sufficient statistical accuracy in the log partition function. In Figure 10, we plot the distribution of independent estimations of the log-partition function for a single model of each dataset. These estimates differ by no more than about 0.1, indicating that the estimate of the log-likelihood should be accurate to within about 0.05 nats.
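The replica-exchange move underlying the parallel tempering routine is simple to state; the following is a minimal sketch (an illustrative assumption about the routine, not the authors' code) of one sweep of swap proposals between adjacent smoothing parameters β:

```python
import numpy as np

rng = np.random.default_rng(0)

def replica_exchange_sweep(energies, betas):
    """One sweep of swap proposals between adjacent inverse temperatures.
    A swap of the states at beta_i and beta_j is accepted with probability
    min(1, exp((beta_i - beta_j) * (E_i - E_j))), which leaves the product
    of tempered distributions p(x)^beta invariant."""
    replica = np.arange(len(betas))   # replica[i] = state currently at betas[i]
    for i in range(len(betas) - 1):
        a, b = replica[i], replica[i + 1]
        log_accept = (betas[i] - betas[i + 1]) * (energies[a] - energies[b])
        if np.log(rng.random()) < min(0.0, log_accept):
            replica[i], replica[i + 1] = b, a
    return replica
```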
# H.2 CONSTRAINED LAPLACIAN BATCH NORMALIZATION
Rather than traditional batch normalization (Ioffe & Szegedy, 2015), we base our batch normalization on the L1 norm. Specifically, we use:

$$y = x - \bar{x}, \qquad \hat{x} = \frac{y}{\mathrm{mean}(|y|) + \epsilon} \odot s + o,$$

where x is a minibatch of scalar values, x̄ denotes the mean of x, ⊙ indicates element-wise multiplication, ε is a small positive constant, s is a learned scale, and o is a learned offset. For the approximating posterior over the RBM units, we bound 2 ≤ s ≤ 3, and −s ≤ o ≤ s. This helps ensure that all units are both active and inactive in each minibatch, and thus that all units are used.
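A minimal NumPy sketch of this normalization (ours; it assumes the statistics are taken over the minibatch axis, and that the bounds on s and o are enforced by clipping):

```python
import numpy as np

def l1_batch_norm(x, s, o, eps=1e-5):
    """Laplacian (L1) batch normalization: center x over the minibatch,
    then divide by the mean absolute deviation rather than the standard
    deviation, and apply the learned scale s and offset o."""
    y = x - x.mean(axis=0)
    x_hat = y / (np.abs(y).mean(axis=0) + eps)
    return x_hat * s + o

# For the approximating posterior over RBM units, the learned parameters
# are bounded (an assumption about where the clipping is applied):
# s = np.clip(s, 2.0, 3.0); o = np.clip(o, -s, s)
```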
(a) MNIST (dyn bin) (b) MNIST (static bin) (c) Omniglot (d) Caltech-101 Silhouettes
Figure 10: Distribution of estimates of the log-partition function, using Bennett's acceptance ratio method with parallel tempering, for a single model trained on dynamically binarized MNIST (a), statically binarized MNIST (b), Omniglot (c), and Caltech-101 Silhouettes (d).
# I COMPARISON MODELS
In Table 1, we compare the performance of the discrete variational autoencoder to a selection of recent, competitive models. For dynamically binarized MNIST, we compare to deep belief networks (DBN; Hinton et al., 2006), reporting the results of Murray & Salakhutdinov (2009); importance-weighted autoencoders (IWAE; Burda et al., 2016); and ladder variational autoencoders (Ladder VAE; Sønderby et al., 2016).

For the static MNIST binarization of Salakhutdinov & Murray (2008), we compare to Hamiltonian variational inference (HVI; Salimans et al., 2015); the deep recurrent attentive writer (DRAW; Gregor et al., 2015); the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015); deep latent Gaussian models with normalizing flows (Normalizing flows; Rezende & Mohamed, 2015); and the variational Gaussian process (Tran et al., 2016).
On Omniglot, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016); ladder variational autoencoder (Ladder VAE; Sønderby et al., 2016); and the restricted Boltzmann machine (RBM; Smolensky, 1986) and deep belief network (DBN; Hinton et al., 2006), reporting the results of Burda et al. (2015).
Finally, for Caltech-101 Silhouettes, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016), reporting the results of Li & Turner (2016); reweighted wake-sleep with a deep sigmoid belief network (RWS SBN; Bornschein & Bengio, 2015); the restricted Boltzmann machine (RBM; Smolensky, 1986), reporting the results of Cho et al. (2013); and the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015).
Figure 11: Evolution of samples from a discrete VAE trained on statically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the digit ID remains constant demonstrate that the RBM has distinct modes, each of which corresponds to a single digit ID, despite being trained in a wholly unsupervised manner.
# J SUPPLEMENTARY RESULTS
To highlight the contribution of the various components of our generative model, we investigate performance on a selection of simplified models.21 First, we remove the continuous latent layers. The resulting prior, depicted in Figure 1b, consists of the bipartite Boltzmann machine (RBM), the smoothing variables ζ, and a factorial Bernoulli distribution over the observed variables x defined via a deep neural network with a logistic final layer. This probabilistic model achieves a log-likelihood of −85.2 with 200 RBM units.

Next, we further restrict the neural network defining the distribution over the observed variables x given the smoothing variables ζ to consist of a linear transformation followed by a pointwise logistic nonlinearity, analogous to a sigmoid belief network (SBN; Spiegelhalter & Lauritzen, 1990; Neal, 1992). This decreases the log-likelihood to −88.8 with 200 RBM units.

We then remove the lateral connections in the RBM, reducing it to a set of independent binary random variables. The resulting network is a noisy sigmoid belief network. That is, samples are produced by drawing samples from the independent binary random variables, multiplying by an independent noise source, and then sampling from the observed variables as in a standard SBN. With this SBN-like architecture, the discrete variational autoencoder achieves a still lower log-likelihood.

Finally, we replace the hierarchical approximating posterior of Figure 3a with the factorial approximating posterior of Figure 1a. This simplification of the approximating posterior, in addition to the prior, reduces the log-likelihood further.

21In all cases, we report the negative log-likelihood on statically binarized MNIST (Salakhutdinov & Murray, 2008), estimated with 10^4 importance-weighted samples (Burda et al., 2016).
Figure 12: Evolution of samples from a discrete VAE trained on Omniglot, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM.
Figure 13: Evolution of samples from a discrete VAE trained on Caltech-101 Silhouettes, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the silhouette shape remains similar demonstrate that the RBM has distinct modes, each of which corresponds to a single silhouette type, despite being trained in a wholly unsupervised manner.
Figures 11, 12, and 13 repeat the analysis of Figure 5 for statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes. Specifically, they show the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the object class over many successive rows in Figures 11, 12, and 13 indicates that the RBM prior has well-separated modes.

On statically binarized MNIST, the RBM still learns distinct, separated modes corresponding to most of the different digit types. However, these modes are not as well separated as in dynamically binarized MNIST, as is evident from the more rapid switching between digit types in Figure 11. There are no obvious modes for Omniglot in Figure 12; it is plausible that an RBM with 128 units could not represent enough well-separated modes to capture the large number of distinct character types in the Omniglot dataset. On Caltech-101 Silhouettes, there may be a mode corresponding to large, roughly convex blobs.
| {
"id": "1602.08734"
} |
1608.08710 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | 7 1 0 2
r a M 0 1 ] V C . s c [
3 v 0 1 7 8 0 . 8 0 6 1 : v i X r a
Published as a conference paper at ICLR 2017
# PRUNING FILTERS FOR EFFICIENT CONVNETS
Hao Li∗ University of Maryland haoli@cs.umd.edu
Asim Kadav NEC Labs America asim@nec-labs.com
Igor Durdanovic NEC Labs America igord@nec-labs.com
Hanan Samet† University of Maryland hjs@cs.umd.edu
Hans Peter Graf NEC Labs America hpg@nec-labs.com
# ABSTRACT
The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.
# INTRODUCTION
The ImageNet challenge has led to significant advancements in exploring various architectural choices in CNNs (Russakovsky et al. (2015); Krizhevsky et al. (2012); Simonyan & Zisserman (2015); Szegedy et al. (2015a); He et al. (2016)). The general trend since the past few years has been that the networks have grown deeper, with an overall increase in the number of parameters and convolution operations. These high capacity networks have significant inference costs especially when used with embedded sensors or mobile devices where computational and power resources may be limited. For these applications, in addition to accuracy, computational efficiency and small network sizes are crucial enabling factors (Szegedy et al. (2015b)). In addition, web services that provide image search and image classification APIs, which operate on a time budget while often serving hundreds of thousands of images per second, benefit significantly from lower inference times.

There has been a significant amount of work on reducing the storage and computation costs by model compression (Le Cun et al. (1989); Hassibi & Stork (1993); Srinivas & Babu (2015); Han et al. (2015); Mariet & Sra (2016)). Recently Han et al. (2015; 2016b) report impressive compression rates on AlexNet (Krizhevsky et al. (2012)) and VGGNet (Simonyan & Zisserman (2015)) by pruning weights with small magnitudes and then retraining without hurting the overall accuracy. However, pruning parameters does not necessarily reduce the computation time since the majority of the parameters removed are from the fully connected layers where the computation cost is low, e.g., the fully connected layers of VGG-16 occupy 90% of the total parameters but contribute less than 1% of the overall floating point operations (FLOP). They also demonstrate that the convolutional layers can be compressed and accelerated (Iandola et al. (2016)), but additionally require sparse

∗Work done at NEC Labs. †Supported in part by the NSF under Grant IIS-13-2079.
BLAS libraries or even specialized hardware (Han et al. (2016a)). Modern libraries that provide speedup using sparse operations over CNNs are often limited (Szegedy et al. (2015a); Liu et al. (2015)) and maintaining sparse data structures also creates an additional storage overhead which can be significant for low-precision weights.

Recent work on CNNs has yielded deep architectures with more efficient design (Szegedy et al. (2015a;b); He & Sun (2015); He et al. (2016)), in which the fully connected layers are replaced with average pooling layers (Lin et al. (2013); He et al. (2016)), which reduces the number of parameters significantly. The computation cost is also reduced by downsampling the image at an early stage to reduce the size of feature maps (He & Sun (2015)). Nevertheless, as the networks continue to become deeper, the computation costs of convolutional layers continue to dominate.

CNNs with large capacity usually have significant redundancy among different filters and feature channels. In this work, we focus on reducing the computation cost of well-trained CNNs by pruning filters. Compared to pruning weights across the network, filter pruning is a naturally structured way of pruning without introducing sparsity and therefore does not require using sparse libraries or any specialized hardware. The number of pruned filters correlates directly with acceleration by reducing the number of matrix multiplications, which is easy to tune for a target speedup. In addition, instead of layer-wise iterative fine-tuning (retraining), we adopt a one-shot pruning and retraining strategy to save retraining time for pruning filters across multiple layers, which is critical for pruning very deep networks. Finally, we observe that even ResNets, which have significantly fewer parameters and inference costs than AlexNet or VGGNet, still allow about 30% of FLOP reduction without sacrificing too much accuracy. We conduct sensitivity analysis for convolutional layers in ResNets that improves the understanding of ResNets.
# 2 RELATED WORK
The early work by Le Cun et al. (1989) introduces Optimal Brain Damage, which prunes weights with a theoretically justified saliency measure. Later, Hassibi & Stork (1993) propose Optimal Brain Surgeon to remove unimportant weights determined by second-order derivative information. Mariet & Sra (2016) reduce the network redundancy by identifying a subset of diverse neurons that does not require retraining. However, this method only operates on the fully-connected layers and introduces sparse connections.

To reduce the computation costs of the convolutional layers, past work has proposed to approximate convolutional operations by representing the weight matrix as a low rank product of two smaller matrices without changing the original number of filters (Denil et al. (2013); Jaderberg et al. (2014); Zhang et al. (2015b;a); Tai et al. (2016); Ioannou et al. (2016)). Other approaches to reduce the convolutional overheads include using FFT based convolutions (Mathieu et al. (2013)) and fast convolution using the Winograd algorithm (Lavin & Gray (2016)). Additionally, quantization (Han et al. (2016b)) and binarization (Rastegari et al. (2016); Courbariaux & Bengio (2016)) can be used to reduce the model size and lower the computation overheads. Our method can be used in addition to these techniques to reduce computation costs without incurring additional overheads.

Several works have studied removing redundant feature maps from a well trained network (Anwar et al. (2015); Polyak & Wolf (2015)). Anwar et al. (2015) introduce a three-level pruning of the weights and locate the pruning candidates using particle filtering, which selects the best combination from a number of randomly generated masks. Polyak & Wolf (2015) detect the less frequently activated feature maps with sample input data for face detection applications. We choose to analyze the filter weights and prune filters with their corresponding feature maps using a simple magnitude based measure, without examining possible combinations. We also introduce network-wide holistic approaches to prune filters for simple and complex convolutional network architectures.
Concurrently with our work, there is a growing interest in training compact CNNs with sparse constraints (Lebedev & Lempitsky (2016); Zhou et al. (2016); Wen et al. (2016)). Lebedev & Lempitsky (2016) leverage group-sparsity on the convolutional filters to achieve structured brain damage, i.e., prune the entries of the convolution kernel in a group-wise fashion. Zhou et al. (2016) add group-sparse regularization on neurons during training to learn compact CNNs with reduced filters. Wen et al. (2016) add a structured sparsity regularizer on each layer to reduce trivial filters, channels or even layers. In the filter-level pruning, all the above works use the ℓ2,1-norm as a regularizer.
Similar to the above work, we use the ℓ1-norm to select unimportant filters and physically prune them. Our fine-tuning process is the same as the conventional training procedure, without introducing additional regularization. Our approach does not introduce extra layer-wise meta-parameters for the regularizer except for the percentage of filters to be pruned, which is directly related to the desired speedup. By employing stage-wise pruning, we can set a single pruning rate for all layers in one stage.
# 3 PRUNING FILTERS AND FEATURE MAPS
Let ni denote the number of input channels for the ith convolutional layer and hi/wi be the height/width of the input feature maps. The convolutional layer transforms the input feature maps xi ∈ R^{ni×hi×wi} into the output feature maps xi+1 ∈ R^{ni+1×hi+1×wi+1}, which are used as input feature maps for the next convolutional layer. This is achieved by applying ni+1 3D filters Fi,j ∈ R^{ni×k×k} on the ni input channels, in which one filter generates one feature map. Each filter is composed by ni 2D kernels K ∈ R^{k×k} (e.g., 3 × 3). All the filters, together, constitute the kernel matrix Fi ∈ R^{ni×ni+1×k×k}. The number of operations of the convolutional layer is ni+1 ni k² hi+1 wi+1. As shown in Figure 1, when a filter Fi,j is pruned, its corresponding feature map xi+1,j is removed, which reduces ni k² hi+1 wi+1 operations. The kernels that apply on the removed feature maps from the filters of the next convolutional layer are also removed, which saves an additional ni+2 k² hi+2 wi+2 operations. Pruning m filters of layer i will reduce m/ni+1 of the computation cost for both layers i and i + 1.
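The arithmetic above can be made explicit with a small Python helper (our illustration; the function names and signatures are not from the paper):

```python
def conv_flops(n_in, n_out, k, h_out, w_out):
    """Multiply-accumulate count of a conv layer:
    n_out * n_in * k^2 * h_out * w_out."""
    return n_out * n_in * k * k * h_out * w_out

def flops_saved(m, n_i, n_i2, k, h1, w1, h2, w2):
    """Operations removed by pruning m filters from layer i:
    m * n_i * k^2 * h_{i+1} * w_{i+1} in layer i itself, plus
    m * n_{i+2} * k^2 * h_{i+2} * w_{i+2} from the kernels of layer i+1
    that consumed the removed feature maps."""
    return m * n_i * k * k * h1 * w1 + m * n_i2 * k * k * h2 * w2
```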
Figure 1: Pruning a filter results in removal of its corresponding feature map and related kernels in the next layer.
3.1 DETERMINING WHICH FILTERS TO PRUNE WITHIN A SINGLE LAYER
Our method prunes the less useful filters from a well-trained model for computational efficiency while minimizing the accuracy drop. We measure the relative importance of a filter in each layer by calculating the sum of its absolute weights Σ|Fi,j|, i.e., its ℓ1-norm ||Fi,j||1. Since the number of input channels, ni, is the same across filters, Σ|Fi,j| also represents the average magnitude of its kernel weights. This value gives an expectation of the magnitude of the output feature map. Filters with smaller kernel weights tend to produce feature maps with weak activations as compared to the other filters in that layer. Figure 2(a) illustrates the distribution of filters' absolute weights sum for each convolutional layer in a VGG-16 network trained on the CIFAR-10 dataset, where the distribution varies significantly across layers. We find that pruning the smallest filters works better in comparison with pruning the same number of random or largest filters (Section 4.4). Compared to other criteria for activation-based feature map pruning (Section 4.5), we find the ℓ1-norm is a good criterion for data-free filter selection.
The procedure of pruning m filters from the ith convolutional layer is as follows:
1. For each filter Fi,j, calculate the sum of its absolute kernel weights sj = Σl=1..ni Σ|Kl|.

2. Sort the filters by sj.

3. Prune m filters with the smallest sum values and their corresponding feature maps. The kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.

4. A new kernel matrix is created for both the ith and i + 1th layers, and the remaining kernel weights are copied to the new model (a code sketch of this procedure follows below).
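To make the procedure concrete, a minimal NumPy sketch (the paper's implementation is in Torch7; this reimplementation and its tensor layout are our own assumptions):

```python
import numpy as np

def prune_filters_l1(W_i, W_next, m):
    """Steps 1-4 for one layer. W_i has shape (n_out, n_in, k, k);
    W_next, the next conv layer, has shape (n_out_next, n_out, k, k)."""
    s = np.abs(W_i).sum(axis=(1, 2, 3))      # step 1: s_j per filter
    keep = np.sort(np.argsort(s)[m:])        # steps 2-3: drop the m smallest
    # step 4: new kernel matrices; kernels of the next layer that acted on
    # the removed feature maps (its input channels) are removed as well
    return W_i[keep], W_next[:, keep]
```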
(a) Filters are ranked by sj (b) Prune the smallest filters (c) Prune and retrain
Figure 2: (a) Sorting filters by absolute weights sum for each layer of VGG-16 on CIFAR-10. The x-axis is the filter index divided by the total number of filters. The y-axis is the filter weight sum divided by the max sum value among filters in that layer. (b) Pruning filters with the lowest absolute weights sum and their corresponding test accuracies on CIFAR-10. (c) Prune and retrain for each single layer of VGG-16 on CIFAR-10. Some layers are sensitive and it can be harder to recover accuracy after pruning them.

Relationship to pruning weights  Pruning filters with low absolute weights sum is similar to pruning low magnitude weights (Han et al. (2015)). Magnitude-based weight pruning may prune away whole filters when all the kernel weights of a filter are lower than a given threshold. However, it requires a careful tuning of the threshold and it is difficult to predict the exact number of filters that will eventually be pruned. Furthermore, it generates sparse convolutional kernels which can be hard to accelerate given the lack of efficient sparse libraries, especially for the case of low-sparsity.

Relationship to group-sparse regularization on filters  Recent work (Zhou et al. (2016); Wen et al. (2016)) apply group-sparse regularization (Σj ||Fi,j||2, i.e., the ℓ2,1-norm) on convolutional filters, which also favors zeroing out filters with small ℓ2-norms, i.e., Fi,j = 0. In practice, we do not observe a noticeable difference between the ℓ2-norm and the ℓ1-norm for filter selection, as the important filters tend to have large values for both measures (see Appendix). Zeroing out weights of multiple filters during training has a similar effect to pruning filters with the strategy of iterative pruning and retraining as introduced in Section 3.4.
3.2 DETERMINING SINGLE LAYER'S SENSITIVITY TO PRUNING

To understand the sensitivity of each layer, we prune each layer independently and evaluate the resulting pruned network's accuracy on the validation set. Figure 2(b) shows that layers that maintain their accuracy as filters are pruned away correspond to layers with larger slopes in Figure 2(a). On the contrary, layers with relatively flat slopes are more sensitive to pruning. We empirically determine the number of filters to prune for each layer based on their sensitivity to pruning. For deep networks such as VGG-16 or ResNets, we observe that layers in the same stage (with the same feature map size) have a similar sensitivity to pruning. To avoid introducing layer-wise meta-parameters, we use the same pruning ratio for all layers in the same stage. For layers that are sensitive to pruning, we prune a smaller percentage of these layers or completely skip pruning them.

# 3.3 PRUNING FILTERS ACROSS MULTIPLE LAYERS

We now discuss how to prune filters across the network. Previous work prunes the weights on a layer by layer basis, followed by iteratively retraining and compensating for any loss of accuracy (Han et al. (2015)). However, understanding how to prune filters of multiple layers at once can be useful: 1) For deep networks, pruning and retraining on a layer by layer basis can be extremely time-consuming 2) Pruning layers across the network gives a holistic view of the robustness of the network resulting in a smaller network 3) For complex networks, a holistic approach may be necessary. For example, for the ResNet, pruning the identity feature maps or the second layer of each residual block results in additional pruning of other layers.

To prune filters across multiple layers, we consider two strategies for layer-wise filter selection:
• Independent pruning determines which filters should be pruned at each layer independent of other layers.

• Greedy pruning accounts for the filters that have been removed in the previous layers. This strategy does not consider the kernels for the previously pruned feature maps while calculating the sum of absolute weights.

Figure 3 illustrates the difference between the two approaches in calculating the sum of absolute weights. The greedy approach, though not globally optimal, is holistic and results in pruned networks with higher accuracy especially when many filters are pruned (a small code sketch follows after Figure 3).
Figure 3: Pruning filters across consecutive layers. The independent pruning strategy calculates the filter sum (columns marked in green) without considering feature maps removed in previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a (ni+1 − 1) × (ni+2 − 1) kernel matrix.
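A small NumPy sketch (our illustration, with an assumed tensor layout of (n_out, n_in, k, k)) of the filter-sum computation under the two strategies:

```python
import numpy as np

def filter_sums(W_i, pruned_in=(), greedy=True):
    """Per-filter sum of absolute kernel weights for layer i. Under greedy
    pruning, kernels that act on feature maps already pruned from the
    previous layer (`pruned_in`) are excluded from the sum; independent
    pruning keeps them."""
    W = W_i.copy()
    if greedy and len(pruned_in) > 0:
        W[:, list(pruned_in)] = 0.0   # ignore kernels of removed input maps
    return np.abs(W).sum(axis=(1, 2, 3))
```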
Figure 4: Pruning residual blocks with the projection shortcut. The filters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The first layer of the residual block can be pruned without restrictions.

For simpler CNNs like VGGNet or AlexNet, we can easily prune any of the filters in any convolutional layer. However, for complex network architectures such as Residual networks (He et al. (2016)), pruning filters may not be straightforward. The architecture of ResNet imposes restrictions and the filters need to be pruned carefully. We show the filter pruning for residual blocks with projection mapping in Figure 4. Here, the filters of the first layer in the residual block can be arbitrarily pruned, as it does not change the number of output feature maps of the block. However, the correspondence between the output feature maps of the second convolutional layer and the identity feature maps makes it difficult to prune. Hence, to prune the second convolutional layer of the residual block, the corresponding projected feature maps must also be pruned. Since the identical feature maps are more important than the added residual maps, the feature maps to be pruned should be determined by the pruning results of the shortcut layer. To determine which identity feature maps are to be pruned, we use the same selection criterion based on the filters of the shortcut convolutional layers (with 1 × 1 kernels). The second layer of the residual block is pruned with the same filter index as selected by the pruning of the shortcut layer.
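A minimal sketch (our own hypothetical helper) of this shortcut-guided selection, assuming the 1 × 1 projection filters are stored as (n_out, n_in, 1, 1):

```python
import numpy as np

def shortcut_guided_keep(W_shortcut, m):
    """Indices of output feature maps to keep in a residual block with a
    projection shortcut. The 1x1 shortcut filters determine the pruned
    indices; the block's second conv layer is then pruned with these
    same indices."""
    s = np.abs(W_shortcut).sum(axis=(1, 2, 3))
    return np.sort(np.argsort(s)[m:])
```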
# 3.4 RETRAINING PRUNED NETWORKS TO REGAIN ACCURACY
After pruning the filters, the performance degradation should be compensated by retraining the network. There are two strategies to prune the filters across multiple layers:
1. Prune once and retrain: Prune filters of multiple layers at once and retrain them until the original accuracy is restored.

2. Prune and retrain iteratively: Prune filters layer by layer or filter by filter and then retrain iteratively. The model is retrained before pruning the next layer for the weights to adapt to the changes from the pruning process.

We find that for the layers that are resilient to pruning, the prune once and retrain strategy can be used to prune away significant portions of the network and any loss in accuracy can be regained by retraining for a short period of time (less than the original training time). However, when some filters from the sensitive layers are pruned away or large portions of the networks are pruned away, it may not be possible to recover the original accuracy. Iterative pruning and retraining may yield better results, but the iterative process requires many more epochs especially for very deep networks.
# 4 EXPERIMENTS
We prune two types of networks: simple CNNs (VGG-16 on CIFAR-10) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet). Unlike AlexNet or VGG (on ImageNet) that are often used to demonstrate model compression, both VGG (on CIFAR-10) and Residual networks have fewer parameters in the fully connected layers. Hence, pruning a large percentage of parameters from these networks is challenging. We implement our filter pruning method in Torch7 (Collobert et al. (2011)). When filters are pruned, a new model with fewer filters is created and the remaining parameters of the modified layers as well as the unaffected layers are copied into the new model. Furthermore, if a convolutional layer is pruned, the weights of the subsequent batch normalization layer are also removed. To get the baseline accuracies for each network, we train each model from scratch and follow the same pre-processing and hyper-parameters as ResNet (He et al. (2016)). For retraining, we use a constant learning rate 0.001 and retrain 40 epochs for CIFAR-10 and 20 epochs for ImageNet, which represents one-fourth of the original training epochs. Past work has reported up to 3× the original training times to retrain pruned networks (Han et al. (2015)).

Table 1: Overall results. The best test/validation accuracy during the retraining process is reported. Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with a small capacity.

Model                               Error(%)   FLOP         Pruned %   Parameters   Pruned %
VGG-16                              6.75       3.13 × 10^8             1.5 × 10^7
VGG-16-pruned-A                     6.60       2.06 × 10^8  34.2%      5.4 × 10^6   64.0%
VGG-16-pruned-A scratch-train       6.88
ResNet-56                           6.96       1.25 × 10^8             8.5 × 10^5
ResNet-56-pruned-A                  6.90       1.12 × 10^8  10.4%      7.7 × 10^5   9.4%
ResNet-56-pruned-B                  6.94       9.09 × 10^7  27.6%      7.3 × 10^5   13.7%
ResNet-56-pruned-B scratch-train    8.69
ResNet-110                          6.47       2.53 × 10^8             1.72 × 10^6
ResNet-110-pruned-A                 6.45       2.13 × 10^8  15.9%      1.68 × 10^6  2.3%
ResNet-110-pruned-B                 6.70       1.55 × 10^8  38.6%      1.16 × 10^6  32.4%
ResNet-110-pruned-B scratch-train   7.06
ResNet-34                           26.77      3.64 × 10^9             2.16 × 10^7
ResNet-34-pruned-A                  27.44      3.08 × 10^9  15.5%      1.99 × 10^7  7.6%
ResNet-34-pruned-B                  27.83      2.76 × 10^9  24.2%      1.93 × 10^7  10.8%
ResNet-34-pruned-C                  27.52      3.37 × 10^9  7.5%       2.01 × 10^7  7.2%
# 4.1 VGG-16 ON CIFAR-10
VGG-16 is a high-capacity network originally designed for the ImageNet dataset (Simonyan & Zisserman (2015)). Recently, Zagoruyko (2015) applies a slightly modified version of the model on CIFAR-10 and achieves state of the art results. As shown in Table 2, VGG-16 on CIFAR-10 consists of 13 convolutional layers and 2 fully connected layers, in which the fully connected layers do not occupy large portions of parameters due to the small input size and fewer hidden units. We use the model described in Zagoruyko (2015) but add Batch Normalization (Ioffe & Szegedy (2015))
Table 2: VGG-16 on CIFAR-10 and the pruned model. The last two columns show the number of feature maps and the reduced percentage of FLOP from the pruned model.

layer type   wi × hi   #Maps   FLOP      #Params   #Maps (pruned)   FLOP%
Conv_1       32 × 32   64      1.8E+06   1.7E+03   32               50%
Conv_2       32 × 32   64      3.8E+07   3.7E+04   64               50%
Conv_3       16 × 16   128     1.9E+07   7.4E+04   128              0%
Conv_4       16 × 16   128     3.8E+07   1.5E+05   128              0%
Conv_5       8 × 8     256     1.9E+07   2.9E+05   256              0%
Conv_6       8 × 8     256     3.8E+07   5.9E+05   256              0%
Conv_7       8 × 8     256     3.8E+07   5.9E+05   256              0%
Conv_8       4 × 4     512     1.9E+07   1.2E+06   256              50%
Conv_9       4 × 4     512     3.8E+07   2.4E+06   256              75%
Conv_10      4 × 4     512     3.8E+07   2.4E+06   256              75%
Conv_11      2 × 2     512     9.4E+06   2.4E+06   256              75%
Conv_12      2 × 2     512     9.4E+06   2.4E+06   256              75%
Conv_13      2 × 2     512     9.4E+06   2.4E+06   256              75%
Linear       1         512     2.6E+05   2.6E+05   512              50%
Linear       1         10      5.1E+03   5.1E+03   10               0%
Total                          3.1E+08   1.5E+07                    34%
layer after each convolutional layer and the first linear layer, without using Dropout (Srivastava et al. (2014)). Note that when the last convolutional layer is pruned, the input to the linear layer is changed and the connections are also removed.

As shown in Figure 2(b), each of the convolutional layers with 512 feature maps can drop at least 60% of filters without affecting the accuracy. Figure 2(c) shows that with retraining, almost 90% of the filters of these layers can be safely removed. One possible explanation is that these filters operate on 4 × 4 or 2 × 2 feature maps, which may have no meaningful spatial connections in such small dimensions. For instance, ResNets for CIFAR-10 do not perform any convolutions for feature maps below 8 × 8 dimensions. Unlike previous work (Zeiler & Fergus (2014); Han et al. (2015)), we observe that the first layer is robust to pruning as compared to the next few layers. This is possible for a simple dataset like CIFAR-10, on which the model does not learn as many useful filters as on ImageNet (as shown in Figure 5). Even when 80% of the filters from the first layer are pruned, the number of remaining filters (12) is still larger than the number of raw input channels. However, when removing 80% of the filters from the second layer, the layer corresponds to a 64 to 12 mapping, which may lose significant information from previous layers, thereby hurting the accuracy. With 50% of the filters being pruned in layer 1 and from 8 to 13, we achieve 34% FLOP reduction for the same accuracy.

Figure 5: Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by ℓ1-norm.
4.2 RESNET-56/110 ON CIFAR-10
ResNets for CIFAR-10 have three stages of residual blocks for feature maps with sizes of 32 × 32, 16 × 16 and 8 × 8. Each stage has the same number of residual blocks. When the number of feature maps increases, the shortcut layer provides an identity mapping with an additional zero padding for the increased dimensions. Since there is no projection mapping for choosing the identity feature maps, we only consider pruning the first layer of the residual block. As shown in Figure 6, most of the layers are robust to pruning. For ResNet-110, pruning some single layers without retraining even
Figure 6: Sensitivity to pruning for the first layer of each residual block of ResNet-56/110.

improves the performance. In addition, we find that layers that are sensitive to pruning (layers 20, 38 and 54 for ResNet-56, layer 36, 38 and 74 for ResNet-110) lie at the residual blocks close to the layers where the number of feature maps changes, e.g., the first and the last residual blocks for each stage. We believe this happens because the precise residual errors are necessary for the newly added empty feature maps.

The retraining performance can be improved by skipping these sensitive layers. As shown in Table 1, ResNet-56-pruned-A improves the performance by pruning 10% filters while skipping the sensitive layers 16, 20, 38 and 54. In addition, we find that deeper layers are more sensitive to pruning than layers in the earlier stages of the network. Hence, we use a different pruning rate for each stage. We use pi to denote the pruning rate for layers in the ith stage. ResNet-56-pruned-B skips more layers (16, 18, 20, 34, 38, 54) and prunes layers with p1=60%, p2=30% and p3=10%. For ResNet-110, the first pruned model gets a slightly better result with p1=50% and layer 36 skipped. ResNet-110-pruned-B skips layers 36, 38, 74 and prunes with p1=50%, p2=40% and p3=30%. When there are more than two residual blocks at each stage, the middle residual blocks may be redundant and can be easily pruned. This might explain why ResNet-110 is easier to prune than ResNet-56.
4.3 RESNET-34 ON ILSVRC2012
ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of 56 × 56, 28 × 28, 14 × 14 and 7 × 7. ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We first prune the first layer of each residual block. Figure 7 shows the sensitivity of the first layer of each residual block. Similar to ResNet-56/110, the first and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26, 28, 30, 32). We skip those layers and prune the remaining layers at each stage equally. In Table 1 we compare two configurations of pruning percentages for the first three stages: (A) p1=30%, p2=30%, p3=30%; (B) p1=50%, p2=60%, p3=40%. Option-B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-56/110, we can predict that ResNet-34 is relatively more difficult to prune as compared to deeper ResNets.

We also prune the identity shortcuts and the second convolutional layer of the residual blocks. As these layers have the same number of filters, they are pruned equally. As shown in Figure 7(b), these layers are more sensitive to pruning than the first layers. With retraining, ResNet-34-pruned-C prunes the third stage with p3=20% and results in 7.5% FLOP reduction with 0.75% loss in accuracy. Therefore, pruning the first layer of the residual block is more effective at reducing the overall FLOP
(a) Pruning the first layer of residual blocks (b) Pruning the second layer of residual blocks
Figure 7: Sensitivity to pruning for the residual blocks of ResNet-34.
than pruning the second layer. This finding also correlates with the bottleneck block design for deeper ResNets, which first reduces the dimension of input feature maps for the residual layer and then increases the dimension to match the identity mapping.
# 4.4 COMPARISON WITH PRUNING RANDOM FILTERS AND LARGEST FILTERS
We compare our approach with pruning random filters and largest filters. As shown in Figure 8, pruning the smallest filters outperforms pruning random filters for most of the layers at different pruning ratios. For example, smallest filter pruning has better accuracy than random filter pruning for all layers with the pruning ratio of 90%. The accuracy of pruning filters with the largest ℓ1-norms drops quickly as the pruning ratio increases, which indicates the importance of filters with larger ℓ1-norms.
Figure 8: Comparison of three pruning methods for VGG-16 on CIFAR-10: pruning the smallest filters, pruning random filters and pruning the largest filters. In random filter pruning, the order of filters to be pruned is randomly permuted.
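For reference, the three baselines differ only in how the removal set is chosen; a small sketch of the selection step under assumed NumPy weight tensors (this is illustrative, not the paper's code):

```python
import numpy as np

def prune_set(weights, ratio, mode, rng=np.random):
    """Filter indices to remove for one layer under the three criteria
    compared in Figure 8: smallest l1-norm, random, largest l1-norm."""
    l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_prune = int(ratio * len(l1))
    order = np.argsort(l1)                    # ascending l1-norm
    if mode == "smallest":
        return order[:n_prune]
    if mode == "largest":
        return order[::-1][:n_prune]
    if mode == "random":
        return rng.permutation(len(l1))[:n_prune]
    raise ValueError(mode)
```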
# 4.5 COMPARISON WITH ACTIVATION-BASED FEATURE MAP PRUNING
The activation-based feature map pruning method removes the feature maps with weak activation patterns and their corresponding filters and kernels (Polyak & Wolf (2015)), which needs sample data as input to determine which feature maps to prune. A feature map $\mathbf{x}_{i+1,j} \in \mathbb{R}^{w_{i+1} \times h_{i+1}}$ is generated by applying filter $\mathcal{F}_{i,j} \in \mathbb{R}^{n_i \times k \times k}$ to the feature maps of the previous layer $\mathbf{x}_i \in \mathbb{R}^{n_i \times w_i \times h_i}$, i.e., $\mathbf{x}_{i+1,j} = \mathcal{F}_{i,j} * \mathbf{x}_i$. Given $N$ randomly selected images $\{\mathbf{x}^n\}_{n=1}^{N}$ from the training set, the statistics of each feature map can be estimated with one epoch forward pass of the $N$ sampled data. Note that we calculate statistics on the feature maps generated from the convolution operations before batch normalization or non-linear activation. We compare our $\ell_1$-norm based filter pruning with feature map pruning using the following criteria: $\sigma_{\text{mean-mean}}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N}\text{mean}(\mathbf{x}_{i,j}^{n})$, $\sigma_{\text{mean-std}}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N}\text{std}(\mathbf{x}_{i,j}^{n})$, $\sigma_{\text{mean-}\ell_1}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N}\|\mathbf{x}_{i,j}^{n}\|_1$, $\sigma_{\text{mean-}\ell_2}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N}\|\mathbf{x}_{i,j}^{n}\|_2$ and
(a) $\|\mathcal{F}_{i,j}\|_1$ (b) $\sigma_{\text{mean-mean}}$ (c) $\sigma_{\text{mean-std}}$ (d) $\sigma_{\text{mean-}\ell_1}$ (e) $\sigma_{\text{mean-}\ell_2}$ (f) $\sigma_{\text{var-}\ell_2}$
Figure 9: Comparison of activation-based feature map pruning for VGG-16 on CIFAR-10.
$\sigma_{\text{var-}\ell_2}(\mathbf{x}_{i,j}) = \text{var}(\{\|\mathbf{x}_{i,j}^{n}\|_2\}_{n=1}^{N})$, where mean, std and var are standard statistics (average, standard deviation and variance) of the input. Here, $\sigma_{\text{var-}\ell_2}$ is the contribution variance of channel criterion proposed in Polyak & Wolf (2015), which is motivated by the intuition that an unimportant feature map has almost similar outputs for the whole training data and acts like an additional bias.
The estimation of the criteria becomes more accurate when more sample data is used. Here we use the whole training set ($N = 50{,}000$ for CIFAR-10) to compute the statistics. The performance of feature map pruning with the above criteria for each layer is shown in Figure 9. Smallest filter pruning outperforms feature map pruning with the criteria $\sigma_{\text{mean-mean}}$, $\sigma_{\text{mean-}\ell_1}$, $\sigma_{\text{mean-}\ell_2}$ and $\sigma_{\text{var-}\ell_2}$. The $\sigma_{\text{mean-std}}$ criterion has better or similar performance to $\ell_1$-norm up to a pruning ratio of 60%. However, its performance drops quickly after that, especially for layers conv_1, conv_2 and conv_3. We find $\ell_1$-norm is a good heuristic for filter selection considering that it is data-free.
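The five statistics are cheap to compute once the pre-batch-normalization, pre-activation feature maps have been collected over the sampled images. A minimal sketch, assuming the activations for one layer fit in a single (N, C, H, W) array:

```python
import numpy as np

def feature_map_criteria(acts):
    """Per-channel activation statistics for one layer.
    acts: (N, C, H, W) feature maps over N sampled images."""
    N, C = acts.shape[:2]
    flat = acts.reshape(N, C, -1)                     # (N, C, H*W)
    l2 = np.linalg.norm(flat, axis=2)                 # (N, C)
    return {
        "mean-mean": flat.mean(axis=2).mean(axis=0),  # avg of per-image means
        "mean-std":  flat.std(axis=2).mean(axis=0),   # avg of per-image stds
        "mean-l1":   np.abs(flat).sum(axis=2).mean(axis=0),
        "mean-l2":   l2.mean(axis=0),
        "var-l2":    l2.var(axis=0),                  # contribution variance
    }

scores = feature_map_criteria(np.random.randn(8, 64, 32, 32))
```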
# 5 CONCLUSIONS
Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-10) and deep ResNets without significant loss in the original accuracy. Instead of pruning with specific layer-wise hyperparameters and time-consuming iterative retraining, we use the one-shot pruning and retraining strategy for simplicity and ease of implementation. By performing lesion studies on very deep CNNs, we identify layers that are robust or sensitive to pruning, which can be useful for further understanding and improving the architectures.
# ACKNOWLEDGMENTS
The authors would like to thank the anonymous reviewers for their valuable feedback.
# REFERENCES
Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured Pruning of Deep Convolutional Neural Networks. arXiv preprint arXiv:1512.08571, 2015.
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.

Matthieu Courbariaux and Yoshua Bengio. Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.

Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both Weights and Connections for Efficient Neural Network. In NIPS, 2015.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. In ISCA, 2016a.

Song Han, Huizi Mao, and William J Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In ICLR, 2016b.

Babak Hassibi and David G Stork. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. In NIPS, 1993.

Kaiming He and Jian Sun. Convolutional Neural Networks at Constrained Time Cost. In CVPR, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.

Forrest Iandola, Matthew Moskewicz, Khalid Ashraf, Song Han, William Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016.

Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with Low-Rank Filters for Efficient Image Classification. In ICLR, 2016.

Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 2015.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.

Andrew Lavin and Scott Gray. Fast Algorithms for Convolutional Neural Networks. In CVPR, 2016.

Yann LeCun, John S Denker, and Sara A Solla. Optimal Brain Damage. In NIPS, 1989.

Vadim Lebedev and Victor Lempitsky. Fast ConvNets Using Group-wise Brain Damage. In CVPR, 2016.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in Network. arXiv preprint arXiv:1312.4400, 2013.

Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse Convolutional Neural Networks. In CVPR, 2015.

Zelda Mariet and Suvrit Sra. Diversity Networks. In ICLR, 2016.

Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast Training of Convolutional Networks through FFTs. arXiv preprint arXiv:1312.5851, 2013.

Adam Polyak and Lior Wolf. Channel-Level Acceleration of Deep Face Representations. IEEE Access, 2015.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. In ECCV, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.

Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015.

Suraj Srinivas and R Venkatesh Babu. Data-free Parameter Pruning for Deep Neural Networks. In BMVC, 2015.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. In CVPR, 2015a.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception Architecture for Computer Vision. arXiv preprint arXiv:1512.00567, 2015b.

Cheng Tai, Tong Xiao, Xiaogang Wang, and Weinan E. Convolutional neural networks with low-rank regularization. In ICLR, 2016.

Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning Structured Sparsity in Deep Neural Networks. In NIPS, 2016.

Sergey Zagoruyko. 92.45% on CIFAR-10 in Torch. http://torch.ch/blog/2015/07/30/cifar.html, 2015.

Matthew D Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. In ECCV, 2014.

Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classification and Detection. IEEE T-PAMI, 2015a.

Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015b.

Hao Zhou, Jose Alvarez, and Fatih Porikli. Less Is More: Towards Compact CNNs. In ECCV, 2016.
# 6 APPENDIX
# 6.1 COMPARISON WITH $\ell_2$-NORM BASED FILTER PRUNING
We compare the $\ell_1$-norm with the $\ell_2$-norm for filter pruning. As shown in Figure 10, the $\ell_2$-norm works slightly better than the $\ell_1$-norm for layer conv_2. There is no significant difference between the two norms for other layers.
(a) $\|\mathcal{F}_{i,j}\|_1$ (b) $\|\mathcal{F}_{i,j}\|_2$
Figure 10: Comparison of $\ell_1$-norm and $\ell_2$-norm based filter pruning for VGG-16 on CIFAR-10.
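Switching between the two norms only changes the per-filter score, as the short illustrative sketch below shows (NumPy; shapes are assumptions):

```python
import numpy as np

def filter_norms(weights, p):
    """Per-filter lp-norm of a conv layer's kernels, p in {1, 2}."""
    flat = weights.reshape(weights.shape[0], -1)
    return np.linalg.norm(flat, ord=p, axis=1)

w = np.random.randn(64, 3, 3, 3)
rank_l1 = np.argsort(filter_norms(w, 1))   # ascending; prune from the front
rank_l2 = np.argsort(filter_norms(w, 2))   # the two orderings largely agree
```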
# 6.2 FLOP AND WALL-CLOCK TIME
FLOP is a commonly used measure to compare the computation complexities of CNNs. It is easy to compute and can be done statically, which is independent of the underlying hardware and software implementations. Since we physically prune the filters by creating a smaller model and then copying the weights, there are no masks or sparsity introduced to the original dense BLAS operations. Therefore the FLOP and wall-clock time of the pruned model are the same as those of a model created from scratch with a smaller number of filters.
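As a sketch of the counting convention (operations in Conv and FC layers only; whether a multiply-add counts as one or two operations varies across papers, so the numbers below are illustrative, not the paper's exact accounting):

```python
def conv_flops(n_in, n_out, k, h_out, w_out):
    """Multiply-accumulate count of one conv layer (biases ignored)."""
    return n_out * h_out * w_out * n_in * k * k

def fc_flops(n_in, n_out):
    return n_in * n_out

# Toy example: removing ~30% of a layer's filters shrinks this layer's
# output channels and the next layer's input channels.
full   = conv_flops(128, 128, 3, 16, 16) + conv_flops(128, 128, 3, 16, 16)
pruned = conv_flops(128,  90, 3, 16, 16) + conv_flops( 90, 128, 3, 16, 16)
print(1 - pruned / full)   # ~0.30 of the FLOP removed
```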
We report the inference time of the original model and the pruned model on the test set of CIFAR-10 and the validation set of ILSVRC 2012, which contain 10,000 32 × 32 images and 50,000 224 × 224 images respectively. The ILSVRC 2012 dataset is used only for ResNet-34. The evaluation is conducted in Torch7 with a Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size of 128. As shown in Table 3, the saved inference time is close to the FLOP reduction. Note that the FLOP number only considers the operations in the Conv and FC layers, while some calculations such as Batch Normalization and other overheads are not accounted for.
Table 3: The reduction of FLOP and wall-clock time for inference.
| Model | FLOP | Pruned % | Time (s) | Saved % |
| --- | --- | --- | --- | --- |
| VGG-16 | 3.13 × 10^8 | | 1.23 | |
| VGG-16-pruned-A | 2.06 × 10^8 | 34.2% | 0.73 | 40.7% |
| ResNet-56 | 1.25 × 10^8 | | 1.31 | |
| ResNet-56-pruned-B | 9.09 × 10^7 | 27.6% | 0.99 | 24.4% |
| ResNet-110 | 2.53 × 10^8 | | 2.38 | |
| ResNet-110-pruned-B | 1.55 × 10^8 | 38.6% | 1.86 | 21.8% |
| ResNet-34 | 3.64 × 10^9 | | 36.02 | |
| ResNet-34-pruned-B | 2.76 × 10^9 | 24.2% | 22.93 | 28.0% |
# What makes ImageNet good for transfer learning?
# Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
Berkeley Artificial Intelligence Research (BAIR) Laboratory, UC Berkeley
{minyoung,pulkitag,aaefros}@berkeley.edu
# Abstract
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks raises the question: what is it about the ImageNet dataset that makes the learnt features as good as they are? This work provides an empirical investigation into the various facets of this question, such as, looking at the importance of the amount of examples, number of classes, balance between images-per-class and classes, and the role of fine and coarse grained recognition. We pre-train CNN features on various subsets of the ImageNet dataset and evaluate transfer performance on a variety of standard vision tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical, do not significantly affect transfer performance.
# 1. Introduction
It has become increasingly common within the computer vision community to treat image classification on ImageNet [35] not as an end in itself, but rather as a "pretext task" for training deep convolutional neural networks (CNNs [25, 22]) to learn good general-purpose features. This practice of first training a CNN to perform image classification on ImageNet (i.e. pre-training) and then adapting these features for a new target task (i.e. fine-tuning) has become the de facto standard for solving a wide range of computer vision problems. Using ImageNet pre-trained CNN features, impressive results have been obtained on several image classification datasets [10, 33], as well as object detection [12, 37], action recognition [38], human pose estimation [6], image segmentation [7], optical flow [42], image captioning [9, 19] and others [24].

Given the success of ImageNet pre-trained CNN features, it is only natural to ask: what is it about the ImageNet dataset that makes the learnt features as good as they are? One school of thought believes that it is the sheer size of the dataset (1.2 million labeled images) that forces the representation to be general. Others argue that it is the large number of distinct object classes (1000), which forces the network to learn a hierarchy of generalizable features. Yet others believe that the secret sauce is not just the large number of classes, but the fact that many of these classes are visually similar (e.g. many different breeds of dogs), turning this into a fine-grained recognition task and pushing the representation to "work harder". But, while almost everyone in computer vision seems to have their own opinion on this hot topic, little empirical evidence has been produced so far.

In this work, we systematically investigate which aspects of the ImageNet task are most critical for learning good general-purpose features. We evaluate the features by fine-tuning on three tasks: object detection on the PASCAL-VOC 2007 dataset (PASCAL-DET), action classification on the PASCAL-VOC 2012 dataset (PASCAL-ACT-CLS) and scene classification on the SUN dataset (SUN-CLS); see Section 3 for more details.
The paper is organized as a set of experiments answering a list of key questions about feature learning with ImageNet. The following is a summary of our main findings:
1. How many pre-training ImageNet examples are sufficient for transfer learning? Pre-training with only half the ImageNet data (500 images per class instead of 1000) results in only a small drop in transfer learning performance (1.5 mAP drop on PASCAL-DET). This drop is much smaller than the drop on the ImageNet classification task itself. See Section 4 and Figure 1 for details.

2. How many pre-training ImageNet classes are sufficient for transfer learning? Pre-training with an order of magnitude fewer classes (127 classes instead of 1000) results in only a small drop in transfer learning performance (2.8 mAP drop on PASCAL-DET). Curiously, we also found that for some transfer tasks, pre-training with fewer classes leads to better performance. See Section 5.1 and Figure 2 for details.
Figure 1: Change in transfer task performance of a CNN pre-trained with varying number of images per ImageNet class. The left y-axis is the mean class accuracy used for SUN and ImageNet CLS. The right y-axis measures mAP for PASCAL DET and ACTION-CLS. The number of examples per class is reduced by random sampling. Accuracy on the ImageNet classification task increases faster as compared to performance on transfer tasks.
3. How important is fine-grained recognition for learning good features for transfer learning? Features pre-trained with a subset of ImageNet classes that do not require fine-grained discrimination still demonstrate good transfer performance. See Section 5.2 and Figure 2 for details.

4. Does pre-training on coarse classes produce features capable of fine-grained recognition (and vice versa) on ImageNet itself? We found that a CNN trained to classify only between the 127 coarse ImageNet classes produces features capable of telling apart fine-grained ImageNet classes whose labels it has never seen in training (Section 5.3). Likewise, a CNN trained to classify the 1000 ImageNet classes is able to distinguish between unseen coarse-level classes higher up in the WordNet hierarchy (Section 5.4).

5. Given the same budget of pre-training images, should we have more classes or more images per class? Training with fewer classes but more images per class performs slightly better at transfer tasks than training with more classes but fewer images per class. See Section 5.5 and Table 2 for details.

6. Is more data always helpful? We found that training with 771 ImageNet classes (out of 1000) that exclude all PASCAL VOC classes achieves nearly the same performance on PASCAL-DET as training on complete ImageNet. Further experiments confirm that blindly adding more training data does not always lead to better performance and can sometimes hurt performance. See Section 6 and Figure 9 for more details.
Figure 2: Change in transfer task performance with varying number of pre-training ImageNet classes. The number of ImageNet classes is varied using the technique described in Section 5.1. With only 486 pre-training classes, transfer performances are unaffected, and only a small drop is observed when only 79 classes are used for pre-training. The ImageNet classification performance is measured by finetuning the last layer to the original 1000-way classification.
# 2. Related Work
A number of papers have studied transfer learning in CNNs, including the various factors that affect pre-training and fine-tuning. For example, the question of whether pre-training should be terminated early to prevent over-fitting and what layers should be used for transfer learning was studied by [2, 44]. A thorough investigation of good architectural choices for transfer learning was conducted by [3], while [26] propose an approach to fine-tuning for new tasks without "forgetting" the old ones. In contrast to these works, we use a fixed fine-tuning procedure and instead focus on how the choice of pre-training data affects the quality of the learned features.

One central downside of supervised pre-training is that a large quantity of expensive manually-supervised training data is required. The possibility of using large amounts of unlabelled data for feature learning has therefore been very attractive. Numerous methods for learning features by optimizing some auxiliary criterion of the data itself have been proposed. The most well-known such criteria are image reconstruction [5, 36, 29, 27, 32, 20] (see [4] for a comprehensive overview) and feature slowness [43, 14]. Unfortunately, features learned using these methods turned out not to be competitive with those obtained from supervised ImageNet pre-training [31]. To try and force better feature generalization, more recent "self-supervised" methods use more difficult data prediction auxiliary tasks in an effort to make the CNNs "work harder". Attempted self-supervised tasks include predictions of ego-motion [1, 16], spatial context [8, 31, 28], temporal context [41], and even color [45, 23] and sound [30]. While features learned using these methods often come close to ImageNet performance, to date, none have been able to beat it.
Figure 3: An illustration of the bottom-up procedure used to construct different label sets using the WordNet tree. Each node of the tree represents a class and the leaf nodes are shown in red. Different label sets are iteratively constructed by clustering together all the leaf nodes with a common parent. In each iteration, only leaf nodes are clustered. This procedure results in a sequence of label sets for 1.2M images, where each consequent set contains labels coarser than the previous one. Because the WordNet tree is imbalanced, even after multiple iterations, label sets contain some classes that are present in the 1000-way ImageNet challenge.

A reasonable middle ground between the expensive, fully-supervised pre-training and free unsupervised pre-training is to use weak supervision. For example, [18] use the YFCC100M dataset of 100 million Flickr images labeled with noisy user tags as pre-training instead of ImageNet. But yet again, even though YFCC100M is almost two orders of magnitude larger than ImageNet, somewhat surprisingly, the resulting features do not appear to give any substantial boost over those pre-trained on ImageNet.

Overall, despite keen interest in this problem, alternative methods for learning general-purpose deep features have not managed to outperform ImageNet-supervised pre-training on transfer tasks.

The goal of this work is to try and understand what is the secret to ImageNet's continuing success.
# 3. Experimental Setup
The process of using supervised learning to initialize CNN parameters using the task of ImageNet classification is referred to as pre-training. The process of adapting a pre-trained CNN to continue training on a target dataset is referred to as finetuning. All of our experiments use the Caffe [17] implementation of a single network architecture proposed by Krizhevsky et al. [22]. We refer to this architecture as AlexNet.

We closely follow the experimental setup of Agrawal et al. [2] for evaluating the generalization of pre-trained features on three transfer tasks: PASCAL VOC 2007 object detection (PASCAL-DET), PASCAL VOC 2012 action recognition (PASCAL-ACT-CLS) and scene classification on the SUN dataset (SUN-CLS).

• For PASCAL-DET, we used the PASCAL VOC 2007 train/val for finetuning using the experimental setup and code provided by Faster-RCNN [34] and report performance on the test set. Finetuning on PASCAL-DET was performed by adapting the pre-trained convolution layers of AlexNet. The model was trained for 70K iterations using stochastic gradient descent (SGD), with an initial learning rate of 0.001 with a reduction by a factor of 10 at 40K iterations.
| Pre-trained Dataset | Original | 127 Classes | Random |
| --- | --- | --- | --- |
| PASCAL | 58.3 | 55.5 | 41.3 [21] |
| SUN | 52.2 | 48.7 | 35.7 [2] |

Table 1: The transfer performance of a network pre-trained using the 127 (coarse) classes obtained after top-down clustering of the WordNet tree is comparable to the transfer performance after fine-tuning on all 1000 ImageNet classes. This indicates that fine-grained recognition is not necessary for learning good transferable features.
• For PASCAL-ACT-CLS, we used PASCAL VOC 2012 train/val for finetuning and testing using the experimental setup and code provided by R*CNN [13]. The finetuning process for PASCAL-ACT-CLS mimics the procedure described for PASCAL-DET.

• For SUN-CLS we used the same train/val/test splits as used by [2]. Finetuning on SUN was performed by first replacing the FC-8 layer in the AlexNet model with a randomly initialized, fully connected layer with 397 output units. Finetuning was performed for 50K iterations using SGD with an initial learning rate of 0.001 which was reduced by a factor of 10 every 20K iterations.

Faster-RCNN and R*CNN are known to have variance across training runs; we therefore run them three times and report the mean ± standard deviation. On the other hand, [2] reports little variance between runs on SUN-CLS, so we report our result using a single run.

In some experiments we pre-train on ImageNet using a different number of images per class. The model with 1000 images/class uses the original ImageNet ILSVRC 2012 training set. Models with N images/class for N < 1000 are trained by drawing a random sample of N images from all images of that class made available as part of the ImageNet training set.
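Constructing these variants amounts to uniform subsampling within each class; a minimal sketch with an assumed dataset structure and seed:

```python
import random

def subsample_per_class(images_by_class, n_per_class, seed=0):
    """Draw a random sample of n_per_class images from every class's
    full set of training images."""
    rng = random.Random(seed)
    return {c: rng.sample(imgs, min(n_per_class, len(imgs)))
            for c, imgs in images_by_class.items()}

# e.g. the 50/125/250/500/1000 images-per-class pre-training sets:
# variants = {n: subsample_per_class(imagenet, n) for n in (50, 125, 250, 500, 1000)}
```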
# 4. How does the amount of pre-training data affect transfer performance?
For answering this question, we trained 5 different AlexNet models from scratch using 50, 125, 250, 500 and 1000 images per each of the 1000 ImageNet classes, following the procedure described in Section 3. The variation in performance with the amount of pre-training data when these models are finetuned for PASCAL-DET, PASCAL-ACT-CLS and SUN-CLS is shown in Figure 1. For PASCAL-DET, the mean average precision (mAP) for CNNs with 1000, 500 and 250 images/class is found to be 58.3, 57.0 and 54.6 respectively. A similar trend is observed for PASCAL-ACT-CLS and SUN-CLS. These results indicate that using half the amount of pre-training data leads to only a marginal reduction in performance on transfer tasks. It is important to note that the performance on the ImageNet classification task (the pre-training task) steadily increases with the amount of training data, whereas on transfer tasks, the performance increase with respect to additional pre-training data is significantly slower. This suggests that while adding additional examples to ImageNet classes will improve the ImageNet performance, it has diminishing returns for transfer task performance.
Figure 4: Does a CNN trained for discriminating between coarse classes learn a feature embedding capable of distinguishing between fine classes? We quantified this by measuring the induction accuracy, defined as follows: after training a feature embedding for a particular set of classes (set A), the induction accuracy is the nearest neighbor (top-1 and top-5) classification accuracy measured in the FC8 feature space on the subset of 1000 ImageNet classes not present in set A. The syntax on the x-axis, A Classes(B), indicates that the network was trained with A classes and the induction accuracy was measured on B classes. The baseline accuracy is the accuracy on B classes when the CNN was trained for all 1000 classes. The margin between the baseline and the induction accuracy indicates a drop in the network's ability to distinguish fine classes when trained on coarse classes. The results show that features learnt by pre-training on just 127 classes still lead to fairly good induction.
# 5. How does the taxonomy of the pre-training task affect transfer performance?
In the previous section we investigated how varying the number of pre-training images per class affects the performance on transfer tasks. Here we investigate the flip side: keeping the amount of data constant while changing the nomenclature of training labels.
# 5.1. The effect of number of pre-training classes on transfer performance
The 1000 classes of the ImageNet challenge [35] are derived from leaves of the WordNet tree [11]. Using this tree, it is possible to generate different class taxonomies while keeping the total number of images constant. One can generate taxonomies in two ways: (1) bottom-up clustering, wherein the leaf nodes belonging to a common parent are iteratively clustered together (see Figure 3), or (2) by fixing the distance of the nodes from the root node (i.e. top-down clustering). Using bottom-up clustering, 18 possible taxonomies can be generated. Among these, we chose 5 sets of labels constituting 918, 753, 486, 79 and 9 classes respectively. Using top-down clustering only 3 label sets of 127, 10 and 2 classes can be generated, and we used the one with 127 classes. For studying the effect of number of pre-training classes on transfer performance, we trained separate AlexNet CNNs from scratch using these label sets.

Figure 2 shows the effect of the number of pre-training classes obtained using bottom-up clustering of the WordNet tree on transfer performance. We also include the performance of these different networks on the ImageNet classification task itself after finetuning only the last layer to distinguish between all the 1000 classes. The results show that the increase in performance on transfer tasks is significantly slower with increase in number of classes as compared to performance on ImageNet itself. Using only 486 classes results in a performance drop of 1.7 mAP for PASCAL-DET, 0.8% accuracy for SUN-CLS and a boost of 0.6 mAP for PASCAL-ACT-CLS. Table 1 shows the transfer performance after pre-training with 127 classes obtained from top-down clustering. The results from this table and the figure indicate that only diminishing returns in transfer performance are observed when more than 127 classes are used. Our results also indicate that making the ImageNet classes finer will not help improve transfer performance.
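A simplified sketch of the bottom-up relabeling is given below; it merges every current label into its WordNet parent at each step, which approximates the paper's leaf-only clustering, and the toy tree is invented purely for illustration.

```python
def bottom_up_label_sets(parent, classes, n_steps):
    """Generate successively coarser label sets from a WordNet-style tree.
    parent: dict node -> parent node; classes: the fine-grained classes.
    Images keep their split; only their labels get coarser (cf. Figure 3)."""
    label_sets, current = [], {c: c for c in classes}
    for _ in range(n_steps):
        current = {c: parent.get(lbl, lbl) for c, lbl in current.items()}
        label_sets.append(dict(current))    # original class -> coarse class
    return label_sets

# Toy tree: two dog breeds and one cat under "mammal".
parent = {"husky": "dog", "poodle": "dog", "tabby": "cat",
          "dog": "mammal", "cat": "mammal"}
sets = bottom_up_label_sets(parent, ["husky", "poodle", "tabby"], 2)
# step 1 merges breeds into species; step 2 maps everything to "mammal"
```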
It can be argued that the PASCAL task requires discrimination between only 20 classes and therefore pre-training with only 127 classes should not lead to substantial reduction in performance. However, the trend also holds true for SUN-CLS, which requires discrimination between 397 classes. These two results taken together suggest that although training with a large number of classes is beneficial, diminishing returns are observed beyond using 127 distinct classes for pre-training.
Figure 5: Can feature embeddings obtained by training on coarse classes distinguish fine classes they were never trained on? E.g. by training on monkeys, can the network pick out macaques? Here we look at the FC7 nearest neighbors (NN) of two randomly sampled images: a macaque (left column) and a giant schnauzer (right column), with each row showing feature embeddings trained with different numbers of classes (from fine to coarse). The row(s) above the dotted line indicate that the image class (i.e. macaque/giant schnauzer) was one of the training classes, whereas in rows below the image class was not present in the training set. Images in green indicate that the NN image belongs to the correct fine class (i.e. either macaque or giant schnauzer); orange indicates the correct coarse class (based on the WordNet hierarchy) but incorrect fine class; red indicates incorrect coarse class. All green images below the dotted line indicate instances of correct fine-grain nearest neighbor retrieval for features that were never trained on that class.
Furthermore, for PASCAL-ACT-CLS and SUN-CLS, finetuning on CNNs pre-trained with class set sizes of 918 and 753 actually results in better performance than using all 1000 classes. This may indicate that having too many classes for pre-training works against learning good generalizable features. Hence, when generating a dataset, one should be attentive to the nomenclature of the classes.
# 5.2. Is fine-grain recognition necessary for learning transferable features?

The ImageNet challenge requires a classifier to distinguish between 1000 classes, some of which are very fine-grained, such as different breeds of dogs and cats. Indeed, most humans do not perform well on ImageNet unless specifically trained [35], and yet are easily able to perform most everyday visual tasks. This raises the question: is fine-grained recognition necessary for CNN models to learn good feature representations, or is coarse-grained object recognition (e.g. just distinguishing cats from dogs) sufficient?

Note that the label set of 127 classes from the previous experiment contains 65 classes that are present in the original set of 1000 classes and the remainder are inner nodes of the WordNet tree. However, all these 127 classes (see supplementary materials) represent coarse semantic concepts. As discussed earlier, pre-training with these classes results in only a small drop in transfer performance (see Table 1). This suggests that performing fine-grained recognition is only marginally helpful and does not appear to be critical for learning good transferable features.

# 5.3. Does training with coarse classes induce features relevant for fine-grained recognition?

Earlier, we have shown that the features learned on the 127 coarse classes perform almost as well on our transfer tasks as the full set of 1000 ImageNet classes. Here we will probe this further by asking a different question: is the feature embedding induced by the coarse class classification task capable of separating the fine labels of ImageNet (which it never saw at training)?

To investigate this, we used top-1 and top-5 nearest neighbors in the FC7 feature space to measure the accuracy of identifying fine-grained ImageNet classes after training only on a set of coarse classes. We call this measure "induction accuracy". As a qualitative example, Figure 5 shows nearest neighbors for a macaque (left) and a schnauzer (right) for feature embeddings trained on ImageNet but with different numbers of classes. All green-border images below the dotted line indicate instances of correct fine-grain nearest neighbor retrieval for features that were never trained on that class.

Quantitative results are shown in Figure 4. The results show that when 127 classes are used, fine-grained recognition k-NN performance is only about 15% lower compared to training directly for these fine-grained classes (i.e. baseline accuracy). This is rather surprising and suggests that CNNs implicitly discover features capable of distinguishing between finer classes while attempting to distinguish between relatively coarse classes.
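Measuring induction accuracy reduces to brute-force top-k nearest-neighbour classification in the learned feature space; a sketch under the assumption that features for the held-out classes have already been extracted (Euclidean distance, NumPy):

```python
import numpy as np

def induction_accuracy(train_feats, train_labels, test_feats, test_labels, k=5):
    """Top-k NN accuracy: a held-out image counts as correct if any of its
    k nearest training images carries the same fine-grained label."""
    train_labels = np.asarray(train_labels)
    correct = 0
    for feat, label in zip(test_feats, test_labels):
        dists = np.linalg.norm(train_feats - feat, axis=1)
        if label in train_labels[np.argsort(dists)[:k]]:
            correct += 1
    return correct / len(test_labels)
```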
[Figure 6 data] Largest differences: mammal (17%), snake (13%), arthropod (12%), turtle (10%), container (8%), garment (8%), structure (7%), fruit (7%), bird (7%). Smallest differences: tool (3%), covering (3%), fabric (2%), fungus (2%), game equipment (2%), stick (1%), mollusk (1%), boat (1%), home appliance (1%).
Figure 6: Does the network learn to discriminate coarse semantic concepts by training only on finer sub-classes? The degree to which the concept of a coarse class is learnt was quantified by measuring the difference (in percentage points) between the accuracy of classifying the coarse class and the average accuracy of individually classifying all the sub-classes of this coarse class. Here, the top and bottom classes sorted by this metric are shown using the label set of size 127 with classes with at least 5 subclasses. We observe that classes whose subclasses are visually consistent (e.g. mammal) are better represented than those that are visually dissimilar (e.g. home appliance).
# 5.4. Does training with fine-grained classes induce features relevant for coarse recognition?

Investigating whether the network learns features relevant for fine-grained recognition by training on coarse classes raises the reverse question: does training with fine-grained classes induce features relevant for coarse recognition? If this is indeed the case, then we would expect that when a CNN makes an error, it is more likely to confuse a sub-class (i.e. error in fine-grained recognition) with other sub-classes of the same coarse class. This effect can be measured by computing the difference between the accuracy of classifying the coarse class and the average accuracy of individually classifying all the sub-classes of this coarse class (please see supplementary materials for details).
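The metric can be sketched as below, assuming integer class ids, a fine-to-coarse mapping that covers every prediction, and that each sub-class appears in the test set; this is an illustrative reconstruction, not the authors' evaluation code.

```python
import numpy as np

def coarse_gap(pred, true, fine_to_coarse):
    """For each coarse class: coarse-classification accuracy minus the
    average fine-grained accuracy over its sub-classes (percentage points)."""
    pred, true = np.asarray(pred), np.asarray(true)
    pred_coarse = np.array([fine_to_coarse[p] for p in pred])
    gaps = {}
    for coarse in set(fine_to_coarse.values()):
        subs = [f for f, c in fine_to_coarse.items() if c == coarse]
        mask = np.isin(true, subs)
        coarse_acc = np.mean(pred_coarse[mask] == coarse)
        fine_acc = np.mean([np.mean(pred[true == f] == f) for f in subs])
        gaps[coarse] = 100.0 * (coarse_acc - fine_acc)
    return gaps
```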
Figure 6 shows the results. We find that coarse semantic classes such as mammal, fruit, bird, etc. that contain visually similar sub-classes show the hypothesized effect, whereas classes such as tool and home appliance that contain visually dissimilar subclasses do not exhibit this effect. These results indicate that subclasses that share a common visual structure allow the CNN to learn features that are more generalizable. This might suggest a way to improve feature generalization by making class labels respect visual commonality rather than simply WordNet semantics.
# 5.5. More Classes or More Examples Per Class?
Results in previous sections show that it is possible to achieve good performance on transfer tasks using significantly less pre-training data and fewer pre-training classes. However it is unclear what is more important: the number of classes or the number of examples per class. One extreme is to only have 1 class and all 1.2M images from this class, and the other extreme is to have 1.2M classes and 1 image per class. It is clear that both ways of splitting the data will result in poor generalization, so the answer must lie somewhere in-between.
| Dataset | Data size | More examples/class | More classes |
| --- | --- | --- | --- |
| PASCAL | 500K | 57.1 | 57.0 |
| PASCAL | 250K | 54.8 | 52.5 |
| PASCAL | 125K | 50.6 | 49.8 |
| SUN | 500K | 50.6 | 49.7 |
| SUN | 250K | 45.7 | 46.7 |
| SUN | 125K | 42.2 | 42.3 |
Table 2: For a fixed budget of pre-training data, is it better to have more examples per class and fewer classes or vice-versa? The row "more examples/class" was pre-trained with subsets of ImageNet containing 500, 250 and 125 classes with 1000 examples each. The row "more classes" was pre-trained with 1000 classes, but 500, 250 and 125 examples each. Interestingly, the transfer performance on both PASCAL and SUN appears to be broadly similar under both scenarios.
| Pre-trained Dataset | PASCAL |
| --- | --- |
| ImageNet | 58.3 ± 0.3 |
| PASCAL-removed ImageNet | 57.8 ± 0.1 |
| Places | 53.8 ± 0.1 |
Table 3: PASCAL-DET results after pre-training on the entire ImageNet, PASCAL-removed-ImageNet and Places datasets. Removing PASCAL classes from ImageNet leads to an insignificant reduction in performance.
To investigate this, we split the same amount of pre-training data in two ways: (1) more classes with fewer images per class, and (2) fewer classes with more images per class. We use datasets of size 500K, 250K and 125K images for this experiment. For 500K images, we considered two ways of constructing the training set: (1) 1000 classes with 500 images/class, and (2) 500 classes with 1000 images/class. Similar splits were made for data budgets of 250K and 125K images. The 500, 250 and 125 classes for these experiments were drawn from a uniform distribution among the 1000 ImageNet classes. Similarly, the image subsets containing 500, 250 and 125 images were drawn from a uniform distribution among the images that belong to the class.
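Both regimes can be drawn with the same uniform-sampling helper; a sketch with assumed data structures:

```python
import random

def fixed_budget_subset(images_by_class, budget, n_classes, seed=0):
    """Sample n_classes classes uniformly, then budget // n_classes images
    per class, so the total number of images is held fixed."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(images_by_class), n_classes)
    per_class = budget // n_classes
    return {c: rng.sample(images_by_class[c], per_class) for c in chosen}

# A 500K budget split two ways:
# more_images  = fixed_budget_subset(imagenet, 500_000, 500)    # 1000 imgs/class
# more_classes = fixed_budget_subset(imagenet, 500_000, 1000)   # 500 imgs/class
```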
The results presented in Table 2 show that having more images per class with a fewer number of classes results in features that perform very slightly better on PASCAL-DET, whereas for SUN-CLS, the performance is comparable across the two settings.
# 5.6. How important is it to pre-train on classes that are also present in a target task?
It is natural to expect that higher correlation between pre-training and transfer tasks leads to better performance on a transfer task. This indeed has been shown to be true in [44]. One possible source of correlation between pre-training and transfer tasks are classes common to both tasks.
Figure 7: An illustration of the procedure used to split the ImageNet dataset. Splits were constructed in 2 different ways. The random split selects classes at random from the 1000 ImageNet classes. The minimal split is made in a manner that ensures no class in one split shares a common ancestor, up to depth four of the WordNet tree, with any class in the other split. The collage in Figure 8 visualizes the random and minimal splits.
In order to investigate how strong the influence of these common classes is, we ran an experiment where we removed all the classes from ImageNet that are contained in the PASCAL challenge. PASCAL has 20 classes, some of which map to more than one ImageNet class and thus, after applying this exclusion criterion, we are only left with 771 ImageNet classes.
Table 3 compares the results on PASCAL-DET when the PASCAL-removed-ImageNet is used for pre-training against the original ImageNet and a baseline of pre-training on the Places [46] dataset. The PASCAL-removed-ImageNet achieves mAP of 57.8 (compared to 58.3 with the full ImageNet), indicating that training on ImageNet classes that are not present in PASCAL is sufficient to learn features that are also good for PASCAL classes.
# 6. Does data augmentation from non-target classes always improve performance?
The analysis using PASCAL-removed ImageNet indicates that pre-training on non-PASCAL classes aids performance on PASCAL. This raises the question: is it always better to add pre-training data from additional classes that are not part of the target task? To investigate and test this hypothesis, we chose two different methods of splitting the ImageNet classes. The first is a random split, in which the 1000 ImageNet classes are split randomly; the second is a minimal split, in which the classes are deliberately split to ensure that similar classes are not in the same split (Figure 7). In order to determine if additional data helps performance for classes in split A, we pre-trained two CNNs: one for classifying all classes in split A and the other for classifying all classes in both split A and B (i.e. the full dataset). We then finetuned the last layer of the network trained on the full dataset on split A only. If it is the case that additional data from split B helps performance on split A, then the CNN pre-trained with the full dataset should perform better than the CNN pre-trained only on split A.
Figure 8: Visualization of the random and minimal splits used for testing whether adding more pre-training data is always useful. The two minimal sets contain disparate sets of objects. The minimal splits A and B consist mostly of inanimate objects and living things respectively. On the other hand, random splits contain semantically similar objects.
Using the random split, Figure 9 shows that the results of this experiment confirm the intuition that additional data is indeed useful for both splits. However, under a random class split within ImageNet, we are almost certain to have extremely similar classes (e.g. two different breeds of dogs) ending up on the different sides of the split. So, what we have shown so far is that we can improve performance on, say, husky classification by also training on poodles. Hence, the motivation for the minimal split: does adding arbitrary, unrelated classes, such as fire trucks, help dog classification?
The classes in minimal split A do not share any common ancestor with minimal split B up until the nodes at depth 4 of the WordNet hierarchy (Figure 7). This ensures that any class in split A is sufficiently disjoint from split B. Split A has 522 classes and split B has 478 classes (N.B.: for consistency, random splits A and B also had the same number of classes). In order to intuitively understand the difference between min splits A and B, we have visualized a random sample of images in these splits in Figure 8. Min split A consists of mostly static images and min split B consists of living objects.
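The depth-four constraint can be checked directly on the tree; a small sketch with an assumed parent map (root at depth 0):

```python
def ancestor_at_depth(parent, node, depth):
    """Walk up from a class to the root and return its ancestor at the
    given depth from the root."""
    chain = [node]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    chain.reverse()                        # root ... node
    return chain[min(depth, len(chain) - 1)]

def is_minimal_split(parent, split_a, split_b, depth=4):
    """True if no class in A shares a depth-4 ancestor with a class in B."""
    anc = lambda split: {ancestor_at_depth(parent, c, depth) for c in split}
    return anc(split_a).isdisjoint(anc(split_b))
```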
Contrary to the earlier observation, Figure 9 shows that both min splits A and B perform better than the full dataset when we finetune only the last layer. This result is quite surprising because it shows that by finetuning the last layer of a network pre-trained on the full dataset, it is not possible to match the performance of a network trained on just one split.
Figure 9: Does adding arbitrary classes to pre-training data always improve transfer performance? This question was tested by training two CNNs, one for classifying classes in split A and the other for classifying classes in both splits A and B. We then finetuned the CNN trained on both splits on split A. If it is the case that adding more pre-training data helps, then the performance of the CNN pre-trained on both splits (black) should be higher than a CNN pre-trained on a single split (orange). For random splits, this indeed is the case, whereas for minimal splits adding more pre-training data hurts performance. This suggests that additional pre-training data is useful only if it is correlated to the target task.
We have observed that when training all the layers for an extensive amount of time (420K iterations), the accuracy of min split A does benefit from pre-training on split B, but that of min split B does not. One explanation could be that images in split B (e.g. person) are contained in images in split A (e.g. buildings, clothing) but not vice versa.
While it might be possible to recover performance with very clever adjustments of learning rates, current results suggest that training with data from unrelated classes may push the network into a local minimum from which it might be hard to find a better optimum than the one obtained by training the network from scratch.
# 7. Discussion
In this work we analyzed factors that affect the quality of ImageNet pre-trained features for transfer learning. Our goal was not to consider alternative neural network architectures, but rather to establish facts about which aspects of the training data are important for feature learning.

The current consensus in the field is that the key to learning highly generalizable deep features is the large amounts of training data and the large number of classes.

To quote the influential R-CNN paper: "..success resulted from training a large CNN on 1.2 million labeled images..." [12]. After the publication of R-CNN, most researchers assumed that the full ImageNet is necessary to pre-train good general-purpose features. Our work quantitatively questions this assumption, and yields some quite surprising results. For example, we have found that a significant reduction in the number of classes or the number of images used in pre-training has only a modest effect on transfer task performance.
While we do not have an explanation as to the cause of this resilience, we list some speculative possibilities that should inform further study of this topic:
• In our experiments, we investigated only one CNN architecture, AlexNet. While ImageNet-trained AlexNet features are currently the most popular starting point for fine-tuning on transfer tasks, there exist deeper architectures such as VGG [39], ResNet [15], and GoogLeNet [40]. It would be interesting to see if our findings hold up on deeper networks. If not, it might suggest that AlexNet capacity is less than previously thought.

• Our results might indicate that researchers have been overestimating the amount of data required for learning good general CNN features. If that is the case, it might suggest that CNN training is not as data-hungry as previously thought. It would also suggest that beating ImageNet-trained features with models trained on a much bigger data corpus will be much harder than once thought.

• Finally, it might be that the currently popular target tasks, such as PASCAL and SUN, are too similar to the original ImageNet task to really test the generalization of the learned features. Alternatively, perhaps a more appropriate approach to test the generalization is with much less fine-tuning (e.g. one-shot-learning) or no fine-tuning at all (e.g. nearest neighbour in the learned feature space).
In conclusion, while the answer to the titular question "What makes ImageNet good for transfer learning?" still lacks a definitive answer, our results have shown that a lot of "folk wisdom" on why ImageNet works well is not accurate. We hope that this paper will pique our colleagues' curiosity and facilitate further research on this fascinating topic.
# 8. Acknowledgements
This work was supported in part by ONR MURI N00014-14-1-0671. We gratefully acknowledge NVIDIA corporation for the donation of K40 GPUs and access to the NVIDIA PSG cluster for this research. We would like to acknowledge the support from the Berkeley Vision and Learning Center (BVLC) and Berkeley DeepDrive (BDD). Minyoung Huh was partially supported by the Rose Hill Foundation.
# References
[1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In Proceedings of the IEEE International Conference on Computer Vision, pages 37-45, 2015.

[2] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In Computer Vision - ECCV 2014, pages 329-344. Springer, 2014.

[3] H. Azizpour, A. Razavian, J. Sullivan, A. Maki, and S. Carlsson. From generic to specific deep representations for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 36-45, 2015.

[4] Y. Bengio, A. C. Courville, and P. Vincent. Unsupervised feature learning and deep learning: A review and new perspectives. CoRR, abs/1206.5538, 1, 2012.

[5] H. Bourlard and Y. Kamp. Auto-association by multilayer perceptrons and singular value decomposition. Biological cybernetics, 59(4-5):291-294, 1988.

[6] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback. arXiv preprint arXiv:1507.06550, 2015.

[7] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. arXiv preprint arXiv:1512.04412, 2015.

[8] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1422-1430, 2015.

[9] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625-2634, 2015.

[10] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013.

[11] C. Fellbaum. WordNet: An Electronic Lexical Database. Bradford Books, 1998.

[12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580-587. IEEE, 2014.

[13] G. Gkioxari, R. Girshick, and J. Malik. Contextual action recognition with R*CNN. In ICCV, 2015.

[14] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised feature learning from temporal data. arXiv preprint arXiv:1504.02518, 2015.

[15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
9
[16] D. Jayaraman and K. Grauman. Learning image representa- tions tied to ego-motion. In Proceedings of the IEEE Inter- national Conference on Computer Vision, pages 1413â1421, 2015.
[17] Y. Jia. Caffe: An open source convolutional archi- http://caffe. tecture for fast feature embedding. berkeleyvision.org/, 2013.
[18] A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache. Learning visual features from large weakly supervised data. In ECCV, 2016.
[19] A. Karpathy and L. Fei-Fei. Deep visual-semantic align- In Proceedings ments for generating image descriptions. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128â3137, 2015.
[20] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[21] P. Kr¨ahenb¨uhl, C. Doersch, J. Donahue, and T. Darrell. Data- dependent initializations of convolutional neural networks. In ICLR, 2016.
Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[23] G. Larsson, M. Maire, and G. Shakhnarovich. Learning rep- resentations for automatic colorization. In ECCV, 2016.
[24] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436â444, 2015.
[25] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural compu- tation, 1(4):541â551, 1989.
[26] Z. Li and D. Hoiem. Learning without forgetting. In ECCV, 2016.
[27] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In Proceedings of the 26th An- nual International Conference on Machine Learning, pages 737â744. ACM, 2009.
[28] M. Noroozi and F. Paolo. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
[29] B. A. Olshausen et al. Emergence of simple-cell receptive ï¬eld properties by learning a sparse code for natural images. Nature, 381(6583):607â609, 1996.
[30] A. Owens, P. Isola, J. McDermott, A. Torralba, E. Adelson, and F. William. Visually indicated sounds. In CVPR, 2016.
[31] D. Pathak, P. Kr¨ahenb¨uhl, J. Donahue, T. Darrell, and A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
[32] M. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Un- supervised learning of invariant feature hierarchies with ap- In Computer Vision and plications to object recognition. Pattern Recognition, 2007. CVPRâ07. IEEE Conference on, pages 1â8. IEEE, 2007.
[33] A. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. Cnn features off-the-shelf: an astounding baseline for recogni- tion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 806â813, 2014.
[34] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91â99, 2015.
[35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.
[36] R. Salakhutdinov and G. E. Hinton. Deep boltzmann ma- chines. In International Conference on Artiï¬cial Intelligence and Statistics, pages 448â455, 2009.
[37] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.
[38] K. Simonyan and A. Zisserman. Two-stream convolutional In Advances networks for action recognition in videos. in Neural Information Processing Systems, pages 568â576, 2014.
Very deep convolu- tional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[40] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[41] X. Wang and A. Gupta. Unsupervised learning of visual rep- resentations using videos. In Proceedings of the IEEE Inter- national Conference on Computer Vision, pages 2794â2802, 2015.
[42] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. Deepï¬ow: Large displacement optical ï¬ow with deep match- ing. In Proceedings of the IEEE International Conference on Computer Vision, pages 1385â1392, 2013.
[43] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Un- supervised learning of invariances. Neural computation, 14(4):715â770, 2002.
[44] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How trans- ferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320â3328, 2014.
[45] R. Zhang, P. Isola, and A. Efros. Colorful image colorization. In ECCV, 2016.
[46] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. NIPS, 2014.
# Under review as a conference paper at ICLR 2017
# MACHINE COMPREHENSION USING MATCH-LSTM AND ANSWER POINTER
Shuohang Wang School of Information Systems Singapore Management University shwang.2014@phdis.smu.edu.sg
Jing Jiang School of Information Systems Singapore Management University jingjiang@smu.edu.sg
# ABSTRACT
Machine comprehension of text is an important problem in natural language pro- cessing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for eval- uating machine comprehension algorithms, partly because compared with previ- ous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architec- ture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features.
# 1 INTRODUCTION
Machine comprehension of text is one of the ultimate goals of natural language processing. While the ability of a machine to understand text can be assessed in many different ways, in recent years, several benchmark datasets have been created to focus on answering questions as a way to evaluate machine comprehension (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2016; Weston et al., 2016; Rajpurkar et al., 2016). In this setup, typically the machine is ï¬rst presented with a piece of text such as a news article or a story. The machine is then expected to answer one or multiple questions related to the text.
In most of the benchmark datasets, a question can be treated as a multiple choice question, whose correct answer is to be chosen from a set of provided candidate answers (Richardson et al., 2013; Hill et al., 2016). Presumably, questions with more given candidate answers are more challenging. The Stanford Question Answering Dataset (SQuAD) introduced recently by Rajpurkar et al. (2016) contains such more challenging questions whose correct answers can be any sequence of tokens from the given text. Moreover, unlike some other datasets whose questions and answers were created automatically in Cloze style (Hermann et al., 2015; Hill et al., 2016), the questions and answers in SQuAD were created by humans through crowdsourcing, which makes the dataset more realistic. Given these advantages of the SQuAD dataset, in this paper, we focus on this new dataset to study machine comprehension of text. A sample piece of text and three of its associated questions are shown in Table 1.
Traditional solutions to this kind of question answering tasks rely on NLP pipelines that involve mul- tiple steps of linguistic analyses and feature engineering, including syntactic parsing, named entity recognition, question classiï¬cation, semantic parsing, etc. Recently, with the advances of applying neural network models in NLP, there has been much interest in building end-to-end neural architec- tures for various NLP tasks, including several pieces of work on machine comprehension (Hermann et al., 2015; Hill et al., 2016; Yin et al., 2016; Kadlec et al., 2016; Cui et al., 2016). However, given the properties of previous machine comprehension datasets, existing end-to-end neural architectures for the task either rely on the candidate answers (Hill et al., 2016; Yin et al., 2016) or assume that the
In 1870, Tesla moved to Karlovac, to attend school at the Higher Real Gymnasium, where he was profoundly influenced by a math teacher Martin Sekulić. The classes were held in German, as it was a school within the Austro-Hungarian Military Frontier. Tesla was able to perform integral calculus in his head, which prompted his teachers to believe that he was cheating. He finished a four-year term in three years, graduating in 1873.

1. In what language were the classes given? (Answer: German)
2. Who was Tesla's main influence in Karlovac? (Answer: Martin Sekulić)
3. Why did Tesla go to Karlovac? (Answer: attend school at the Higher Real Gymnasium)
Table 1: A paragraph from Wikipedia and three associated questions together with their answers, taken from the SQuAD dataset. The tokens in bold in the paragraph are our predicted answers while the texts next to the questions are the ground truth answers.
answer is a single token (Hermann et al., 2015; Kadlec et al., 2016; Cui et al., 2016), which make these methods unsuitable for the SQuAD dataset. In this paper, we propose a new end-to-end neural architecture to address the machine comprehension problem as deï¬ned in the SQuAD dataset.
Speciï¬cally, observing that in the SQuAD dataset many questions are paraphrases of sentences from the original text, we adopt a match-LSTM model that we developed earlier for textual entail- ment (Wang & Jiang, 2016). We further adopt the Pointer Net (Ptr-Net) model developed by Vinyals et al. (2015), which enables the predictions of tokens from the input sequence only rather than from a larger ï¬xed vocabulary and thus allows us to generate answers that consist of multiple tokens from the original text. We propose two ways to apply the Ptr-Net model for our task: a sequence model and a boundary model. We also further extend the boundary model with a search mechanism. Ex- periments on the SQuAD dataset show that our two models both outperform the best performance reported by Rajpurkar et al. (2016). Moreover, using an ensemble of several of our models, we can achieve very competitive performance on SQuAD.
Our contributions can be summarized as follows: (1) We propose two new end-to-end neural network models for machine comprehension, which combine match-LSTM and Ptr-Net to handle the special properties of the SQuAD dataset. (2) We have achieved the performance of an exact match score of 67.9% and an F1 score of 77.0% on the unseen test dataset, which is much better than the feature-engineered solution (Rajpurkar et al., 2016). Our performance is also close to the state of the art on SQuAD, which is 71.6% in terms of exact match and 80.4% in terms of F1 from Salesforce Research. (3) Our further analyses of the models reveal some useful insights for further improving the method. Besides, we have also made our code available online.1
# 2 METHOD
In this section, we ï¬rst brieï¬y review match-LSTM and Pointer Net. These two pieces of existing work lay the foundation of our method. We then present our end-to-end neural architecture for machine comprehension.
2.1 MATCH-LSTM
In a recent work on learning natural language inference, we proposed a match-LSTM model for predicting textual entailment (Wang & Jiang, 2016). In textual entailment, two sentences are given where one is a premise and the other is a hypothesis. To predict whether the premise entails the hypothesis, the match-LSTM model goes through the tokens of the hypothesis sequentially. At each position of the hypothesis, attention mechanism is used to obtain a weighted vector representation of the premise. This weighted premise is then to be combined with a vector representation of the current token of the hypothesis and fed into an LSTM, which we call the match-LSTM. The match- LSTM essentially sequentially aggregates the matching of the attention-weighted premise to each token of the hypothesis and uses the aggregated matching result to make a ï¬nal prediction.
# 1 https://github.com/shuohangwang/SeqMatchSeq
Figure 1: An overview of our two models. Both models consist of an LSTM preprocessing layer, a match-LSTM layer and an Answer Pointer layer. For each match-LSTM in a particular direction, the attention-weighted question representation $H^q \alpha_i^\top$ is computed using the $\alpha$ in the corresponding direction, as described in either Eqn. (2) or Eqn. (5). Panel (a) shows the sequence model and panel (b) the boundary model.
2.2 POINTER NET
Vinyals et al. (2015) proposed a Pointer Network (Ptr-Net) model to solve a special kind of problems where we want to generate an output sequence whose tokens must come from the input sequence. Instead of picking an output token from a ï¬xed vocabulary, Ptr-Net uses attention mechanism as a pointer to select a position from the input sequence as an output symbol. The pointer mechanism has inspired some recent work on language processing (Gu et al., 2016; Kadlec et al., 2016). Here we adopt Ptr-Net in order to construct answers using tokens from the input text.
# 2.3 OUR METHOD
Formally, the problem we are trying to solve can be formulated as follows. We are given a piece of text, which we refer to as a passage, and a question related to the passage. The passage is represented by matrix $P \in \mathbb{R}^{d \times P}$, where $P$ is the length (number of tokens) of the passage and $d$ is the dimensionality of word embeddings. Similarly, the question is represented by matrix $Q \in \mathbb{R}^{d \times Q}$, where $Q$ is the length of the question. Our goal is to identify a subsequence from the passage as the answer to the question.
As pointed out earlier, since the output tokens are from the input, we would like to adopt the Pointer Net for this problem. A straightforward way of applying Ptr-Net here is to treat an answer as a sequence of tokens from the input passage but ignore the fact that these tokens are consecutive in the original passage, because Ptr-Net does not make the consecutivity assumption. Speciï¬cally, we represent the answer as a sequence of integers a = (a1, a2, . . .), where each ai is an integer between 1 and P , indicating a certain position in the passage.
Alternatively, if we want to ensure consecutivity, that is, if we want to ensure that we indeed select a subsequence from the passage as an answer, we can use the Ptr-Net to predict only the start and the end of an answer. In this case, the Ptr-Net only needs to select two tokens from the input passage, and all the tokens between these two tokens in the passage are treated as the answer. Specifically, we can represent the answer to be predicted as two integers $a = (a_s, a_e)$, where $a_s$ and $a_e$ are integers between 1 and $P$.
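To make the two answer representations concrete, here is a minimal Python sketch. The passage and positions are invented toy values, and the 1-based indexing mirrors the text above:

```python
# A toy illustration only: the passage and positions are hypothetical.
passage_tokens = ["Tesla", "moved", "to", "Karlovac", "in", "1870"]

def decode_sequence(a, passage):
    """Sequence model: a is a list of (possibly non-consecutive) positions."""
    return [passage[i - 1] for i in a]              # 1-based -> 0-based

def decode_boundary(a_s, a_e, passage):
    """Boundary model: all tokens from a_s to a_e inclusive form the answer."""
    return passage[a_s - 1 : a_e]

print(decode_sequence([1, 4], passage_tokens))      # ['Tesla', 'Karlovac']
print(decode_boundary(1, 4, passage_tokens))        # ['Tesla', 'moved', 'to', 'Karlovac']
```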
We refer to the first setting above as a sequence model and the second setting above as a boundary model. For either model, we assume that a set of training examples in the form of triplets $\{(P_n, Q_n, a_n)\}_{n=1}^{N}$ is given.
An overview of the two neural network models is shown in Figure 1. Both models consist of three layers: (1) An LSTM preprocessing layer that preprocesses the passage and the question using LSTMs. (2) A match-LSTM layer that tries to match the passage against the question. (3) An Answer Pointer (Ans-Ptr) layer that uses Ptr-Net to select a set of tokens from the passage as the answer. The difference between the two models only lies in the third layer.
# LSTM Preprocessing Layer
The purpose for the LSTM preprocessing layer is to incorporate contextual information into the representation of each token in the passage and the question. We use a standard one-directional LSTM (Hochreiter & Schmidhuber, 1997) 2 to process the passage and the question separately, as shown below:
$$H^p = \overrightarrow{\mathrm{LSTM}}(P), \qquad H^q = \overrightarrow{\mathrm{LSTM}}(Q). \qquad (1)$$

The resulting matrices $H^p \in \mathbb{R}^{l \times P}$ and $H^q \in \mathbb{R}^{l \times Q}$ are hidden representations of the passage and the question, where $l$ is the dimensionality of the hidden vectors. In other words, the $i$th column vector $h^p_i$ (or $h^q_i$) in $H^p$ (or $H^q$) represents the $i$th token in the passage (or the question) together with some contextual information from the left.
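As an illustration, the preprocessing layer could be sketched in PyTorch as below. Whether the passage and question LSTMs share parameters is left open by the text; we use two separate ones here, with illustrative sizes:

```python
# A hedged sketch of the preprocessing layer; d is the embedding size, l the
# hidden size, and all dimensions are illustrative (batching set to 1).
import torch
import torch.nn as nn

d, l, P, Q = 300, 150, 30, 10
lstm_p = nn.LSTM(input_size=d, hidden_size=l)   # processes the passage
lstm_q = nn.LSTM(input_size=d, hidden_size=l)   # processes the question

passage = torch.randn(P, 1, d)                  # (seq_len, batch=1, d)
question = torch.randn(Q, 1, d)
H_p, _ = lstm_p(passage)                        # (P, 1, l), one row per token
H_q, _ = lstm_q(question)                       # (Q, 1, l)
```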
# Match-LSTM Layer
We apply the match-LSTM model (Wang & Jiang, 2016) proposed for textual entailment to our machine comprehension problem by treating the question as a premise and the passage as a hypothesis. The match-LSTM sequentially goes through the passage. At position $i$ of the passage, it first uses the standard word-by-word attention mechanism to obtain attention weight vector $\overrightarrow{\alpha}_i \in \mathbb{R}^Q$ as follows:

$$\overrightarrow{G}_i = \tanh\big(W^q H^q + (W^p h^p_i + W^r \overrightarrow{h}^r_{i-1} + b^p) \otimes e_Q\big), \qquad \overrightarrow{\alpha}_i = \mathrm{softmax}\big(w^\top \overrightarrow{G}_i + b \otimes e_Q\big), \qquad (2)$$

where $W^q, W^p, W^r \in \mathbb{R}^{l \times l}$, $b^p, w \in \mathbb{R}^l$ and $b \in \mathbb{R}$ are parameters to be learned, $\overrightarrow{h}^r_{i-1} \in \mathbb{R}^l$ is the hidden vector of the one-directional match-LSTM (to be explained below) at position $i-1$, and the outer product $(\cdot \otimes e_Q)$ produces a matrix or row vector by repeating the vector or scalar on the left for $Q$ times. Essentially, the resulting attention weight $\overrightarrow{\alpha}_{i,j}$ above indicates the degree of matching between the $i$th token in the passage with the $j$th token in the question. Next, we use the attention weight vector $\overrightarrow{\alpha}_i$ to obtain a weighted version of the question and combine it with the current token of the passage to form a vector $\overrightarrow{z}_i$:

$$\overrightarrow{z}_i = \begin{bmatrix} h^p_i \\ H^q \overrightarrow{\alpha}_i^\top \end{bmatrix}. \qquad (3)$$
This vector $\overrightarrow{z}_i$ is fed into a standard one-directional LSTM to form our so-called match-LSTM:

$$\overrightarrow{h}^r_i = \overrightarrow{\mathrm{LSTM}}(\overrightarrow{z}_i, \overrightarrow{h}^r_{i-1}), \qquad (4)$$

where $\overrightarrow{h}^r_i \in \mathbb{R}^l$.
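A hedged PyTorch sketch of one forward pass of Eqns. (2)-(4) follows; parameter names mirror the equations, batching is omitted, and random tensors stand in for learned parameters:

```python
import torch

l, P, Q = 150, 30, 10
H_p, H_q = torch.randn(l, P), torch.randn(l, Q)        # from Eqn. (1)
W_q, W_p, W_r = [torch.randn(l, l) for _ in range(3)]  # random stand-ins
b_p, w, b = torch.randn(l, 1), torch.randn(l, 1), torch.randn(1)
cell = torch.nn.LSTMCell(2 * l, l)                     # the match-LSTM itself

h_r, c_r = torch.zeros(1, l), torch.zeros(1, l)        # h^r_0, c^r_0
for i in range(P):
    h_p_i = H_p[:, i:i + 1]                                     # (l, 1)
    # Eqn. (2): the (l, 1) term broadcasts across the Q columns of W_q H_q
    G = torch.tanh(W_q @ H_q + (W_p @ h_p_i + W_r @ h_r.t() + b_p))
    alpha = torch.softmax(w.t() @ G + b, dim=1)                 # (1, Q)
    # Eqn. (3): current passage token stacked on attention-weighted question
    z = torch.cat([h_p_i, H_q @ alpha.t()], dim=0).t()          # (1, 2l)
    h_r, c_r = cell(z, (h_r, c_r))                              # Eqn. (4)
```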
We further build a similar match-LSTM in the reverse direction. The purpose is to obtain a repre- sentation that encodes the contexts from both directions for each token in the passage. To build this reverse match-LSTM, we ï¬rst deï¬ne
$$\overleftarrow{G}_i = \tanh\big(W^q H^q + (W^p h^p_i + W^r \overleftarrow{h}^r_{i+1} + b^p) \otimes e_Q\big), \qquad \overleftarrow{\alpha}_i = \mathrm{softmax}\big(w^\top \overleftarrow{G}_i + b \otimes e_Q\big). \qquad (5)$$
2As the output gates in the preprocessing layer affect the ï¬nal performance little, we remove it in our experiments.
Note that the parameters here ($W^q$, $W^p$, $W^r$, $b^p$, $w$ and $b$) are the same as used in Eqn. (2). We then define $\overleftarrow{z}_i$ in a similar way and finally define $\overleftarrow{h}^r_i$ to be the hidden representation at position $i$ produced by the match-LSTM in the reverse direction.

Let $\overrightarrow{H}^r \in \mathbb{R}^{l \times P}$ represent the hidden states $[\overrightarrow{h}^r_1, \overrightarrow{h}^r_2, \ldots, \overrightarrow{h}^r_P]$ and $\overleftarrow{H}^r \in \mathbb{R}^{l \times P}$ represent $[\overleftarrow{h}^r_1, \overleftarrow{h}^r_2, \ldots, \overleftarrow{h}^r_P]$. We define $H^r \in \mathbb{R}^{2l \times P}$ as the concatenation of the two:

$$H^r = \begin{bmatrix} \overrightarrow{H}^r \\ \overleftarrow{H}^r \end{bmatrix}. \qquad (6)$$
# Answer Pointer Layer
The top layer, the Answer Pointer (Ans-Ptr) layer, is motivated by the Pointer Net introduced by Vinyals et al. (2015). This layer uses the sequence Hr as input. Recall that we have two different models: The sequence model produces a sequence of answer tokens but these tokens may not be consecutive in the original passage. The boundary model produces only the start token and the end token of the answer, and then all the tokens between these two in the original passage are considered to be the answer. We now explain the two models separately.
The Sequence Model: Recall that in the sequence model, the answer is represented by a sequence of integers $a = (a_1, a_2, \ldots)$ indicating the positions of the selected tokens in the original passage. The Ans-Ptr layer models the generation of these integers in a sequential manner. Because the length of an answer is not fixed, in order to stop generating answer tokens at a certain point, we allow each $a_k$ to take up an integer value between 1 and $P+1$, where $P+1$ is a special value indicating the end of the answer. Once $a_k$ is set to be $P+1$, the generation of the answer stops. In order to generate the $k$th answer token indicated by $a_k$, first, the attention mechanism is used again to obtain an attention weight vector $\beta_k \in \mathbb{R}^{(P+1)}$, where $\beta_{k,j}$ ($1 \le j \le P+1$) is the probability of selecting the $j$th token from the passage as the $k$th token in the answer, and $\beta_{k,(P+1)}$ is the probability of stopping the answer generation at position $k$. $\beta_k$ is modeled as follows:

$$F_k = \tanh\big(V \tilde{H}^r + (W^a h^a_{k-1} + b^a) \otimes e_{(P+1)}\big), \qquad (7)$$
$$\beta_k = \mathrm{softmax}\big(v^\top F_k + c \otimes e_{(P+1)}\big), \qquad (8)$$

where $\tilde{H}^r \in \mathbb{R}^{2l \times (P+1)}$ is the concatenation of $H^r$ with a zero vector, defined as $\tilde{H}^r = [H^r; 0]$, $V \in \mathbb{R}^{l \times 2l}$, $W^a \in \mathbb{R}^{l \times l}$, $b^a, v \in \mathbb{R}^l$ and $c \in \mathbb{R}$ are parameters to be learned, $(\cdot \otimes e_{(P+1)})$ follows the same definition as before, and $h^a_{k-1} \in \mathbb{R}^l$ is the hidden vector at position $k-1$ of an answer LSTM as defined below:

$$h^a_k = \overrightarrow{\mathrm{LSTM}}(\tilde{H}^r \beta_k^\top, h^a_{k-1}). \qquad (9)$$
We can then model the probability of generating the answer sequence as
$$p(a \mid H^r) = \prod_k p(a_k \mid a_1, a_2, \ldots, a_{k-1}, H^r), \qquad (10)$$
and

$$p(a_k = j \mid a_1, a_2, \ldots, a_{k-1}, H^r) = \beta_{k,j}. \qquad (11)$$
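As an illustration of prediction under Eqns. (10)-(11), a greedy decoder might proceed as below; `pointer_step` is a hypothetical stand-in for the computation in Eqns. (7)-(9), and the cap on answer length is an assumption of the sketch:

```python
def greedy_decode(pointer_step, h_a, P, max_len=20):
    """Greedily pick a_k = argmax_j beta_{k,j} until the stop index P+1."""
    answer = []
    for _ in range(max_len):
        beta_k, h_a = pointer_step(h_a)   # beta_k: probabilities over P+1 positions
        a_k = max(range(1, P + 2), key=lambda j: beta_k[j - 1])
        if a_k == P + 1:                  # special end-of-answer value
            break
        answer.append(a_k)
    return answer
```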
To train the model, we minimize the following loss function based on the training examples:
$$-\sum_{n=1}^{N} \log p(a_n \mid P_n, Q_n). \qquad (12)$$
The Boundary Model: The boundary model works in a way very similar to the sequence model above, except that instead of predicting a sequence of indices $a_1, a_2, \ldots$, we only need to predict two indices $a_s$ and $a_e$. So the main difference from the sequence model above is that in the boundary model we do not need to add the zero padding to $H^r$, and the probability of generating an answer is simply modeled as

$$p(a \mid H^r) = p(a_s \mid H^r)\, p(a_e \mid a_s, H^r). \qquad (13)$$
| Model | l | Params | EM (Dev) | EM (Test) | F1 (Dev) | F1 (Test) |
|---|---|---|---|---|---|---|
| Random Guess | - | 0 | 1.1 | 1.3 | 4.1 | 4.3 |
| Logistic Regression | - | - | 40.0 | 40.4 | 51.0 | 51.0 |
| DCR | - | - | 62.5 | 62.5 | 71.2 | 71.0 |
| Match-LSTM with Ans-Ptr (Sequence) | 150 | 882K | 54.4 | - | 68.2 | - |
| Match-LSTM with Ans-Ptr (Boundary) | 150 | 882K | 61.1 | - | 71.2 | - |
| Match-LSTM with Ans-Ptr (Boundary+Search) | 150 | 882K | 63.0 | - | 72.7 | - |
| Match-LSTM with Ans-Ptr (Boundary+Search) | 300 | 3.2M | 63.1 | - | 72.7 | - |
| Match-LSTM with Ans-Ptr (Boundary+Search+b) | 150 | 1.1M | 63.4 | - | 73.0 | - |
| Match-LSTM with Bi-Ans-Ptr (Boundary+Search+b) | 150 | 1.4M | 64.1 | 64.7 | 73.9 | 73.7 |
| Match-LSTM with Ans-Ptr (Boundary+Search+en) | 150 | 882K | 67.6 | 67.9 | 76.8 | 77.0 |

Table 2: Experiment results. Here l is the hidden dimensionality, Params is the number of parameters |θ|, "Search" refers to globally searching the spans with no more than 15 tokens, "b" refers to using a bi-directional pre-processing LSTM, and "en" refers to the ensemble method.
We further extend the boundary model by incorporating a search mechanism. Specifically, during prediction, we limit the length of the span and globally search for the span with the highest probability computed by $p(a_s) \times p(a_e)$. Besides, as the boundary model predicts a fixed number of indices, a bi-directional Ans-Ptr can be combined straightforwardly to fine-tune the predicted span.
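A sketch of this span search in plain Python follows; the 15-token cap comes from the text, while the exhaustive enumeration is one straightforward realization and not necessarily the authors' exact implementation:

```python
def search_span(p_start, p_end, max_span_len=15):
    """Return the (start, end) pair maximizing p(a_s) * p(a_e), 0-based."""
    best, best_prob = (0, 0), 0.0
    P = len(p_start)
    for s in range(P):
        for e in range(s, min(s + max_span_len, P)):  # span length <= 15
            prob = p_start[s] * p_end[e]
            if prob > best_prob:
                best, best_prob = (s, e), prob
    return best

print(search_span([0.1, 0.7, 0.2], [0.2, 0.3, 0.5]))  # (1, 2)
```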
# 3 EXPERIMENTS
In this section, we present our experiment results and perform some analyses to better understand how our models work.
# 3.1 DATA
We use the Stanford Question Answering Dataset (SQuAD) v1.1 to conduct our experiments. Pas- sages in SQuAD come from 536 articles from Wikipedia covering a wide range of topics. Each passage is a single paragraph from a Wikipedia article, and each passage has around 5 questions associated with it. In total, there are 23,215 passages and 107,785 questions. The data has been split into a training set (with 87,599 question-answer pairs), a development set (with 10,570 question- answer pairs) and a hidden test set.
3.2 EXPERIMENT SETTINGS
We ï¬rst tokenize all the passages, questions and answers. The resulting vocabulary contains 117K unique words. We use word embeddings from GloVe (Pennington et al., 2014) to initialize the model. Words not found in GloVe are initialized as zero vectors. The word embeddings are not updated during the training of the model.
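A minimal NumPy sketch of this initialization, assuming a hypothetical `glove` dictionary mapping words to d-dimensional vectors:

```python
import numpy as np

def build_embeddings(vocab, glove, d=300):
    E = np.zeros((len(vocab), d), dtype=np.float32)  # OOV words stay zero
    for i, word in enumerate(vocab):
        if word in glove:
            E[i] = glove[word]
    return E  # kept frozen: excluded from the trainable parameters
```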
The dimensionality l of the hidden layers is set to be 150 or 300. We use ADAMAX (Kingma & Ba, 2015) with the coefï¬cients β1 = 0.9 and β2 = 0.999 to optimize the model. Each update is computed through a minibatch of 30 instances. We do not use L2-regularization.
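Expressed with PyTorch's built-in Adamax optimizer (the tiny model below is a stand-in for the full architecture; the learning rate is not specified in the text, so the library default is kept):

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the full architecture
optimizer = torch.optim.Adamax(model.parameters(), betas=(0.9, 0.999))
# Updates are computed from minibatches of 30 instances, with no weight decay:
# loader = torch.utils.data.DataLoader(train_set, batch_size=30, shuffle=True)
```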
The performance is measured by two metrics: percentage of exact match with the ground truth answers, and word-level F1 score when comparing the tokens in the predicted answers with the tokens in the ground truth answers. Note that in the development set and the test set each question has around three ground truth answers. F1 scores with the best matching answers are used to compute the average F1 score.
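A sketch of the two metrics for a single prediction; note that the official SQuAD evaluation additionally normalizes text (lowercasing, stripping articles and punctuation), which is omitted here:

```python
from collections import Counter

def exact_match(pred, golds):
    return float(any(pred == g for g in golds))

def f1(pred, golds):
    def score(p, g):
        p_toks, g_toks = p.split(), g.split()
        overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(p_toks), overlap / len(g_toks)
        return 2 * precision * recall / (precision + recall)
    return max(score(pred, g) for g in golds)  # best match over the gold answers

print(f1("the Higher Real Gymnasium",
         ["attend school at the Higher Real Gymnasium"]))
```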
3.3 RESULTS
The results of our models as well as the results of the baselines given by Rajpurkar et al. (2016) and Yu et al. (2016) are shown in Table 2. We can see that both of our two models have clearly outper-
Figure 2: Visualization of the attention weights α for three questions associated with the same passage.
formed the logistic regression model by Rajpurkar et al. (2016), which relies on carefully designed features. Furthermore, our boundary model has outperformed the sequence model, achieving an exact match score of 61.1% and an F1 score of 71.2%. In particular, in terms of the exact match score, the boundary model has a clear advantage over the sequence model. The improvement of our models over the logistic regression model shows that our end-to-end neural network models without much feature engineering are very effective on this task and this dataset. Considering the effectiveness of the boundary model, we explore it further. Observing that most answers are spans of relatively small size, we simply limit the largest predicted span to no more than 15 tokens and conduct experiments with span search. This results in a 1.5% improvement in F1 on the development data and outperforms the DCR model (Yu et al., 2016), which also introduced some language features such as POS and NE into their model. Besides, we tried to increase the memory dimension l in the model, add a bi-directional pre-processing LSTM, or add a bi-directional Ans-Ptr. The improvement on the development data using the first two methods is quite small, while adding Bi-Ans-Ptr with a bi-directional pre-processing LSTM gives a 1.2% improvement in F1. Finally, we explore an ensemble method by simply computing the product of the boundary probabilities collected from 5 boundary models and then searching for the most likely span with no more than 15 tokens. This ensemble method achieved the best performance, as shown in the table.
3.4 FURTHER ANALYSES
To better understand the strengths and weaknesses of our models, we perform some further analyses of the results below.
First, we suspect that longer answers are harder to predict. To verify this hypothesis, we analysed the performance in terms of both exact match and F1 score with respect to the answer length on the development set. For example, for questions whose answers contain more than 9 tokens, the F1 score of the boundary model drops to around 55% and the exact match score drops to only around 30%, compared to the F1 score and exact match score of close to 72% and 67%, respectively, for questions with single-token answers. And that supports our hypothesis.
Next, we analyze the performance of our models on different groups of questions. We use a crude way to split the questions into different groups based on a set of question words we have deï¬ned, including âwhat,â âhow,â âwho,â âwhen,â âwhich,â âwhere,â and âwhy.â These different question words roughly refer to questions with different types of answers. For example, âwhenâ questions look for temporal expressions as answers, whereas âwhereâ questions look for locations as answers. According to the performance on the development data set, our models work the best for âwhenâ questions. This may be because in this dataset temporal expressions are relatively easier to recog- nize. Other groups of questions whose answers are noun phrases, such as âwhatâ questions, âwhichâ questions and âwhereâ questions, also get relatively better results. On the other hand, âwhyâ ques- tions are the hardest to answer. This is not surprising because the answers to âwhyâ questions can be very diverse, and they are not restricted to any certain type of phrases.
Finally, we would like to check whether the attention mechanism used in the match-LSTM layer is effective in helping the model locate the answer. We show the attention weights α in Figure 2. In the ï¬gure the darker the color is the higher the weight is. We can see that some words have been well aligned based on the attention weights. For example, the word âGermanâ in the passage is aligned well to the word âlanguageâ in the ï¬rst question, and the model successfully predicts âGermanâ as the answer to the question. For the question word âwhoâ in the second question, the word âteacherâ actually receives relatively higher attention weight, and the model has predicted the phrase âMartin Sekulicâ after that as the answer, which is correct. For the last question that starts with âwhyâ, the attention weights are more evenly distributed and it is not clear which words have been aligned to âwhyâ.
# 4 RELATED WORK
Machine comprehension of text has gained much attention in recent years, and increasingly researchers are building data-driven, end-to-end neural network models for the task. We will first review the recently released datasets and then some end-to-end models on this task.
# 4.1 DATASETS
A number of datasets for studying machine comprehension were created in Cloze style by removing a single token from a sentence in the original corpus, and the task is to predict the missing word. For example, Hermann et al. (2015) created questions in Cloze style from CNN and Daily Mail highlights. Hill et al. (2016) created the Childrenâs Book Test dataset, which is based on childrenâs stories. Cui et al. (2016) released two similar datasets in Chinese, the People Daily dataset and the Childrenâs Fairy Tale dataset.
Instead of creating questions in Cloze style, a number of other datasets rely on human annotators to create real questions. Richardson et al. (2013) created the well-known MCTest dataset and Tapaswi et al. (2016) created the MovieQA dataset. In these datasets, candidate answers are provided for each question. Similar to these two datasets, the SQuAD dataset (Rajpurkar et al., 2016) was also created by human annotators. Different from the previous two, however, the SQuAD dataset does not provide candidate answers, and thus all possible subsequences from the given passage have to be considered as candidate answers.
Besides the datasets above, there are also a few other datasets created for machine comprehension, such as WikiReading dataset (Hewlett et al., 2016) and bAbI dataset (Weston et al., 2016), but they are quite different from the datasets above in nature.
4.2 END-TO-END NEURAL NETWORK MODELS FOR MACHINE COMPREHENSION
There have been a number of studies proposing end-to-end neural network models for machine comprehension. A common approach is to use recurrent neural networks (RNNs) to process the given text and the question in order to predict or generate the answers (Hermann et al., 2015). Attention mechanism is also widely used on top of RNNs in order to match the question with the given passage (Hermann et al., 2015; Chen et al., 2016). Given that answers often come from the given passage, Pointer Network has been adopted in a few studies in order to copy tokens from the given passage as answers (Kadlec et al., 2016; Trischler et al., 2016). Compared with existing
work, we use match-LSTM to match a question and a given passage, and we use Pointer Network in a different way such that we can generate answers that contain multiple tokens from the given passage.
Memory Networks (Weston et al., 2015) have also been applied to machine comprehen- sion (Sukhbaatar et al., 2015; Kumar et al., 2016; Hill et al., 2016), but its scalability when applied to a large dataset is still an issue. In this work, we did not consider memory networks for the SQuAD dataset.
# 5 CONCLUSIONS
In this paper, we developed two models for the machine comprehension problem defined on the Stanford Question Answering Dataset (SQuAD), both making use of match-LSTM and Pointer Network. Experiments on the SQuAD dataset showed that our second model, the boundary model, could achieve an exact match score of 67.6% and an F1 score of 77% on the test dataset, which is better than our sequence model and Rajpurkar et al. (2016)'s feature-engineered model.
In the future, we plan to look further into the different types of questions and focus on those questions which currently have low performance, such as the âwhyâ questions. We also plan to test how our models could be applied to other machine comprehension datasets.
# 6 ACKNOWLEDGMENTS
We thank Pranav Rajpurkar for testing our model on the hidden test dataset and Percy Liang for helping us with the Dockerï¬le for Codalab.
# REFERENCES
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the Conference on Association for Compu- tational Linguistics, 2016.
Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-based neural networks for chinese reading comprehension. In arXiv preprint arXiv:1607.02250, 2016.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the Conference on Association for Computational Linguistics, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the Conference on Advances in Neural Information Processing Systems, pp. 1693â1701, 2015.
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. WIKIREADING: A novel large-scale language under- standing task over wikipedia. In Proceedings of the Conference on Association for Computational Linguistics, 2016.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Read- ing childrenâs books with explicit memory representations. In Proceedings of the International Conference on Learning Representations, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735–1780, 1997.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In Proceedings of the Conference on Association for Computational Linguistics, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015.
Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of the International Conference on Machine Learning, 2016.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word In Proceedings of the Conference on Empirical Methods in Natural Language representation. Processing, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2013.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Proceed- ings of the Conference on Advances in neural information processing systems, 2015.
Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proceedings of the Con- ference on Advances in Neural Information Processing Systems, 2015.
Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. In Proceedings of the Conference on the North American Chapter of the Association for Computational Linguistics, 2016.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the Inter- national Conference on Learning Representations, 2015.
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of the International Conference on Learning Representations, 2016.
Wenpeng Yin, Sebastian Ebert, and Hinrich Schütze. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341, 2016.
Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunk extraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996, 2016.
Figure 3: Performance breakdown by answer lengths and question types. Top: Plot (1) shows the performance of our two models (where s refers to the sequence model , b refers to the boundary model, and e refers to the ensemble boundary model) over answers with different lengths. Plot (2) shows the numbers of answers with different lengths. Bottom: Plot (3) shows the performance our the two models on different types of questions. Plot (4) shows the numbers of different types of questions.
# A APPENDIX
We show the performance breakdown by answer lengths and question types for our sequence model, boundary model and the ensemble model in Figure 3.
# Densely Connected Convolutional Networks
# Gao Huang∗ Cornell University gh349@cornell.edu

# Zhuang Liu∗ Tsinghua University liuzhuang13@mails.tsinghua.edu.cn
# Laurens van der Maaten Facebook AI Research lvdmaaten@fb.com
# Kilian Q. Weinberger Cornell University kqw4@cornell.edu
# Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efï¬cient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convo- lutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connectionsâone between each layer and its subsequent layerâour network has L(L+1) direct connections. For 2 each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several com- pelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage fea- ture reuse, and substantially reduce the number of parame- ters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain sig- niï¬cant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high per- formance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
# 1. Introduction
Convolutional neural networks (CNNs) have become the dominant machine learning approach for visual object recognition. Although they were originally introduced over 20 years ago [18], improvements in computer hardware and network structure have enabled the training of truly deep CNNs only recently. The original LeNet5 [19] consisted of 5 layers, VGG featured 19 [29], and only last year Highway
Figure 1: A 5-layer dense block with a growth rate of k = 4. Each layer takes all preceding feature-maps as input.
Networks [34] and Residual Networks (ResNets) [11] have surpassed the 100-layer barrier.
As CNNs become increasingly deep, a new research problem emerges: as information about the input or gra- dient passes through many layers, it can vanish and âwash outâ by the time it reaches the end (or beginning) of the network. Many recent publications address this or related problems. ResNets [11] and Highway Networks [34] by- pass signal from one layer to the next via identity connec- tions. Stochastic depth [13] shortens ResNets by randomly dropping layers during training to allow better information and gradient ï¬ow. FractalNets [17] repeatedly combine sev- eral parallel layer sequences with different number of con- volutional blocks to obtain a large nominal depth, while maintaining many short paths in the network. Although these different approaches vary in network topology and training procedure, they all share a key characteristic: they create short paths from early layers to later layers.
∗Authors contributed equally
In this paper, we propose an architecture that distills this insight into a simple connectivity pattern: to ensure maximum information flow between layers in the network, we connect all layers (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. Figure 1 illustrates this layout schematically. Crucially, in contrast to ResNets, we never combine features through summation before they are passed into a layer; instead, we combine features by concatenating them. Hence, the $\ell$th layer has $\ell$ inputs, consisting of the feature-maps of all preceding convolutional blocks. Its own feature-maps are passed on to all $L - \ell$ subsequent layers. This introduces $\frac{L(L+1)}{2}$ connections in an $L$-layer network, instead of just $L$, as in traditional architectures. Because of its dense connectivity pattern, we refer to our approach as Dense Convolutional Network (DenseNet).
A possibly counter-intuitive effect of this dense connec- tivity pattern is that it requires fewer parameters than tra- ditional convolutional networks, as there is no need to re- learn redundant feature-maps. Traditional feed-forward ar- chitectures can be viewed as algorithms with a state, which is passed on from layer to layer. Each layer reads the state from its preceding layer and writes to the subsequent layer. It changes the state but also passes on information that needs to be preserved. ResNets [11] make this information preser- vation explicit through additive identity transformations. Recent variations of ResNets [13] show that many layers contribute very little and can in fact be randomly dropped during training. This makes the state of ResNets similar to (unrolled) recurrent neural networks [21], but the num- ber of parameters of ResNets is substantially larger because each layer has its own weights. Our proposed DenseNet ar- chitecture explicitly differentiates between information that is added to the network and information that is preserved. DenseNet layers are very narrow (e.g., 12 ï¬lters per layer), adding only a small set of feature-maps to the âcollective knowledgeâ of the network and keep the remaining feature- maps unchangedâand the ï¬nal classiï¬er makes a decision based on all feature-maps in the network.
Besides better parameter efï¬ciency, one big advantage of DenseNets is their improved ï¬ow of information and gra- dients throughout the network, which makes them easy to train. Each layer has direct access to the gradients from the loss function and the original input signal, leading to an im- plicit deep supervision [20]. This helps training of deeper network architectures. Further, we also observe that dense connections have a regularizing effect, which reduces over- ï¬tting on tasks with smaller training set sizes.
We evaluate DenseNets on four highly competitive benchmark datasets (CIFAR-10, CIFAR-100, SVHN, and ImageNet). Our models tend to require much fewer param-
eters than existing algorithms with comparable accuracy. Further, we signiï¬cantly outperform the current state-of- the-art results on most of the benchmark tasks.
# 2. Related Work
The exploration of network architectures has been a part of neural network research since their initial discovery. The recent resurgence in popularity of neural networks has also revived this research domain. The increasing number of lay- ers in modern networks ampliï¬es the differences between architectures and motivates the exploration of different con- nectivity patterns and the revisiting of old research ideas.
A cascade structure similar to our proposed dense network layout has already been studied in the neural networks literature in the 1980s [3]. Their pioneering work focuses on fully connected multi-layer perceptrons trained in a layer-by-layer fashion. More recently, fully connected cascade networks to be trained with batch gradient descent were proposed [40]. Although effective on small datasets, this approach only scales to networks with a few hundred parameters. In [9, 23, 31, 41], utilizing multi-level features in CNNs through skip-connections has been found to be effective for various vision tasks. Parallel to our work, [1] derived a purely theoretical framework for networks with cross-layer connections similar to ours.
Highway Networks [34] were amongst the ï¬rst architec- tures that provided a means to effectively train end-to-end networks with more than 100 layers. Using bypassing paths along with gating units, Highway Networks with hundreds of layers can be optimized without difï¬culty. The bypass- ing paths are presumed to be the key factor that eases the training of these very deep networks. This point is further supported by ResNets [11], in which pure identity mappings are used as bypassing paths. ResNets have achieved im- pressive, record-breaking performance on many challeng- ing image recognition, localization, and detection tasks, such as ImageNet and COCO object detection [11]. Re- cently, stochastic depth was proposed as a way to success- fully train a 1202-layer ResNet [13]. Stochastic depth im- proves the training of deep residual networks by dropping layers randomly during training. This shows that not all layers may be needed and highlights that there is a great amount of redundancy in deep (residual) networks. Our pa- per was partly inspired by that observation. ResNets with pre-activation also facilitate the training of state-of-the-art networks with > 1000 layers [12].
An orthogonal approach to making networks deeper (e.g., with the help of skip connections) is to increase the network width. The GoogLeNet [36, 37] uses an âIncep- tion moduleâ which concatenates feature-maps produced by ï¬lters of different sizes. In [38], a variant of ResNets with wide generalized residual blocks was proposed. In fact, simply increasing the number of ï¬lters in each layer of
Input Dense Block 1 âO-vO eve v TORRIOAUOD v TONMOAUOD v Buyoog v Dense Block 2 Prediction 9 Dense Block 3 3 2 v ec S}e|8le| C+eveveve |>/8}o/3])-| âhorseâ i= 2 ih Sab aber Ei 3B 5 = Figure 2: A deep DenseNet with three dense blocks. The layers between two adjacent blocks are referred to as transition layers and change feature-map sizes via convolution and pooling. ResNets can improve its performance provided the depth is sufficient [42]. FractalNets also achieve competitive results on several datasets using a wide network structure [17]. Instead of drawing representational power from ex- tremely deep or wide architectures, DenseNets exploit the potential of the network through feature reuse, yielding con- densed models that are easy to train and highly parameter- efficient. Concatenating feature-maps learned by different layers increases variation in the input of subsequent layers and improves efficiency. This constitutes a major difference between DenseNets and ResNets. Compared to Inception networks [36, 37], which also concatenate features from dif- An advantage of ResNets is that the gradient can flow di- rectly through the identity function from later layers to the earlier layers. However, the identity function and the output of Hy are combined by summation, which may impede the information flow in the network. Dense connectivity. To further improve the information flow between layers we propose a different connectivity pattern: we introduce direct connections from any layer to all subsequent layers. Figure | illustrates the layout of the resulting DenseNet schematically. Consequently, the ¢'â layer receives the feature-maps of all preceding layers, Xo,---,X¢_â1, as input:
ferent layers, DenseNets are simpler and more efficient.
There are other notable network architecture innovations which have yielded competitive results. The Network in Network (NIN) [22] structure includes micro multi-layer perceptrons into the ï¬lters of convolutional layers to ex- tract more complicated features. In Deeply Supervised Net- work (DSN) [20], internal layers are directly supervised by auxiliary classiï¬ers, which can strengthen the gradients received by earlier layers. Ladder Networks [27, 25] in- troduce lateral connections into autoencoders, producing impressive accuracies on semi-supervised learning tasks. In [39], Deeply-Fused Nets (DFNs) were proposed to im- prove information ï¬ow by combining intermediate layers of different base networks. The augmentation of networks with pathways that minimize reconstruction losses was also shown to improve image classiï¬cation models [43].
# 3. DenseNets
Consider a single image x_0 that is passed through a convolutional network. The network comprises L layers, each of which implements a non-linear transformation H_ℓ(·), where ℓ indexes the layer. H_ℓ(·) can be a composite function of operations such as Batch Normalization (BN) [14], rectified linear units (ReLU) [6], Pooling [19], or Convolution (Conv). We denote the output of the ℓ-th layer as x_ℓ.

ResNets. Traditional convolutional feed-forward networks connect the output of the ℓ-th layer as input to the (ℓ+1)-th layer [16], which gives rise to the following layer transition: x_ℓ = H_ℓ(x_{ℓ−1}). ResNets [11] add a skip-connection that bypasses the non-linear transformations with an identity function:

x_ℓ = H_ℓ(x_{ℓ−1}) + x_{ℓ−1}.   (1)

An advantage of ResNets is that the gradient can flow directly through the identity function from later layers to the earlier layers. However, the identity function and the output of H_ℓ are combined by summation, which may impede the information flow in the network.

Dense connectivity. To further improve the information flow between layers we propose a different connectivity pattern: we introduce direct connections from any layer to all subsequent layers. Figure 1 illustrates the layout of the resulting DenseNet schematically. Consequently, the ℓ-th layer receives the feature-maps of all preceding layers, x_0, ..., x_{ℓ−1}, as input:

x_ℓ = H_ℓ([x_0, x_1, ..., x_{ℓ−1}]),   (2)

where [x_0, x_1, ..., x_{ℓ−1}] refers to the concatenation of the feature-maps produced in layers 0, ..., ℓ−1. Because of its dense connectivity we refer to this network architecture as Dense Convolutional Network (DenseNet). For ease of implementation, we concatenate the multiple inputs of H_ℓ(·) in eq. (2) into a single tensor.

Composite function. Motivated by [12], we define H_ℓ(·) as a composite function of three consecutive operations: batch normalization (BN) [14], followed by a rectified linear unit (ReLU) [6] and a 3×3 convolution (Conv).

Pooling layers. The concatenation operation used in Eq. (2) is not viable when the size of feature-maps changes. However, an essential part of convolutional networks is down-sampling layers that change the size of feature-maps. To facilitate down-sampling in our architecture we divide the network into multiple densely connected dense blocks; see Figure 2. We refer to layers between blocks as transition layers, which do convolution and pooling. The transition layers used in our experiments consist of a batch normalization layer and a 1×1 convolutional layer followed by a 2×2 average pooling layer.

Growth rate. If each function H_ℓ produces k feature-maps, it follows that the ℓ-th layer has k_0 + k × (ℓ−1) input feature-maps, where k_0 is the number of channels in the input layer. An important difference between DenseNet and existing network architectures is that DenseNet can have very narrow layers, e.g., k = 12. We refer to the hyper-parameter k as the growth rate of the network. We show in Section 4 that a relatively small growth rate is sufficient to obtain state-of-the-art results on the datasets that we tested on. One explanation for this is that each layer has access to all the preceding feature-maps in its block and, therefore, to the network's "collective knowledge". One can view the feature-maps as the global state of the network. Each layer adds k feature-maps of its own to this state. The growth rate regulates how much new information each layer contributes to the global state. The global state, once written, can be accessed from everywhere within the network and, unlike in traditional network architectures, there is no need to replicate it from layer to layer.
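To make dense connectivity (eq. (2)), the composite function H_ℓ, and the growth rate concrete, here is a minimal PyTorch sketch of one dense block; it is a paraphrase under our own class names, not the authors' released code, and the channel counts are illustrative:

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One H_l: BN -> ReLU -> 3x3 Conv producing k feature-maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(self.relu(self.bn(x)))

class DenseBlock(nn.Module):
    def __init__(self, num_layers, in_channels, growth_rate=12):
        super().__init__()
        # layer l sees k0 + l*k input channels (growth-rate bookkeeping)
        self.layers = nn.ModuleList([
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # eq. (2): each layer receives the concatenation of all
            # preceding feature-maps as input
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(num_layers=6, in_channels=24, growth_rate=12)
y = block(torch.randn(1, 24, 32, 32))  # output: 24 + 6*12 = 96 channels
```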
Layers            | Output Size | DenseNet-121 | DenseNet-169 | DenseNet-201 | DenseNet-264
Convolution       | 112 × 112   | 7 × 7 conv, stride 2 (all models)
Pooling           | 56 × 56     | 3 × 3 max pool, stride 2 (all models)
Dense Block (1)   | 56 × 56     | [1 × 1 conv; 3 × 3 conv] × 6  | × 6  | × 6  | × 6
Transition (1)    | 56 × 56     | 1 × 1 conv (all models)
                  | 28 × 28     | 2 × 2 average pool, stride 2 (all models)
Dense Block (2)   | 28 × 28     | [1 × 1 conv; 3 × 3 conv] × 12 | × 12 | × 12 | × 12
Transition (2)    | 28 × 28     | 1 × 1 conv (all models)
                  | 14 × 14     | 2 × 2 average pool, stride 2 (all models)
Dense Block (3)   | 14 × 14     | [1 × 1 conv; 3 × 3 conv] × 24 | × 32 | × 48 | × 64
Transition (3)    | 14 × 14     | 1 × 1 conv (all models)
                  | 7 × 7       | 2 × 2 average pool, stride 2 (all models)
Dense Block (4)   | 7 × 7       | [1 × 1 conv; 3 × 3 conv] × 16 | × 32 | × 32 | × 48
Classification    | 1 × 1       | 7 × 7 global average pool (all models)
Layer             |             | 1000D fully-connected, softmax (all models)

Table 1: DenseNet architectures for ImageNet. The growth rate for all the networks is k = 32. Note that each "conv" layer shown in the table corresponds to the sequence BN-ReLU-Conv.
Bottleneck layers. Although each layer only produces k output feature-maps, it typically has many more inputs. It has been noted in [37, 11] that a 1×1 convolution can be introduced as a bottleneck layer before each 3×3 convolution to reduce the number of input feature-maps, and thus to improve computational efficiency. We find this design especially effective for DenseNet and we refer to our network with such a bottleneck layer, i.e., to the BN-ReLU-Conv(1×1)-BN-ReLU-Conv(3×3) version of H_ℓ, as DenseNet-B. In our experiments, we let each 1×1 convolution produce 4k feature-maps.
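A hedged sketch of the DenseNet-B bottleneck version of H_ℓ described above (the helper name is ours):

```python
import torch.nn as nn

def bottleneck_layer(in_channels, growth_rate):
    # BN-ReLU-Conv(1x1)-BN-ReLU-Conv(3x3); the 1x1 conv first reduces the
    # (possibly large) concatenated input to 4k maps, as in the text
    inter_channels = 4 * growth_rate
    return nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_channels, inter_channels, kernel_size=1, bias=False),
        nn.BatchNorm2d(inter_channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(inter_channels, growth_rate, kernel_size=3,
                  padding=1, bias=False),
    )
```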
Compression. To further improve model compactness, we can reduce the number of feature-maps at transition layers. If a dense block contains m feature-maps, we let the following transition layer generate ⌊θm⌋ output feature-maps, where 0 < θ ≤ 1 is referred to as the compression factor. When θ = 1, the number of feature-maps across transition layers remains unchanged. We refer to the DenseNet with θ < 1 as DenseNet-C, and we set θ = 0.5 in our experiment. When both the bottleneck and transition layers with θ < 1 are used, we refer to our model as DenseNet-BC.
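Correspondingly, a minimal sketch of a compressed transition layer as described (BN, 1×1 convolution, 2×2 average pooling; θ = 0.5 for DenseNet-C/BC; the function name is ours):

```python
import math
import torch.nn as nn

def transition_layer(in_channels, theta=0.5):
    # the transition emits floor(theta * m) feature-maps, 0 < theta <= 1
    out_channels = int(math.floor(theta * in_channels))
    return nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
        nn.AvgPool2d(kernel_size=2, stride=2),
    )
```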
Implementation Details. On all datasets except Ima- geNet, the DenseNet used in our experiments has three dense blocks that each has an equal number of layers. Be- fore entering the ï¬rst dense block, a convolution with 16 (or twice the growth rate for DenseNet-BC) output channels is performed on the input images. For convolutional layers with kernel size 3Ã3, each side of the inputs is zero-padded by one pixel to keep the feature-map size ï¬xed. We use 1Ã1 convolution followed by 2Ã2 average pooling as transition layers between two contiguous dense blocks. At the end of the last dense block, a global average pooling is performed and then a softmax classiï¬er is attached. The feature-map sizes in the three dense blocks are 32à 32, 16Ã16, and 8Ã8, respectively. We experiment with the basic DenseNet structure with conï¬gurations {L = 40, k = 12}, {L = 100, k = 12} and {L = 100, k = 24}. For DenseNet- BC, the networks with conï¬gurations {L = 100, k = 12}, {L = 250, k = 24} and {L = 190, k = 40} are evaluated.
In our experiments on ImageNet, we use a DenseNet-BC structure with 4 dense blocks on 224Ã224 input images. The initial convolution layer comprises 2k convolutions of size 7Ã7 with stride 2; the number of feature-maps in all other layers also follow from setting k. The exact network conï¬gurations we used on ImageNet are shown in Table 1.
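The four ImageNet configurations in Table 1 differ only in their per-block layer counts; a small lookup capturing the block sizes reported there (our own constant name, not from the released code):

```python
# layers in dense blocks 1-4 for each ImageNet model; growth rate k = 32
DENSENET_IMAGENET_CONFIGS = {
    "DenseNet-121": (6, 12, 24, 16),
    "DenseNet-169": (6, 12, 32, 32),
    "DenseNet-201": (6, 12, 48, 32),
    "DenseNet-264": (6, 12, 64, 48),
}
```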
# 4. Experiments
We empirically demonstrate DenseNetâs effectiveness on several benchmark datasets and compare with state-of-the- art architectures, especially with ResNet and its variants.
Method                            | Depth | Params | C10    | C10+ | C100   | C100+ | SVHN
Network in Network [22]           | -     | -      | 10.41  | 8.81 | 35.68  | -     | 2.35
All-CNN [32]                      | -     | -      | 9.08   | 7.25 | -      | 33.71 | -
Deeply Supervised Net [20]        | -     | -      | 9.69   | 7.97 | -      | 34.57 | 1.92
Highway Network [34]              | -     | -      | -      | 7.72 | -      | 32.39 | -
FractalNet [17]                   | 21    | 38.6M  | 10.18  | 5.22 | 35.34  | 23.30 | 2.01
  with Dropout/Drop-path          | 21    | 38.6M  | 7.33   | 4.60 | 28.20  | 23.73 | 1.87
ResNet [11]                       | 110   | 1.7M   | -      | 6.61 | -      | -     | -
ResNet (reported by [13])         | 110   | 1.7M   | 13.63  | 6.41 | 44.74  | 27.22 | 2.01
ResNet with Stochastic Depth [13] | 110   | 1.7M   | 11.66  | 5.23 | 37.80  | 24.58 | 1.75
                                  | 1202  | 10.2M  | -      | 4.91 | -      | -     | -
Wide ResNet [42]                  | 16    | 11.0M  | -      | 4.81 | -      | 22.07 | -
                                  | 28    | 36.5M  | -      | 4.17 | -      | 20.50 | -
  with Dropout                    | 16    | 2.7M   | -      | -    | -      | -     | 1.64
ResNet (pre-activation) [12]      | 164   | 1.7M   | 11.26† | 5.46 | 35.58† | 24.33 | -
                                  | 1001  | 10.2M  | 10.56† | 4.62 | 33.47† | 22.71 | -
DenseNet (k = 12)                 | 40    | 1.0M   | 7.00   | 5.24 | 27.55  | 24.42 | 1.79
DenseNet (k = 12)                 | 100   | 7.0M   | 5.77   | 4.10 | 23.79  | 20.20 | 1.67
DenseNet (k = 24)                 | 100   | 27.2M  | 5.83   | 3.74 | 23.42  | 19.25 | 1.59
DenseNet-BC (k = 12)              | 100   | 0.8M   | 5.92   | 4.51 | 24.15  | 22.27 | 1.76
DenseNet-BC (k = 24)              | 250   | 15.3M  | 5.19   | 3.62 | 19.64  | 17.60 | 1.74
DenseNet-BC (k = 40)              | 190   | 25.6M  | -      | 3.46 | -      | 17.18 | -

Table 2: Error rates (%) on CIFAR and SVHN datasets. k denotes the network's growth rate. Results that surpass all competing methods are bold and the overall best results are blue. "+" indicates standard data augmentation (translation and/or mirroring). † indicates results run by ourselves. All the results of DenseNets without data augmentation (C10, C100, SVHN) are obtained using Dropout. DenseNets achieve lower error rates while using fewer parameters than ResNet. Without data augmentation, DenseNet performs better by a large margin.
# 4.1. Datasets
CIFAR. The two CIFAR datasets [15] consist of colored natural images with 32Ã32 pixels. CIFAR-10 (C10) con- sists of images drawn from 10 and CIFAR-100 (C100) from 100 classes. The training and test sets contain 50,000 and 10,000 images respectively, and we hold out 5,000 training images as a validation set. We adopt a standard data aug- mentation scheme (mirroring/shifting) that is widely used for these two datasets [11, 13, 17, 22, 28, 20, 32, 34]. We denote this data augmentation scheme by a â+â mark at the end of the dataset name (e.g., C10+). For preprocessing, we normalize the data using the channel means and stan- dard deviations. For the ï¬nal run we use all 50,000 training images and report the ï¬nal test error at the end of training.
SVHN. The Street View House Numbers (SVHN) dataset [24] contains 32Ã32 colored digit images. There are 73,257 images in the training set, 26,032 images in the test set, and 531,131 images for additional training. Following common practice [7, 13, 20, 22, 30] we use all the training data with- out any data augmentation, and a validation set with 6,000 images is split from the training set. We select the model with the lowest validation error during training and report the test error. We follow [42] and divide the pixel values by 255 so they are in the [0, 1] range.
ImageNet. The ILSVRC 2012 classification dataset [2] consists of 1.2 million images for training, and 50,000 for validation, from 1,000 classes. We adopt the same data augmentation scheme for training images as in [8, 11, 12], and apply a single-crop or 10-crop with size 224×224 at test time. Following [11, 12, 13], we report classification errors on the validation set.
# 4.2. Training
All the networks are trained using stochastic gradient de- scent (SGD). On CIFAR and SVHN we train using batch size 64 for 300 and 40 epochs, respectively. The initial learning rate is set to 0.1, and is divided by 10 at 50% and 75% of the total number of training epochs. On ImageNet, we train models for 90 epochs with a batch size of 256. The learning rate is set to 0.1 initially, and is lowered by 10 times at epoch 30 and 60. Note that a naive implemen- tation of DenseNet may contain memory inefï¬ciencies. To reduce the memory consumption on GPUs, please refer to our technical report on the memory-efï¬cient implementa- tion of DenseNets [26].
Following [8], we use a weight decay of 10^-4 and a Nesterov momentum [35] of 0.9 without dampening. We adopt the weight initialization introduced by [10]. For the three datasets without data augmentation, i.e., C10, C100 and SVHN, we add a dropout layer [33] after each convolutional layer (except the first one) and set the dropout rate to 0.2. The test errors were only evaluated once for each task and model setting.

Model        | top-1 (single-crop / 10-crop) | top-5 (single-crop / 10-crop)
DenseNet-121 | 25.02 / 23.61                 | 7.71 / 6.66
DenseNet-169 | 23.80 / 22.08                 | 6.85 / 5.92
DenseNet-201 | 22.58 / 21.46                 | 6.34 / 5.54
DenseNet-264 | 22.15 / 20.80                 | 6.12 / 5.29

Table 3: The top-1 and top-5 error rates on the ImageNet validation set, with single-crop / 10-crop testing.

Figure 3: Comparison of the DenseNets and ResNets top-1 error rates (single-crop testing) on the ImageNet validation dataset as a function of learned parameters (left) and FLOPs during test-time (right).
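A minimal PyTorch sketch of the optimizer and learning-rate schedule described above (the paper used a Torch implementation; this translation and the function name are our own assumptions):

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import MultiStepLR

def configure_training(model, epochs=300, lr=0.1):
    # He initialization [10] for convolutional weights
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight)
    # SGD with Nesterov momentum 0.9 (no dampening) and weight decay 1e-4
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9,
                                nesterov=True, weight_decay=1e-4)
    # divide the learning rate by 10 at 50% and 75% of the training epochs
    scheduler = MultiStepLR(optimizer,
                            milestones=[epochs // 2, epochs * 3 // 4],
                            gamma=0.1)
    return optimizer, scheduler
```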
# 4.3. Classiï¬cation Results on CIFAR and SVHN
We train DenseNets with different depths, L, and growth rates, k. The main results on CIFAR and SVHN are shown in Table 2. To highlight general trends, we mark all results that outperform the existing state-of-the-art in boldface and the overall best result in blue.
Accuracy. Possibly the most noticeable trend may originate from the bottom row of Table 2, which shows that DenseNet-BC with L = 190 and k = 40 outperforms the existing state-of-the-art consistently on all the CIFAR datasets. Its error rates of 3.46% on C10+ and 17.18% on C100+ are significantly lower than the error rates achieved by the wide ResNet architecture [42]. Our best results on C10 and C100 (without data augmentation) are even more encouraging: both are close to 30% lower than FractalNet with drop-path regularization [17]. On SVHN, with dropout, the DenseNet with L = 100 and k = 24 also surpasses the current best result achieved by wide ResNet. However, the 250-layer DenseNet-BC doesn't further improve the performance over its shorter counterpart. This may be explained by the fact that SVHN is a relatively easy task, and extremely deep models may overfit to the training set.
Parameter Efï¬ciency. The results in Table 2 indicate that DenseNets utilize parameters more efï¬ciently than alterna- tive architectures (in particular, ResNets). The DenseNet- BC with bottleneck structure and dimension reduction at transition layers is particularly parameter-efï¬cient. For ex- ample, our 250-layer model only has 15.3M parameters, but it consistently outperforms other models such as FractalNet and Wide ResNets that have more than 30M parameters. We also highlight that DenseNet-BC with L = 100 and k = 12 achieves comparable performance (e.g., 4.51% vs 4.62% er- ror on C10+, 22.27% vs 22.71% error on C100+) as the 1001-layer pre-activation ResNet using 90% fewer parame- ters. Figure 4 (right panel) shows the training loss and test errors of these two networks on C10+. The 1001-layer deep ResNet converges to a lower training loss value but a similar test error. We analyze this effect in more detail below.
Overfitting. One positive side-effect of the more efficient use of parameters is a tendency of DenseNets to be less prone to overfitting. We observe that on the datasets without data augmentation, the improvements of DenseNet architectures over prior work are particularly pronounced. On C10, the improvement denotes a 29% relative reduction in error from 7.33% to 5.19%. On C100, the reduction is about 30%, from 28.20% to 19.64%. In our experiments, we observed potential overfitting in a single setting: on C10, a 4× growth of parameters produced by increasing k = 12 to k = 24 led to a modest increase in error from 5.77% to 5.83%. The DenseNet-BC bottleneck and compression layers appear to be an effective way to counter this trend.
Capacity. Without compression or bottleneck layers, there is a general trend that DenseNets perform better as L and k increase. We attribute this primarily to the corre- sponding growth in model capacity. This is best demon- strated by the column of C10+ and C100+. On C10+, the error drops from 5.24% to 4.10% and ï¬nally to 3.74% as the number of parameters increases from 1.0M, over 7.0M to 27.2M. On C100+, we observe a similar trend. This sug- gests that DenseNets can utilize the increased representa- tional power of bigger and deeper models. It also indicates that they do not suffer from overï¬tting or the optimization difï¬culties of residual networks [11].
# 4.4. Classiï¬cation Results on ImageNet
We evaluate DenseNet-BC with different depths and growth rates on the ImageNet classiï¬cation task, and com- pare it with state-of-the-art ResNet architectures. To en- sure a fair comparison between the two architectures, we eliminate all other factors such as differences in data pre- processing and optimization settings by adopting the pub- licly available Torch implementation for ResNet by [8]1.
1https://github.com/facebook/fb.resnet.torch
Figure 4: Left: Comparison of the parameter efï¬ciency on C10+ between DenseNet variations. Middle: Comparison of the parameter efï¬ciency between DenseNet-BC and (pre-activation) ResNets. DenseNet-BC requires about 1/3 of the parameters as ResNet to achieve comparable accuracy. Right: Training and testing curves of the 1001-layer pre-activation ResNet [12] with more than 10M parameters and a 100-layer DenseNet with only 0.8M parameters.
We simply replace the ResNet model with the DenseNet- BC network, and keep all the experiment settings exactly the same as those used for ResNet.
We report the single-crop and 10-crop validation errors of DenseNets on ImageNet in Table 3. Figure 3 shows the single-crop top-1 validation errors of DenseNets and ResNets as a function of the number of parameters (left) and FLOPs (right). The results presented in the ï¬gure reveal that DenseNets perform on par with the state-of-the-art ResNets, whilst requiring signiï¬cantly fewer parameters and compu- tation to achieve comparable performance. For example, a DenseNet-201 with 20M parameters model yields similar validation error as a 101-layer ResNet with more than 40M parameters. Similar trends can be observed from the right panel, which plots the validation error as a function of the number of FLOPs: a DenseNet that requires as much com- putation as a ResNet-50 performs on par with a ResNet-101, which requires twice as much computation.
It is worth noting that our experimental setup implies that we use hyperparameter settings that are optimized for ResNets but not for DenseNets. It is conceivable that more extensive hyper-parameter searches may further improve the performance of DenseNet on ImageNet.

# 5. Discussion

Superficially, DenseNets are quite similar to ResNets: Eq. (2) differs from Eq. (1) only in that the inputs to H_ℓ(·) are concatenated instead of summed. However, the implications of this seemingly small modification lead to substantially different behaviors of the two network architectures.

Model compactness. As a direct consequence of the input concatenation, the feature-maps learned by any of the DenseNet layers can be accessed by all subsequent layers. This encourages feature reuse throughout the network, and leads to more compact models.

The left two plots in Figure 4 show the result of an experiment that aims to compare the parameter efficiency of all variants of DenseNets (left) and also a comparable ResNet architecture (middle). We train multiple small networks with varying depths on C10+ and plot their test accuracies as a function of network parameters. In comparison with other popular network architectures, such as AlexNet [16] or VGG-net [29], ResNets with pre-activation use fewer parameters while typically achieving better results [12]. Hence, we compare DenseNet (k = 12) against this architecture. The training setting for DenseNet is kept the same as in the previous section.

The graph shows that DenseNet-BC is consistently the most parameter efficient variant of DenseNet. Further, to achieve the same level of accuracy, DenseNet-BC only requires around 1/3 of the parameters of ResNets (middle plot). This result is in line with the results on ImageNet we presented in Figure 3. The right plot in Figure 4 shows that a DenseNet-BC with only 0.8M trainable parameters is able to achieve comparable accuracy as the 1001-layer (pre-activation) ResNet [12] with 10.2M parameters.
Implicit Deep Supervision. One explanation for the im- proved accuracy of dense convolutional networks may be that individual layers receive additional supervision from the loss function through the shorter connections. One can interpret DenseNets to perform a kind of âdeep supervi- sionâ. The beneï¬ts of deep supervision have previously been shown in deeply-supervised nets (DSN; [20]), which have classiï¬ers attached to every hidden layer, enforcing the intermediate layers to learn discriminative features.
DenseNets perform a similar deep supervision in an im- plicit fashion: a single classiï¬er on top of the network pro- vides direct supervision to all layers through at most two or three transition layers. However, the loss function and gra- dient of DenseNets are substantially less complicated, as the same loss function is shared between all layers.
Stochastic vs. deterministic connection. There is an interesting connection between dense convolutional net- works and stochastic depth regularization of residual net- works [13]. In stochastic depth, layers in residual networks are randomly dropped, which creates direct connections be-
tween the surrounding layers. As the pooling layers are never dropped, the network results in a similar connectiv- ity pattern as DenseNet: there is a small probability for any two layers, between the same pooling layers, to be di- rectly connectedâif all intermediate layers are randomly dropped. Although the methods are ultimately quite dif- ferent, the DenseNet interpretation of stochastic depth may provide insights into the success of this regularizer.
Feature Reuse. By design, DenseNets allow layers access to feature-maps from all of their preceding layers (although sometimes through transition layers). We conduct an experiment to investigate if a trained network takes advantage of this opportunity. We first train a DenseNet on C10+ with L = 40 and k = 12. For each convolutional layer ℓ within a block, we compute the average (absolute) weight assigned to connections with layer s. Figure 5 shows a heat-map for all three dense blocks. The average absolute weight serves as a surrogate for the dependency of a convolutional layer on its preceding layers. A red dot in position (ℓ, s) indicates that the layer ℓ makes, on average, strong use of feature-maps produced s layers before. Several observations can be made from the plot (a sketch of the statistic itself follows the list below):
1. All layers spread their weights over many inputs within the same block. This indicates that features extracted by very early layers are, indeed, directly used by deep layers throughout the same dense block.
2. The weights of the transition layers also spread their weight across all layers within the preceding dense block, indicating information flow from the first to the last layers of the DenseNet through few indirections.

3. The layers within the second and third dense block consistently assign the least weight to the outputs of the transition layer (the top row of the triangles), indicating that the transition layer outputs many redundant features (with low weight on average). This is in keeping with the strong results of DenseNet-BC where exactly these outputs are compressed.
4. Although the ï¬nal classiï¬cation layer, shown on the very right, also uses weights across the entire dense block, there seems to be a concentration towards ï¬nal feature-maps, suggesting that there may be some more high-level features produced late in the network.
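A hedged NumPy sketch of the statistic plotted in Figure 5; for simplicity it assumes the block input contributes k channels like every layer, and all names are hypothetical:

```python
import numpy as np

def average_absolute_weights(block_weights, k):
    """block_weights[l]: kernel of conv layer l in one dense block,
    shape (k, c_l, 3, 3) with c_l input channels laid out as
    [source 0 | source 1 | ...], each source occupying k channels
    (simplifying assumption). Returns a (sources x layers) heat-map."""
    L = len(block_weights)
    heat = np.full((L + 1, L), np.nan)
    for l, w in enumerate(block_weights):
        for s in range(l + 1):
            sl = w[:, s * k:(s + 1) * k, :, :]
            # mean absolute weight, a surrogate for how strongly layer l
            # depends on the feature-maps written by source s
            heat[s, l] = np.abs(sl).mean()
    return heat
```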
# 6. Conclusion
We proposed a new convolutional network architecture, which we refer to as Dense Convolutional Network (DenseNet). It introduces direct connections between any two layers with the same feature-map size. We showed that DenseNets scale naturally to hundreds of layers, while exhibiting no optimization difficulties. In our experiments,
Figure 5: The average absolute filter weights of convolutional layers in a trained DenseNet. The color of pixel (s, ℓ) encodes the average L1 norm (normalized by the number of input feature-maps) of the weights connecting convolutional layer s to ℓ within a dense block. Three columns highlighted by black rectangles correspond to the two transition layers and the classification layer. The first row encodes weights connected to the input layer of the dense block. (Panels: Dense Block 1-3; axes: source layer s versus target layer ℓ.)
DenseNets tend to yield consistent improvement in accu- racy with growing number of parameters, without any signs of performance degradation or overï¬tting. Under multi- ple settings, it achieved state-of-the-art results across sev- eral highly competitive datasets. Moreover, DenseNets require substantially fewer parameters and less computa- tion to achieve state-of-the-art performances. Because we adopted hyperparameter settings optimized for residual net- works in our study, we believe that further gains in accuracy of DenseNets may be obtained by more detailed tuning of hyperparameters and learning rate schedules.
Whilst following a simple connectivity rule, DenseNets naturally integrate the properties of identity mappings, deep supervision, and diversiï¬ed depth. They allow feature reuse throughout the networks and can consequently learn more compact and, according to our experiments, more accurate models. Because of their compact internal representations and reduced feature redundancy, DenseNets may be good feature extractors for various computer vision tasks that build on convolutional features, e.g., [4, 5]. We plan to study such feature transfer with DenseNets in future work.
Acknowledgements. The authors are supported in part by the NSF III-1618134, III-1526012, IIS-1149882, the Of- ï¬ce of Naval Research Grant N00014-17-1-2175 and the Bill and Melinda Gates foundation. GH is supported by the International Postdoctoral Exchange Fellowship Pro- gram of China Postdoctoral Council (No.20150015). ZL is supported by the National Basic Research Program of China Grants 2011CBA00300, 2011CBA00301, the NSFC 61361136003. We also thank Daniel Sedra, Geoff Pleiss and Yu Sun for many insightful discussions.
# References
[1] C. Cortes, X. Gonzalvo, V. Kuznetsov, M. Mohri, and S. Yang. Adanet: Adaptive structural learning of artiï¬cial neural networks. arXiv preprint arXiv:1607.01097, 2016. 2
[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 5
[3] S. E. Fahlman and C. Lebiere. The cascade-correlation learn- ing architecture. In NIPS, 1989. 2
[4] J. R. Gardner, M. J. Kusner, Y. Li, P. Upchurch, K. Q. Weinberger, and J. E. Hopcroft. Deep manifold traversal: Changing labels with convolutional features. arXiv preprint arXiv:1511.06421, 2015. 8
[5] L. Gatys, A. Ecker, and M. Bethge. A neural algorithm of artistic style. Nature Communications, 2015. 8
[6] X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectiï¬er neural networks. In AISTATS, 2011. 3
[7] I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013. 5
[8] S. Gross and M. Wilber. Training and investigating residual nets, 2016. 5, 6
[9] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hyper- columns for object segmentation and ï¬ne-grained localiza- tion. In CVPR, 2015. 2
[10] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectiï¬ers: Surpassing human-level performance on imagenet classiï¬cation. In ICCV, 2015. 5
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 1, 2, 3, 4, 5, 6 [12] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in
deep residual networks. In ECCV, 2016. 2, 3, 5, 7
[13] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016. 1, 2, 5, 7
[14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 3
[15] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Tech Report, 2009. 5

[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. 3, 7
[17] G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016. 1, 3, 5, 6
[18] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural compu- tation, 1(4):541â551, 1989. 1
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient- based learning applied to document recognition. Proceed- ings of the IEEE, 86(11):2278â2324, 1998. 1, 3
[20] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply- supervised nets. In AISTATS, 2015. 2, 3, 5, 7
[21] Q. Liao and T. Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv preprint arXiv:1604.03640, 2016. 2
[22] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014. 3, 5
[23] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. 2
[24] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised fea- ture learning, 2011. In NIPS Workshop, 2011. 5
[25] M. Pezeshki, L. Fan, P. Brakel, A. Courville, and Y. Bengio. In ICML, Deconstructing the ladder network architecture. 2016. 3
[26] G. Pleiss, D. Chen, G. Huang, T. Li, L. van der Maaten, and K. Q. Weinberger. Memory-efï¬cient implementation of densenets. arXiv preprint arXiv:1707.06990, 2017. 5 [27] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015. 3
[28] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In ICLR, 2015. 5
[29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV. 1, 7
[30] P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neu- ral networks applied to house numbers digit classiï¬cation. In ICPR, pages 3288â3291. IEEE, 2012. 5
[31] P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun. Pedestrian detection with unsupervised multi-stage feature learning. In CVPR, 2013. 2
[32] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Ried- miller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. 5
[33] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overï¬tting. JMLR, 2014. 6
[34] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In NIPS, 2015. 1, 2, 5
[35] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013. 5
[36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. 2, 3

[37] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016. 2, 3, 4

[38] S. Targ, D. Almeida, and K. Lyman. Resnet in resnet: Generalizing residual architectures. arXiv preprint arXiv:1603.08029, 2016. 2
[39] J. Wang, Z. Wei, T. Zhang, and W. Zeng. Deeply-fused nets. arXiv preprint arXiv:1605.07716, 2016. 3
[40] B. M. Wilamowski and H. Yu. Neural network learning without backpropagation. IEEE Transactions on Neural Net- works, 21(11):1793â1803, 2010. 2
[41] S. Yang and D. Ramanan. Multi-scale recognition with dag- cnns. In ICCV, 2015. 2
[42] S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. 3, 5, 6
[43] Y. Zhang, K. Lee, and H. Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In ICML, 2016. 3 | { "id": "1605.07716" } |
1608.04868 | Towards Music Captioning: Generating Music Playlist Descriptions | Descriptions are often provided along with recommendations to help users'
discovery. Recommending automatically generated music playlists (e.g.
personalised playlists) introduces the problem of generating descriptions. In
this paper, we propose a method for generating music playlist descriptions,
which is called as music captioning. In the proposed method, audio content
analysis and natural language processing are adopted to utilise the information
of each track. | http://arxiv.org/pdf/1608.04868 | Keunwoo Choi, George Fazekas, Brian McFee, Kyunghyun Cho, Mark Sandler | cs.MM, cs.AI, cs.CL | 2 pages, ISMIR 2016 Late-breaking/session extended abstract | null | cs.MM | 20160817 | 20170115 |
arXiv:1608.04868v2 [cs.MM] 15 Jan 2017
# TOWARDS MUSIC CAPTIONING: GENERATING MUSIC PLAYLIST DESCRIPTIONS
Keunwoo Choi, György Fazekas, Mark Sandler Centre for Digital Music Queen Mary University of London keunwoo.choi@qmul.ac.uk
Brian McFee, Kyunghyun Cho Center for Data Science New York University {first.last}@nyu.edu
# ABSTRACT
Descriptions are often provided along with recommendations to help users' discovery. Recommending automatically generated music playlists (e.g. personalised playlists) introduces the problem of generating descriptions. In this paper, we propose a method for generating music playlist descriptions, which is called music captioning. In the proposed method, audio content analysis and natural language processing are adopted to utilise the information of each track.
Figure 1. A block diagram of an RNN unit (left) and sequence-to-sequence module that is applied to English- Korean translation (right).
# 1. INTRODUCTION
Motivation: One of the crucial problems in music discov- ery is to deliver the summary of music without playing it. One common method is to add descriptions of a music item or playlist, e.g. Getting emotional with the undisputed King of Pop 1 , Just the right blend of chilled-out acoustic songs to work, relax, think, and dream to 2 . These exam- ples show that they are more than simple descriptions and even add value to the curated playlist as a product.
There have been attempts to automate the generation of these descriptions. In [8], Eck et al. proposed to use social tags to describe each music item. Fields proposed a similar idea for playlist using social tag and topic model [9] using Latent Dirichlet Allocation [1]. Besides text, Bogdanov in- troduced music avatars, whose outlook - hair style, clothes, and accessories - describes the recommended music [2].
⢠Seq2seq: Sequence-to-sequence (seq2seq) learning in- dicates training a model whose input and output are se- quences (Figure 1, right). Seq2seq models can be used to machine translation, where a phrase in a language is sum- marised by an encoder RNN, which is followed by a de- coder RNN to generate a phrase in another language [4].
⢠Word2vec: Word embeddings are distributed vector representations of words that aim to preserve the seman- tic relationships among words. One successful example is word2vec algorithm, which is usually trained with large corpora in an unsupervised manner [13].
⢠ConvNets: Convolutional neural networks (ConvNets) have been extensively adopted in nearly every computer vision task and algorithm since the record-breaking per- formance of AlexNet [12]. ConvNets also show state-of- the-art results in many music information retrieval tasks including auto-tagging [5].
Background: ⢠RNNs: RNNs are neural networks that have a unit with a recurrent connection, whose output is connect to the input of the unit (Figure 1, left). They cur- rently show state-of-the-art performances in tasks that in- volve sequence modelling. Two types of RNN unit are widely used: Long Short-Term Memory (LSTM) unit [10] and Gated Recurrent Unit (GRU) [3].
# 2. PROBLEM DEFINITION
The problem of music captioning can be defined as generating a description for a set of music items using their audio content and text data. When the set includes more than one item, it can also be called music playlist captioning.
1 Michael Jackson: Love songs and ballads by Apple Music 2 Your Coffee Break by Spotify
# 3. THE PROPOSED METHOD
© Keunwoo Choi, György Fazekas, Mark Sandler, Brian McFee, Kyunghyun Cho. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: © Keunwoo Choi, György Fazekas, Mark Sandler, Brian McFee, Kyunghyun Cho. "Towards Music Captioning: Generating Music Playlist Descriptions", Extended abstracts for the Late-Breaking Demo Session of the 17th International Society for Music Information Retrieval Conference, 2016.
Both of the approaches use sequence-to-sequence model, as illustrated in Figure 2. In the sequence-to-sequence model, the encoder consists of two-layer RNN with GRU and en- codes the track features into a vector, i.e., the encoded vec- tor summarises the information of the input. This vector is also called context vector because it provides context
Figure 2. The diagrams of two proposed approaches, where coloured blocks indicate trainable modules. The ï¬rst approach uses a pre-trained ConvNet (conv) and word2vec (w2v) and only sequence-to-sequence model is trained. In the second approach, the whole blocks are trained - a ConvNet to summarise the audio content, an RNN to summarise the text data of each track. An addi- tional labels (y) such as genres or tags can be provided to help the training.
information to the decoder. The decoder consists of two- layer RNN with GRU and decodes the context vector to a sequence of word or word embeddings. The models are written in Keras and uploaded online 3 [6].
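A minimal sketch of such a two-layer GRU encoder-decoder in modern Keras; all dimensions and layer sizes here are illustrative assumptions, not the paper's actual configuration (which is in the linked repository). Keras' CosineSimilarity loss minimizes the negative cosine, which is equivalent up to a constant to the 1 − cosine proximity objective reported later:

```python
from tensorflow import keras
from tensorflow.keras import layers

# hypothetical sizes: N tracks per playlist, 350-dim track features,
# M words per description, 300-dim word2vec targets
N, TRACK_DIM, M, WORD_DIM = 10, 350, 20, 300

inputs = keras.Input(shape=(N, TRACK_DIM))
h = layers.GRU(256, return_sequences=True)(inputs)   # encoder layer 1
context = layers.GRU(256)(h)                         # encoder layer 2 -> context vector
h = layers.RepeatVector(M)(context)                  # feed context at every decoding step
h = layers.GRU(256, return_sequences=True)(h)        # decoder layer 1
h = layers.GRU(256, return_sequences=True)(h)        # decoder layer 2
outputs = layers.TimeDistributed(layers.Dense(WORD_DIM))(h)  # word-embedding sequence

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss=keras.losses.CosineSimilarity(axis=-1))
```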
# 3.1 Pre-training approach
This approach takes advantage of a pre-trained word em- bedding model 4 and a pre-trained auto-tagger 5 . There- fore, the number of parameters to learn is reduced while leveraging additional data to train word-embedding and auto-tagger. Each data sample consists of a sequence of N track features as input and an output word sequence length of M , which is an album feature.
Input/Output. An n-th track feature, t(n) ∈ R^350, represents one track and is created by concatenating the audio feature, t_a ∈ R^50, and the word feature, t_w ∈ R^300, i.e. t = [t_a; t_w]. For computing t_a, a convolutional neural network that is trained to predict tags is used to output a 50-dim vector for each track [5]. t_w is computed by Σ_k w_k/K, where w_k refers to the embedding of the k-th word in the metadata. The word embeddings were trained by the word2vec algorithm on the Google News dataset [13].
A playlist feature is a sequence of word embeddings of the playlist description, i.e. p = [w_m], m = 0, 1, ..., M−1.
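A small NumPy sketch of this track-feature construction; the function name and exact dimensions are assumptions based on the text:

```python
import numpy as np

def track_feature(audio_tags_50d, word_vectors_300d):
    """t = [t_a; t_w]: concatenate the 50-dim auto-tagger output with
    the mean of the 300-dim word2vec embeddings of the metadata words."""
    t_a = np.asarray(audio_tags_50d)                      # from the ConvNet tagger [5]
    t_w = np.mean(np.asarray(word_vectors_300d), axis=0)  # sum_k w_k / K
    return np.concatenate([t_a, t_w])                     # 350-dim track feature
```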
3 http://github.com/keunwoochoi/ismir2016-ldb-audio-captioning-model-keras

4 https://radimrehurek.com/gensim/models/word2vec.html

5 https://github.com/keunwoochoi/music-auto_tagging-keras, [5]
6 The dimensions can vary, we describe in details for better under-
standing.
7 Because these word embeddings are distributed representations in a semantic vector space, average of the words can summarise a bag of words and was used as a baseline in sentence and paragraph representa- tion [7].
# 3.2 Fully-training approach
The model in this approach includes the training of a Con- vNet for audio summarisation and an RNN for text sum- marisation of each track. The structure of ConvNet can be similar to the pre-trained one. The RNN module is trained to summarise the text of each track and outputs a sentence vector. These networks can be provided with additional labels (notated as y in the ï¬gure 2) to help the training, e.g., genres or tags. In that case, the objective of the whole structure consists of two different tasks and therefore the training can be more regulated and stable.
Since the audio and text summarisation modules are trainable, they can be more relevant to the captioning task. However, this ï¬exibility requires more training data.
# 4. EXPERIMENTS AND CONCLUSIONS
We tested the pre-training approach with a private pro- duction music dataset. The dataset has 374 albums and 17,354 tracks with descriptions of tracks, albums, audio signal and metadata. The learning rate is controlled by ADAM [11] with an objective function of 1-cosine prox- imity. The model was trained to predict the album descrip- tions.
The model currently overï¬ts and fails to generate cor- rect sentences. One example of generated word sequence is dramatic motivating the intense epic action adventure soaring soaring soaring gloriously Roger Deakins cinematography Maryse Alberti. This is expected since there are only 374 output sequences in the dataset â if we use early stopping, the model underï¬ts, otherwise it overï¬ts.
In the future, we plan to solve the current problem â lack of data. The sentence generation can be partly trained by (music) corpora. A word2vec model that is trained with music corpora can be used to reduce the embedding di- mension [14]. The model can also be modiï¬ed in the sense that the audio feature is optional and it mainly relies on metadata. In that case, acquisition of training data becomes more feasible.
# 5. ACKNOWLEDGEMENTS
This work was part funded by the FAST IMPACt EPSRC Grant EP/L019981/1 and the European Commission H2020 research and innovation grant AudioCommons (688382). Mark Sandler acknowledges the support of the Royal So- ciety as a recipient of a Wolfson Research Merit Award. Brian McFee is supported by the Moore Sloan Data Sci- ence Environment at NYU. Kyunghyun Cho thanks the support by Facebook, Google (Google Faculty Award 2016) and NVidia (GPU Center of Excellence 2015-2016). The work is done during Keunwoo Choi is visiting Center for Data Science in New York University.
# 6. REFERENCES
[1] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of machine Learn- ing research, 3(Jan):993â1022, 2003.
[2] Dmitry Bogdanov, Mart´ıN Haro, Ferdinand Fuhrmann, Anna Xamb´o, Emilia G´omez, and Perfecto Herrera. Semantic audio content-based music recommendation and visualization based on user preference examples. Information Processing & Management, 49(1):13â33, 2013.
[3] Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bah- danau, and Yoshua Bengio. On the properties of neu- ral machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
[4] Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase rep- resentations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[5] Keunwoo Choi, George Fazekas, and Mark Sandler. Automatic tagging using deep convolutional neural networks. In International Society of Music Informa- tion Retrieval Conference. ISMIR, 2016.
[6] François Chollet. Keras. GitHub repository: https://github.com/fchollet/keras, 2015.
[7] Andrew M Dai, Christopher Olah, and Quoc V Le. Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998, 2015.
[8] Douglas Eck, Paul Lamere, Thierry Bertin-Mahieux, and Stephen Green. Automatic generation of social tags for music recommendation. In Advances in neural information processing systems, pages 385â392, 2008.
[9] Ben Fields, Christophe Rhodes, Mark dâInverno, et al. Using song social tags and topic models to describe and compare playlists. In 1st Workshop On Music Recom- mendation And Discovery (WOMRAD), ACM RecSys, 2010, Barcelona, Spain, 2010.
[10] Sepp Hochreiter and J¨urgen Schmidhuber. Long short- term memory. Neural computation, 9(8):1735â1780, 1997.
[11] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[13] T Mikolov and J Dean. Distributed representations of words and phrases and their compositionality. Ad- vances in neural information processing systems, 2013.
[14] Sergio Oramas, Luis Espinosa-Anke, Shuo Zhang, Horacio Saggion, and Xavier Serra. Natural language processing for music information retrieval. In 17th International Society for Music Information Retrieval Conference (ISMIR 2016), 2016. | { "id": "1507.07998" } |
1608.04337 | Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure | Deep convolutional neural networks achieve remarkable visual recognition
performance, at the cost of high computational complexity. In this paper, we
have a new design of efficient convolutional layers based on three schemes. The
3D convolution operation in a convolutional layer can be considered as
performing spatial convolution in each channel and linear projection across
channels simultaneously. By unravelling them and arranging the spatial
convolution sequentially, the proposed layer is composed of a single
intra-channel convolution, of which the computation is negligible, and a linear
channel projection. A topological subdivisioning is adopted to reduce the
connection between the input channels and output channels. Additionally, we
also introduce a spatial "bottleneck" structure that utilizes a
convolution-projection-deconvolution pipeline to take advantage of the
correlation between adjacent pixels in the input. Our experiments demonstrate
that the proposed layers remarkably outperform the standard convolutional
layers with regard to accuracy/complexity ratio. Our models achieve similar
accuracy to VGG, ResNet-50, ResNet-101 while requiring 42, 4.5, 6.5 times less
computation respectively. | http://arxiv.org/pdf/1608.04337 | Min Wang, Baoyuan Liu, Hassan Foroosh | cs.CV | null | null | cs.CV | 20160815 | 20170124 |
arXiv:1608.04337v2 [cs.CV] 24 Jan 2017
# Design of Efï¬cient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial âBottleneckâ Structure
Min Wang Department of EECS University of Central Florida Orlando, FL 32816 mwang@eecs.ucf.edu Baoyuan Liu Department of EECS University of Central Florida Orlando, FL 32816 bliu@eecs.ucf.edu Hassan Foroosh Department of EECS University of Central Florida Orlando, FL 32816 foroosh@eecs.ucf.edu
# Abstract
Deep convolutional neural networks achieve remarkable visual recognition performance, at the cost of high compu- tational complexity. In this paper, we have a new design of efï¬cient convolutional layers based on three schemes. The 3D convolution operation in a convolutional layer can be considered as performing spatial convolution in each chan- nel and linear projection across channels simultaneously. By unravelling them and arranging the spatial convolu- tion sequentially, the proposed layer is composed of a sin- gle intra-channel convolution, of which the computation is negligible, and a linear channel projection. A topological subdivisioning is adopted to reduce the connection between the input channels and output channels. Additionally, we also introduce a spatial âbottleneckâ structure that utilizes a convolution-projection-deconvolution pipeline to take ad- vantage of the correlation between adjacent pixels in the input. Our experiments demonstrate that the proposed lay- ers remarkably outperform the standard convolutional lay- ers with regard to accuracy/complexity ratio. Our models achieve similar accuracy to VGG, ResNet-50, ResNet-101 while requiring 42, 4.5, 6.5 times less computation respec- tively.
consuming building block of the CNN, the convolutional layer, is performed by convolving the 3D input data with a series of 3D kernels. The computational complexity is quadratic in both the kernel size and the number of chan- nels. To achieve state-of-the-art performance, the number of channels needs to be a few hundred, especially for the layers with smaller spatial input dimension, and the kernel size is generally no less than 3.
Several attempts have been made to reduce the amount of computation and parameters in both convolutional lay- ers and fully connected layers. Low rank decomposi- tion has been extensively explored in various fashions [7][8][9][10][11] to obtain moderate efï¬ciency improve- ment. Sparse decomposition based methods [12][13] achieve higher theoretical reduction of complexity, while the actual speedup is bounded by the efï¬ciency of sparse multiplication implementations. Most of these decomposition-based methods start from a pre-trained model, and perform decomposition and ï¬ne-tuning based on it, while trying to maintain similar accuracy. This essen- tially precludes the option of improving efï¬ciency by de- signing and training new CNN models from scratch.
# 1. Introduction
Deep convolutional neural networks (CNN) have made signiï¬cant improvement on solving visual recognition prob- lems since the famous work by Krizhevsky et al. in 2012 [1][2][3][4][5]. Thanks to their deep structure, vision ori- ented layer designs, and efï¬cient training schemes, recent CNN models from Google [4] and MSRA [5] obtain better than human level accuracy on ImageNet ILSVRC dataset [6].
The computational complexity for the state-of-the-art models for both training and inference are extremely high, requiring several GPUs or cluster of CPUs. The most time-
On the other hand, in recent state-of-the-art deep CNN models, several heuristics are adopted to alleviate the bur- den of heavy computation. In [2], the number of channels are reduced by a linear projection before the actual convolu- tional layer; In [5], the authors utilize a bottleneck structure, in which both the input and the output channels are reduced by linear projection; In [4], 1Ãn and nÃ1 asymmetric con- volutions are adopted to achieve larger kernel sizes. While these strategies to some extent help to design moderately ef- ï¬cient and deep models in practice, they are not able to pro- vide a comprehensive analysis of optimizing the efï¬ciency of the convolutional layer.
In this work, we propose several schemes to improve the efï¬ciency of convolutional layers. In standard convolu- tional layers, the 3D convolution can be considered as per- forming intra-channel spatial convolution and linear chan- nel projection simultaneously, leading to highly redundant
computation. These two operations are first unraveled to a set of 2D convolutions in each channel and a subsequent linear channel projection. Then, we make the further modification of performing the 2D convolutions sequentially rather than in parallel. In this way, we obtain a single intra-channel convolutional (SIC) layer that involves only one filter for each input channel and linear channel projection, thus achieving significantly reduced complexity. By stacking multiple SIC layers, we can train models that are several times more efficient with similar or higher accuracy than models based on the standard convolutional layer.
In a SIC layer, linear channel projection consumes the majority of the computation. To reduce its complexity, we propose a topological subdivisioning framework between the input channels and output channels as follows: The in- put channels and the output channels are ï¬rst rearranged into a s-dimensional tensor, then each output channel is only connected to the input channels that are within its local neighborhood. Such a framework leads to a regular sparsity pattern of the convolutional kernels, which is shown to pos- sess a better performance/cost ratio than standard convolu- tional layer in our experiments.
Furthermore, we design a spatial âbottleneckâ structure to take advantage of the local correlation of adjacent pix- els in the input. The spatial dimensions are ï¬rst reduced by intra-channel convolution with stride, then recovered by de- convolution with the same stride after linear channel projec- tion. Such a design reduces the complexity of linear channel projection without sacriï¬cing the spatial resolution.
(a) Standard Convolutional Layer
(b) Single Intra-Channel Convolutional Layer

Figure 1. Illustration of the convolution pipeline of the standard convolutional layer and the Single Intra-channel Convolutional Layer. In the SIC layer, only one 2D filter is convolved with each input channel.
The above three schemes (SIC layer, topological subdi- visioning and spatial âbottleneckâ structure) attempt to im- prove the efï¬ciency of traditional CNN models from dif- ferent perspectives, and can be easily combined together to achieve lower complexity as demonstrated thoroughly in the remainder of this paper. Each of these schemes will be ex- plained in detail in Section 2, evaluated against traditional CNN models, and analyzed in Section 3.
# 2.1. Standard Convolutional Layer
Consider the input data I in RhÃwÃn, where h, w and n are the height, width and the number of channels of the input feature maps, and the convolutional kernel K in RkÃkÃnÃn, where k is size of the convolutional kernel and n is the number of output channels. The operation of a stan- dard convolutional layer O â RhÃwÃn = K â I is given by Algorithm 1. The complexity of a convolutional layer mea- sured by the number of multiplications is
# 2. Method
n2k2hw (1)
In this section, we ï¬rst review the standard convolutional layer, then introduce the proposed schemes. For the purpose of easy understanding, the ï¬rst two schemes are explained with mathematical equations and pseudo-code, as well as illustrated with graphical visualization in Figure 5.
Since the complexity is quadratic with the kernel size, in most recent CNN models, the kernel size is limited to 3 Ã 3 to control the overall running time.
# 2.2. Single Intra-Channel Convolutional Layer
We make the assumption that the number of output chan- nels is equal to the number of input channels, and the in- put is padded so that the spatial dimensions of output is the same as input. We also assume that the residual learning technique is applied to each convolutional layer, namely the input is directly added to the output since they have the same dimension.
In standard convolutional layers, the output features are produced by convolving a group of 3D kernels with the in- put features along the spatial dimensions. Such a 3D con- volution operation can be considered as a combination of 2D spatial convolution inside each channel and linear pro- jection across channels. For each output channel, a spatial
Algorithm 1: Standard Convolutional Layer
Input: I ∈ R^{h×w×n}
Parameter: K ∈ R^{k×k×n×n}
Intermediate Data: Î ∈ R^{(h+k−1)×(w+k−1)×n}
Output: O ∈ R^{h×w×n}
Î = zero-padding(I, (k−1)/2)
for y = 1 to h, x = 1 to w, j = 1 to n do
    O(y, x, j) = Σ_{i=1}^{n} Σ_{u=1}^{k} Σ_{v=1}^{k} K(u, v, i, j) Î(y+u−1, x+v−1, i)
end
convolution is performed on each input channel. The spatial convolution is able to capture local structural information, while the linear projection transforms the feature space for learning the necessary non-linearity in the neuron layers. When the number of input and output channels is large, typ- ically hundreds, such a 3D convolutional layer requires an exorbitant amount of computation.
A natural idea is, the 2D spatial convolution and linear channel projection can be unraveled and performed sepa- rately. Each input channel is ï¬rst convolved with b 2D ï¬lters, generating intermediate features that have b times channels of the input. Then the output is generated by lin- ear channel projection. Unravelling these two operations provides us more freedom of model design by tuning both b and k. The complexity of such a layer is
b(nk^2 + n^2)hw    (2)
Typically, k is much smaller than n. The complexity is approximately linear with b. When b = k2, this is equiva- lent to a linear decomposition of the standard convolutional layers [12]. When b < k2, the complexity is lower than the standard convolutional layer in a low-rank fashion.
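A quick sanity check of the two complexity formulas, with illustrative sizes of our choosing:

```python
def standard_conv_mults(n, k, h, w):
    return n * n * k * k * h * w            # eq. (1)

def unraveled_conv_mults(n, k, h, w, b):
    return b * (n * k * k + n * n) * h * w  # eq. (2)

# e.g. n = 256 channels, k = 3, 18x18 feature maps, b = 1 (one SIC layer)
n, k, h, w = 256, 3, 18, 18
ratio = standard_conv_mults(n, k, h, w) / unraveled_conv_mults(n, k, h, w, b=1)
print(ratio)  # = n*k^2 / (k^2 + n), roughly 8.7x fewer multiplications
```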
Our key observation is that instead of convolving b 2D ï¬lters with each input channel simultaneously, we can per- form the convolutions sequentially. The above convolu- tional layer with b ï¬lters can be transformed to a frame- work that has b layers. In each layer, each input channel is ï¬rst convolved with single 2D ï¬lter, then a linear pro- jection is applied to all the input channels to generate the output channels. In this way, the number of channels are maintained the same throughout all b layers. Algorithm. 2 formally describes this framework.
When we consider each of the b layers, only one k à k kernel is convolved with each input channel. This seems to be a risky choice. Convolving with only one ï¬lter will not be able to preserve all the information from the input data, and there is very little freedom to learn all the useful local structures. Actually, this will probably lead to a low pass ï¬lter, which is somewhat equivalent to the ï¬rst principal component of the image. However, the existence of resid- ual learning module helps to overcome this disadvantage.
With residual learning, the input is added to the output. The subsequent layers thus receive information from both the initial input and the output of preceding layers. Figure. 5 presents a visual comparison between the proposed method and standard convolutional layer.
Algorithm 2: Single Intra-Channel Convolutional Layer
Input: I ∈ R^{h×w×n}
Parameter: K ∈ R^{k×k×n}, P ∈ R^{n×n}
Intermediate Data: Î ∈ R^{(h+k−1)×(w+k−1)×n}, G ∈ R^{h×w×n}
Output: O ∈ R^{h×w×n}
O = I  // Initialize output as input
Î = zero-padding(I, (k−1)/2)
for i = 1 to b do  // Repeat this layer b times
    for y = 1 to h, x = 1 to w, j = 1 to n do
        G(y, x, j) = Σ_{u=1}^{k} Σ_{v=1}^{k} K(u, v, j) Î(y+u−1, x+v−1, j)
    end
    for y = 1 to h, x = 1 to w, l = 1 to n do
        O(y, x, l) = O(y, x, l) + Σ_{j=1}^{n} G(y, x, j) P(j, l)
    end
    O = max(O, 0)  // ReLU
    Î = zero-padding(O, (k−1)/2)
end
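For concreteness, a vectorized NumPy/SciPy sketch of one of the b repetitions in Algorithm 2; this is our own helper under assumed names, not the authors' code:

```python
import numpy as np
from scipy.signal import correlate2d

def sic_layer(I, K, P):
    """One repetition of Algorithm 2.
    I: (h, w, n) input; K: (k, k, n) one 2D filter per channel;
    P: (n, n) linear channel projection."""
    h, w, n = I.shape
    G = np.empty_like(I)
    for j in range(n):  # single intra-channel convolution, zero-padded
        G[:, :, j] = correlate2d(I[:, :, j], K[:, :, j], mode="same")
    # linear channel projection plus the residual (input) connection
    flat = I.reshape(-1, n) + G.reshape(-1, n) @ P
    return np.maximum(flat.reshape(h, w, n), 0.0)  # ReLU
```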
# 2.3. Topological Subdivisioning
Given that the standard convolutional layer boils down to single intra-channel convolution and linear projection in the SIC layer, we make a further attempt to reduce the complexity of linear projection. In [12], the authors proved that extremely high sparsity could be accomplished without sacrificing accuracy. While the sparsity was obtained by fine-tuning and did not possess any structure, we study how to build the sparsity with more regularity. Inspired by the topological ICA framework in [14], we propose an s-dimensional topological subdivisioning between the input and output channels in the convolutional layers. Assuming the number of input channels and output channels are both n, we first arrange the input and output channels as an s-dimensional tensor [d1, d2, ..., ds].
∏_{i=1}^{s} d_i = n    (3)
Each output channel is only connected to its local neighbors in the tensor space rather than all input channels. The size of
(a) 2D Topology
Figure 3. Illustration of Spatial âBottleneckâ Framework
In this section, we introduce a spatial âbottleneckâ struc- ture that reduces the amount of computation without de- creasing either the spatial resolution or the number of chan- nels by exploiting the spatial redundancy of the input.
Consider the 3D input data I in RhÃwÃn, we ï¬rst apply a single intra-channel convolution to each input channel as was introduced in Section 2.2. A k à k kernel is convolved with each input channel with stride k, so that the output k à w dimension is reduced to R h k Ãn. Then a linear projection layer is applied. Finally, We perform a k à k intra-channel deconvolution with stride k to recover the spatial resolution. Figure. 3 illustrates the proposed spatial âbottleneckâ
(b) 3D Topology
Figure 2. 2D &3D topology for input and output.
the local neighborhood is deï¬ned by another s-dimensional tensor, [c1, c2, ..., cs], and the total number of neighbors for each output channel is
∏_{i=1}^{s} c_i = c.    (4)
The complexity of the proposed topologically subdivisioned convolutional layers relative to the standard convolutional layers can be simply measured by c/n. Figure 2 illustrates the 2D and 3D topological subdivisioning between the input channels and the output channels. A formal description of this layer is presented in Algorithm 3.

[Figure 2. 2D and 3D topology for input and output channels: (a) 2D topology; (b) 3D topology.]

Algorithm 3: Convolutional Layer with Topological Subdivisioning
Input: I ∈ R^{h×w×n}
Parameter: ∏_{i=1}^{s} d_i = n; c_i ≤ d_i, ∀i = 1...s; K ∈ R^{k×k×d1×...×ds×c1×...×cs}
Intermediate Data: Î ∈ R^{(h+k−1)×(w+k−1)×n}, Ĩ ∈ R^{(h+k−1)×(w+k−1)×d1×...×ds}
Output: O ∈ R^{h×w×d1×...×ds}

Î = zero-padding(I, (k−1)/2)
Rearrange Î to Ĩ
for y = 1 to h, x = 1 to w, j1 = 1 to d1, ..., js = 1 to ds do   // topological subdivisioning
    O(y, x, j1, ..., js) = Σ_{i1=1}^{c1} ... Σ_{is=1}^{cs} Σ_{u=1}^{k} Σ_{v=1}^{k} K(u, v, j1, ..., js, i1, ..., is) · Ĩ(y+u−1, x+v−1, (j1+i1−2)%d1+1, ..., (js+is−2)%ds+1)
end
When k = 1, the algorithm is suitable for the linear projection layer and can be directly embedded into Algorithm 2 to further reduce the complexity of the SIC layer.
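The connectivity pattern of Algorithm 3 can be made concrete with a small script that builds the binary input-output connection mask. The wrap-around neighborhoods follow our reading of the modulo indexing in Algorithm 3, and the example sizes are the stage-2 setting of model F from Table 4:

```python
import numpy as np

def topo_mask_2d(d1, d2, c1, c2):
    """Binary mask over (output, input) channel pairs for 2D topological
    subdivisioning: channels are arranged on a d1 x d2 grid and each output
    channel connects only to a c1 x c2 neighborhood, with wrap-around as in
    the modulo indexing of Algorithm 3 (a sketch under that reading)."""
    n = d1 * d2
    mask = np.zeros((n, n), dtype=bool)
    for o in range(n):
        o1, o2 = divmod(o, d2)            # position of output channel o
        for u in range(c1):
            for v in range(c2):
                i1 = (o1 + u) % d1        # neighboring input channel
                i2 = (o2 + v) % d2
                mask[o, i1 * d2 + i2] = True
    return mask

m = topo_mask_2d(8, 16, 4, 8)  # 128 channels, 4 x 8 neighborhoods (model F)
print(m.sum() / m.size)        # -> 0.25, i.e. a 4x complexity reduction
```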
# 2.4. Spatial "Bottleneck" Structure
In the design of traditional CNN models, there has always been a trade-off between the spatial dimensions and the number of channels. While high spatial resolution is necessary to preserve detailed local information, a large number of channels produces a high-dimensional feature space and learns more complex representations. The complexity of one convolutional layer is determined by the product of these two factors. To maintain an acceptable complexity, the spatial dimensions are reduced by max pooling or strided convolution while the number of channels is increased.
On the other hand, the adjacent pixels in the input of each convolutional layer are correlated, in a similar fashion to the image domain, especially when the spatial resolution is high. While reducing the resolution by simple sub-sampling obviously leads to a loss of information, this correlation presents considerable redundancy that can be taken advantage of.
Stage 1 (output 108^2), all models: (7, 64), stride 2
Stage 2 (output 36^2), all models: 3 × 3 max pooling, stride 3; (1, 128)
  A: (3, 128) × 2 | B: [3, 4, 128] × 2 | C: <3, 128> × 4 | D: <5, 128> × 4 | E: <3, 128> × 6
Stage 3 (output 18^2), all models: 2 × 2 max pooling, stride 2; (1, 256)
  A: (3, 256) × 2 | B: [3, 4, 256] × 2 | C: <3, 256> × 4 | D: <5, 256> × 4 | E: <3, 256> × 6
Stage 4 (output 6^2), all models: 3 × 3 max pooling, stride 3; (1, 512)
  A: (3, 512) × 2 | B: [3, 4, 512] × 2 | C: <3, 512> × 4 | D: <5, 512> × 4 | E: <3, 512> × 6
Output 1^2, all models: (1, 1024); 6 × 6 average pooling, stride 6; fully connected, 2048; fully connected, 1000; softmax

Table 1. Configurations of the baseline models and of models with the proposed SIC layers. For each convolutional layer, numbers in brackets represent its configuration: k denotes the kernel size and n the number of output channels. Different bracket types correspond to different convolutional layers: (k, n) is a standard convolutional layer, [k, b, n] denotes an unraveled convolutional layer with b filters per input channel, and <k, n> represents our SIC layer. The number after the brackets indicates how many times the layer is repeated in each stage.
[Figure 3. Illustration of the spatial "bottleneck" framework.]

In this section, we introduce a spatial "bottleneck" structure that reduces the amount of computation without decreasing either the spatial resolution or the number of channels, by exploiting the spatial redundancy of the input.

Consider the 3D input data I ∈ R^{h×w×n}. We first apply a single intra-channel convolution to each input channel as introduced in Section 2.2: a k × k kernel is convolved with each input channel with stride k, so that the output dimension is reduced to R^{(h/k)×(w/k)×n}. Then a linear projection layer is applied. Finally, we perform a k × k intra-channel deconvolution with stride k to recover the spatial resolution. Figure 3 illustrates the proposed spatial "bottleneck" framework. The spatial resolution of the data is first reduced, then expanded, forming a bottleneck structure. In this 3-phase structure, the linear projection phase, which consumes most of the computation, is k^2 times more efficient than a plain linear projection on the original input. The intra-channel convolution and deconvolution phases learn to capture the local correlation of adjacent pixels while maintaining the spatial resolution of the output.
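A minimal PyTorch sketch of the three phases follows; the module name is ours, and the depthwise (groups = n) framing is an assumed translation of the paper's intra-channel convolution and deconvolution:

```python
import torch.nn as nn

class SpatialBottleneck(nn.Module):
    """Spatial 'bottleneck': a depthwise k x k convolution with stride k
    shrinks the resolution k-fold, the 1x1 projection then runs on k^2-fold
    fewer pixels, and a depthwise k x k transposed convolution with stride k
    restores the original h x w resolution (h, w divisible by k)."""
    def __init__(self, n, k=2):
        super().__init__()
        self.down = nn.Conv2d(n, n, kernel_size=k, stride=k,
                              groups=n, bias=False)        # intra-channel conv
        self.proj = nn.Conv2d(n, n, kernel_size=1, bias=False)  # projection
        self.up = nn.ConvTranspose2d(n, n, kernel_size=k, stride=k,
                                     groups=n, bias=False)  # intra-channel deconv

    def forward(self, x):
        return self.up(self.proj(self.down(x)))
```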
Stage | Intra-channel Convolution | Linear Projection
2 | 6.6% | 93.4%
3 | 3.4% | 96.6%
4 | 1.7% | 98.3%

Table 2. Distribution of the computation in the SIC layers of Model C. The intra-channel convolution generally consumes less than 10% of the total computation, and its proportion decreases as the number of channels increases.
# 3. Experiments
We evaluate the performance of our method on the ImageNet LSVRC 2012 dataset, which contains 1000 categories, with 1.2M training images, 50K validation images, and 100K test images. We use Torch to train the CNN models in our framework; our method is implemented with CUDA and Lua on the Torch platform. The images are first resized to 256 × 256, then randomly cropped to 221 × 221 and flipped horizontally during training. Batch normalization [3] is placed after each convolutional layer and before the ReLU layer. We also adopt the dropout [15] strategy with a ratio of 0.2 during training. Standard stochastic gradient descent with mini-batches of 256 images is used to train the models. We start the learning rate at 0.1 and divide it by a factor of 10 every 30 epochs; each model is trained for 100 epochs. For batch normalization, we use an exponential moving average to calculate the batch statistics, as implemented in CuDNN [16]. The code is run on a server with 4 Pascal Titan X GPUs. For all the models evaluated below, the top-1 and top-5 errors on the validation set with central cropping are reported.
We evaluate the performance and efficiency of a series of models designed using the proposed efficient convolutional layers. To make cross-referencing easier and help the reader keep track of all the models, each model is indexed with a capital letter.
We compare our method with a baseline CNN model built from standard convolutional layers. The details of the baseline models are given in Table 1. The convolutional layers are divided into stages according to their spatial dimensions. Inside each stage, the convolutions are performed with padding so that the output has the same spatial dimensions as the input. Across stages, the spatial dimensions are reduced by max pooling and the number of channels is doubled by a 1 × 1 convolutional layer. One fully connected layer with dropout is added before the logistic regression layer for final classification. Residual learning is added after every convolutional layer with the same number of input and output channels.

We evaluate the performance of our method by substituting the standard convolutional layers in the baseline models with the proposed Single Intra-Channel Convolutional (SIC) layers. We leave the 7 × 7 convolutional layer in the first stage and the 1 × 1 convolutional layers across stages unchanged, and only substitute the 3 × 3 convolutional layers.
Model | kernel size | # layers per stage | Top-1 err. | Top-5 err. | Complexity
A | 3 | 2 | 30.67% | 11.24% | 1
B | 3 | 2 | 30.69% | 11.27% | ~4/9
C | 3 | 4 | 29.78% | 10.78% | ~2/9
D | 5 | 4 | 29.23% | 10.48% | ~2/9
E | 3 | 6 | 28.83% | 9.88% | ~1/3

Table 3. Top-1 & top-5 error and complexity per stage of models A to E. The models with the proposed design (models C, D, E) demonstrate a significantly better accuracy/complexity ratio than the baseline model.
In the following sections, the relative complexities are also measured with respect to these layers.
# 3.1. Single Intra-Channel Convolutional Layer
We first substitute the standard convolutional layer with the unraveled convolution configuration in model B: each input channel is convolved with 4 filters, so that the complexity of B is approximately 4/9 of the baseline model A. In model C, we use two SIC layers to replace one standard convolutional layer. Even though model C has more layers than the baseline model A, its complexity is only 2/9 of model A. In model E, we increase the number of SIC layers from 4 (as in model C) to 6; the complexity of model E is still only 1/3 of the baseline. Owing to the extremely low complexity of the SIC layer, we can easily increase the model depth without much increase in computation. Table 2 lists the distribution of computation between the intra-channel convolution and the linear channel projection in each SIC layer of model C. The intra-channel convolution generally consumes less than 10% of the total layer computation. Thanks to this property, we can use a larger kernel size with only a small sacrifice in efficiency: model D is obtained by setting the kernel size of model C to 5.

Table 3 lists the top-1 and top-5 errors and the complexities of models A to E. Comparing models B and A: with the same number of layers, model B matches the accuracy of model A with less than half the computation. Comparing the SIC-based model C with model B: model C reduces the top-1 error by 1% at half the complexity, which verifies the superior efficiency of the proposed SIC layer. With 5 × 5 kernels, model D obtains a 0.5% accuracy gain over model C with as little as a 5% increase in complexity on average. This demonstrates that increasing the kernel size of the SIC layer offers another way to improve the accuracy/complexity ratio.
# 3.2. Topological Subdivisioning
We first compare the performance of two different topological configurations against the baseline model. Model F adopts the 2D topology with ci = di/2 for both dimensions, which reduces the complexity by a factor of 4. In model G, we use the 3D topology and set ci and di so that the complexity is reduced by a factor of 4.27. The details of the network configurations are listed in Table 4. The number of topological layers is twice the number of standard convolutional layers in the baseline model, so the overall complexity per stage is reduced by a factor of 2.
Stage | 2 | 3 | 4
#Channels | 128 | 256 | 512
2D topology, d1 × d2 | 8 × 16 | 16 × 16 | 16 × 32
2D topology, c1 × c2 | 4 × 8 | 8 × 8 | 8 × 16
3D topology, d1 × d2 × d3 | 4 × 8 × 4 | 8 × 8 × 4 | 8 × 8 × 8
3D topology, c1 × c2 × c3 | 2 × 5 × 3 | 4 × 5 × 3 | 4 × 5 × 6

Table 4. Configurations of models F and G, which use 2D and 3D topological subdivisioning. di and ci stand for the tensor and neighborhood dimensions in Algorithm 3. They are chosen so that the complexity is reduced by (approximately, for 3D) a factor of 4.
As a comparison, we also train a model H using the straightforward grouping strategy introduced in [1]. Both the input and output channels are divided into 4 groups, and the output channels in each group depend only on the input channels of the corresponding group; the complexity is likewise reduced 4 times. Table 5 lists the top-1 & top-5 error rates and complexities of models F to H. Both the 2D and the 3D topology models outperform the grouping method with a lower error rate at the same complexity. Compared with the baseline model, both topology models achieve similar top-1 and top-5 error rates with half the computation.

Finally, we apply the topological subdivisioning to the SIC layer in model I, choosing the 2D topology based on the results in Table 5. In model I, there are 8 convolutional layers per stage, due to the layer doubling caused by both the SIC layer and the topological subdivisioning. The complexity of each layer is, however, approximately as low as 1/36 of a standard 3 × 3 convolutional layer. Compared to the baseline model, 2D topology together with the SIC layer achieves a similar error rate while being 9 times faster.
# 3.3. Spatial "Bottleneck" Structure
In our evaluation of layers with the spatial "bottleneck" structure, both the kernel size and the stride of the intra-channel convolution and deconvolution are set to 2. The complexity of such a configuration is a quarter of that of a SIC layer.
Model | Method | Top-1 err. | Top-5 err. | Complexity
A | Baseline | 30.67% | 11.24% | 1
H | Grouping | 31.23% | 11.73% | ~1/2
F | 2D Topology | 30.53% | 11.28% | ~1/2
G | 3D Topology | 30.69% | 11.38% | ~15/32
I | SIC + 2D Topology | 30.78% | 11.29% | ~1/9

Table 5. Top-1 & top-5 error rates and complexities of the topology models and the grouping model.
Both model J and model K are modified from model C by replacing SIC layers with spatial "bottleneck" layers. One SIC layer is substituted with two spatial "bottleneck" layers, the first with no padding and the second with one-pixel padding, leading to a 50% complexity reduction. In model J, every other SIC layer is substituted; in model K, all SIC layers are substituted. Table 6 compares their performance with the baseline model and the SIC-based model. Compared to the SIC model C, model J reduces the complexity by 25% with no loss of accuracy, and model K reduces the complexity by 50% with a slight drop in accuracy. Compared to the baseline model A, model K achieves a 9× speedup with similar accuracy.
Model | #layers | Top-1 err. | Top-5 err. | Complexity
A | 2 | 30.67% | 11.24% | 1
C | 4 | 29.78% | 10.78% | ~2/9
J | 6 | 29.72% | 10.66% | ~1/6
K | 8 | 30.78% | 11.34% | ~1/9

Table 6. Top-1 & top-5 error rates and complexity of the SIC layer with the spatial "bottleneck" structure.
# 3.4. Comparison with standard CNN models
In this section, we increase the depth of our models to compare with recent state-of-the-art CNN models. To go deeper without increasing the complexity too much, we adopt a channel-wise bottleneck structure similar to the one introduced in [5]. In each channel-wise bottleneck structure, the number of channels is first halved by the first layer and then recovered by the second layer. Such a two-layer bottleneck structure has almost the same complexity as a single layer with the same input and output channels, and thus increases the overall depth of the network; a sketch of the idea is given below.
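Here is a minimal PyTorch rendering of the channel-wise bottleneck; for brevity it uses plain 1 × 1 projections, whereas the actual models apply the reduction around SIC layers, so treat the details as our assumptions:

```python
import torch.nn as nn

class ChannelwiseBottleneck(nn.Module):
    """Two-layer channel-wise bottleneck (cf. [5]): halve the channels,
    then recover them.  The multiply count, n*(n/2) + (n/2)*n = n^2, matches
    one n-to-n layer while doubling the depth."""
    def __init__(self, n):
        super().__init__()
        self.reduce = nn.Conv2d(n, n // 2, kernel_size=1, bias=False)
        self.expand = nn.Conv2d(n // 2, n, kernel_size=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.expand(self.relu(self.reduce(x))))
```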
We gradually increase the number of SIC layers with the channel-wise bottleneck structure in each stage from 8 to 40, and compare the resulting complexity to recent CNN models of similar accuracy. Models L, M, N and O correspond to 8, 12, 24, and 40 layers, respectively. Due to training memory limitations, only the SIC layer is used in the models of this section. While models L and M have the same spatial dimensions and stage structure as in Table 1, models N and O adopt the same structure as in [5]: they have different pooling strides and one more stage right after the first 7 × 7 convolutional layer. The detailed model configurations are provided in the supplemental materials.
[Figure 4. Comparing top-1 accuracy and complexity (number of multiplications, 10^6) between our models L, M, N, O and previous work (AlexNet, GoogleNet, ResNet-18/34/50/101).]
Figure 4 and Table 7 compare the accuracy and complexity of our models L to O with several previous works; Figure 4 provides a visual comparison in the form of a scatter plot, with the red marks representing our models. All of our models demonstrate remarkably lower complexity while being equally accurate. Compared to the VGG, ResNet-34, ResNet-50 and ResNet-101 models, our models are 42×, 7.3×, 4.5× and 6.5× more efficient, respectively, with similar or lower top-1 or top-5 error.
# 3.5. Visualization of filters
Given the exceptionally good performance of the proposed methods, one might wonder what type of kernels are actually learned and how they compare with those in traditional convolutional layers. We randomly chose kernels from the single intra-channel convolutional layers and from the traditional convolutional layers, and visualize them side by side in Figure 5 for an intuitive comparison. Both 3 × 3 and 5 × 5 kernels are shown. The kernels learned by the proposed method exhibit a much higher level of regularized structure, while the kernels in standard convolutional layers appear more random. We attribute this to the stronger regularization caused by the reduction in the number of filters.
# 3.6. Discussion on implementation details
In both the SIC layer and the spatial "bottleneck" structure, most of the computation is consumed by the linear channel projection, which is essentially a matrix multiplication. The 2D spatial convolution in each channel has a complexity similar to that of a max pooling layer; memory access dominates its running time due to the low amount of computation. The efficiency of our CUDA-based implementation is similar to that of open-source libraries like Caffe and Torch. We believe higher efficiency can easily be achieved with an expert-level GPU
Model | Top-1 err. | Top-5 err. | # of Multiplications
AlexNet | 42.5% | 18.2% | 725M
GoogleNet | 31.5% | 10.07% | 1600M
ResNet-18 | 30.43% | 10.76% | 1800M
VGG | 28.5% | 9.9% | 16000M
Our Model L | 28.29% | 9.9% | 381M
ResNet-34 | 26.73% | 8.74% | 3600M
Our Model M | 27.07% | 8.93% | 490M
ResNet-50 | 24.7% | 7.8% | 3800M
Our Model N | 24.76% | 7.58% | 845M
ResNet-101 | 23.6% | 7.1% | 7600M
Our Model O | 23.99% | 7.12% | 1172M

Table 7. Top-1 and top-5 error rates of single-crop testing with a single model, and the number of multiplications, for our models and several previous works. The numbers in this table are generated with a single model and center crop. For AlexNet and GoogLeNet, the top-1 error is missing in the original papers and we use the numbers from Caffe's implementation [17]. For ResNet-34, we use the number from Facebook's implementation [18].
[Figure 5. Visualization of convolutional kernels: (a) 3 × 3 standard convolutional layer; (b) 3 × 3 single intra-channel convolutional layer; (c) 5 × 5 standard convolutional layer; (d) 5 × 5 single intra-channel convolutional layer. We compare the 3 × 3 and 5 × 5 kernels learned by the proposed single intra-channel convolutional layer and by the standard convolutional layer. The kernels from single intra-channel convolution exhibit a higher level of regularity in their structure.]
implementation like the one in CuDNN. The topological subdivisioning layer resembles the structure of 2D and 3D convolutions. Unlike sparsity-based methods, the regular connection pattern of topological subdivisioning makes an efficient implementation possible; currently, our implementation simply discards all the non-connected weights in a convolutional layer.
# 4. Conclusion
This work introduces a novel design of efficient convolutional layers in deep CNNs that involves three specific improvements: (i) a single intra-channel convolutional (SIC) layer; (ii) a topological subdivisioning scheme; and (iii) a spatial "bottleneck" structure. As demonstrated, each is a powerful scheme in its own way, and together they yield a new design of the convolutional layer with higher efficiency at equal or better accuracy compared to classical designs. While the numbers of input and output channels remain the same as in the classical models, both the convolutions and the number of connections can be optimized against accuracy in our model: (i) reduces complexity by unraveling the convolution, (ii) uses topology to make the connections in the convolutional layer sparse while maintaining local regularity, and (iii) uses a conv-deconv bottleneck to reduce convolution while maintaining resolution. Although CNNs have been exceptionally successful in terms of recognition accuracy, it is still not clear what architecture is optimal and learns visual information most effectively. The methods presented herein approach this question by focusing on improving the efficiency of the convolutional layer. We believe this work will inspire more comprehensive studies in the direction of optimizing convolutional layers in deep CNNs.
# References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.

[2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

[3] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

[4] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248-255. IEEE, 2009.

[7] Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, 2014.

[8] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proc. BMVC, 2014.

[9] Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1984-1992, 2015.

[10] Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with low-rank filters for efficient image classification. arXiv preprint arXiv:1511.06744, 2015.

[11] Cheng Tai, Tong Xiao, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015.

[12] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 806-814, 2015.

[13] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2015.

[14] Aapo Hyvärinen, Patrik Hoyer, and Mika Inki. Topographic independent component analysis. Neural Computation, 13(7):1527-1558, 2001.

[15] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

[16] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.

[17] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675-678. ACM, 2014.

[18] Sam Gross and Michael Wilber. ResNet training in Torch. https://github.com/charlespwd/project-title, 2016.
"id": "1502.03167"
} |
1608.03983 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503
# SGDR: STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS
Ilya Loshchilov & Frank Hutter University of Freiburg Freiburg, Germany, {ilya,fh}@cs.uni-freiburg.de
# ABSTRACT
Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
# 1 INTRODUCTION
Deep neural networks (DNNs) are currently the best-performing method for many classification problems, such as object recognition from images (Krizhevsky et al., 2012a; Donahue et al., 2014) or speech recognition from audio data (Deng et al., 2013). Their training on large datasets (where DNNs perform particularly well) is the main computational bottleneck: it often requires several days, even on high-performance GPUs, and any speedups would be of substantial value.

The training of a DNN with n free parameters can be formulated as the problem of minimizing a function f : R^n → R. The commonly used procedure to optimize f is to iteratively adjust x_t ∈ R^n (the parameter vector at time step t) using gradient information ∇f_t(x_t) obtained on a relatively small t-th batch of b datapoints. The Stochastic Gradient Descent (SGD) procedure then becomes an extension of the Gradient Descent (GD) to stochastic optimization of f as follows:
x_{t+1} = x_t − η_t ∇f_t(x_t),    (1)
where η_t is a learning rate. One would like to consider second-order information,

x_{t+1} = x_t − η_t H_t^{−1} ∇f_t(x_t),    (2)

but this is often infeasible since the computation and storage of the inverse Hessian H_t^{−1} is intractable for large n. The usual way to deal with this problem, using limited-memory quasi-Newton methods such as L-BFGS (Liu & Nocedal, 1989), is not currently in favor in deep learning, not least due to (i) the stochasticity of ∇f_t(x_t), (ii) ill-conditioning of f and (iii) the presence of saddle points as a result of the hierarchical geometric structure of the parameter space (Fukumizu & Amari, 2000). Despite some recent progress in understanding and addressing the latter problems (Bordes et al., 2009; Dauphin et al., 2014; Choromanska et al., 2014; Dauphin et al., 2015), state-of-the-art optimization techniques attempt to approximate the inverse Hessian in a reduced way, e.g., by considering only its diagonal to achieve adaptive learning rates. AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014) are notable examples of such methods.
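To illustrate what such a diagonal approximation looks like in practice, here is a minimal per-coordinate sketch of the Adam update (Kingma & Ba, 2014); the hyperparameter defaults follow that paper:

```python
def adam_step(x, m, v, g, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on a single coordinate: the second-moment estimate v
    acts as a diagonal preconditioner, a cheap stand-in for the inverse
    Hessian of eq. (2).  Minimal illustrative sketch, not a full optimizer."""
    m = b1 * m + (1 - b1) * g       # biased first-moment estimate
    v = b2 * v + (1 - b2) * g * g   # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)       # bias corrections (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    x = x - lr * m_hat / (v_hat ** 0.5 + eps)
    return x, m, v
```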
[Figure 1: learning rate η_t over epochs for the schedules described in the caption below.]
Figure 1: Alternative schedules of the learning rate η_t over batch index t: default schedules with η_0 = 0.1 (blue line) and η_0 = 0.05 (red line) as used by Zagoruyko & Komodakis (2016); warm restarts simulated every T_0 = 50 (green line), T_0 = 100 (black line) and T_0 = 200 (grey line) epochs, with η_t decaying during the i-th run from η^i_max = 0.05 to η^i_min = 0 according to eq. (5); warm restarts starting from epoch T_0 = 1 (dark green line) and T_0 = 10 (magenta line) with doubling (T_mult = 2) periods T_i at every new warm restart.
Intriguingly enough, the current state-of-the-art results on the CIFAR-10, CIFAR-100, SVHN, ImageNet, PASCAL VOC and MS COCO datasets were obtained by Residual Neural Networks (He et al., 2015; Huang et al., 2016c; He et al., 2016; Zagoruyko & Komodakis, 2016) trained without the use of advanced methods such as AdaDelta and Adam. Instead, they simply use SGD with momentum^1:
v_{t+1} = μ_t v_t − η_t ∇f_t(x_t),    (3)
x_{t+1} = x_t + v_{t+1},    (4)
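In code, one momentum update of eq. (3)-(4) is simply:

```python
def sgd_momentum_step(x, v, grad, lr, mu=0.9):
    """One update of eq. (3)-(4): classical (heavy-ball) momentum on a
    parameter vector x with velocity v; works on floats or numpy arrays.
    The WRN baselines of Section 4 use the Nesterov variant (footnote 1)."""
    v = mu * v - lr * grad   # eq. (3)
    x = x + v                # eq. (4)
    return x, v
```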
where v_t is a velocity vector initially set to 0, η_t is a decreasing learning rate and μ_t is a momentum rate which defines the trade-off between the current and past observations of ∇f_t(x_t). The main difficulty in training a DNN is then associated with the scheduling of the learning rate and the amount of L2 weight decay regularization employed. A common learning rate schedule is to use a constant learning rate and divide it by a fixed constant in (approximately) regular intervals. The blue line in Figure 1 shows an example of such a schedule, as used by Zagoruyko & Komodakis (2016) to obtain the state-of-the-art results on the CIFAR-10, CIFAR-100 and SVHN datasets.

In this paper, we propose to periodically simulate warm restarts of SGD, where in each restart the learning rate is initialized to some value and is scheduled to decrease. Four different instantiations of this new learning rate schedule are visualized in Figure 1. Our empirical results suggest that SGD with warm restarts requires 2× to 4× fewer epochs than the currently-used learning rate schedule schemes to achieve comparable or even better results. Furthermore, combining the networks obtained right before restarts in an ensemble following the approach proposed by Huang et al. (2016a) improves our results further to 3.14% for CIFAR-10 and 16.21% for CIFAR-100. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset.
1 More specifically, they employ Nesterov's momentum (Nesterov, 1983; 2013).
2 RELATED WORK
2.1 RESTARTS IN GRADIENT-FREE OPTIMIZATION
When optimizing multimodal functions one may want to find all global and local optima. The tractability of this task depends on the landscape of the function at hand and the budget of function evaluations. Gradient-free optimization approaches based on niching methods (Preuss, 2015) can usually deal with this task by covering the search space with dynamically allocated niches of local optimizers. However, these methods usually work only for relatively small search spaces, e.g., n < 10, and do not scale up due to the curse of dimensionality (Preuss, 2010). Instead, the current state-of-the-art gradient-free optimizers employ various restart mechanisms (Hansen, 2009; Loshchilov et al., 2012). One way to deal with multimodal functions is to iteratively sample a large number λ of candidate solutions, make a step towards better solutions and slowly shape the sampling distribution to maximize the likelihood of successful steps appearing again (Hansen & Kern, 2004). The larger the λ, the more global the search, requiring more function evaluations. In order to achieve good anytime performance, it is common to start with a small λ and increase it (e.g., by doubling) after each restart. This approach works best on multimodal functions with a global funnel structure and also improves the results on ill-conditioned problems where numerical issues might lead to premature convergence when λ is small (Hansen, 2009).
2.2 RESTARTS IN GRADIENT-BASED OPTIMIZATION
Gradient-based optimization algorithms such as BFGS can also perform restarts to deal with multimodal functions (Ros, 2009). In large-scale settings, when the usual number of variables n is on the order of 10^3 to 10^9, the availability of gradient information provides a speedup by a factor of n w.r.t. gradient-free approaches. Warm restarts are usually employed to improve the convergence rate rather than to deal with multimodality: often it is sufficient to approach any local optimum to a given precision, and in many cases the problem at hand is unimodal. Fletcher & Reeves (1964) proposed to flush the history of the conjugate gradient method every n or (n + 1) iterations. Powell (1977) proposed to check whether enough orthogonality between ∇f(x_{t−1}) and ∇f(x_t) has been lost to warrant another warm restart. Recently, O'Donoghue & Candes (2012) noted that the iterates of the accelerated gradient schemes proposed by Nesterov (1983; 2013) exhibit a periodic behavior if momentum is overused. The period of the oscillations is proportional to the square root of the local condition number of the (smooth convex) objective function. The authors showed that fixed warm restarts of the algorithm with a period proportional to the condition number achieve the optimal linear convergence rate of the original accelerated gradient scheme. Since the condition number is an unknown parameter and its value may vary during the search, they proposed two adaptive warm restart techniques (O'Donoghue & Candes, 2012):
• The function scheme restarts whenever the objective function increases.

• The gradient scheme restarts whenever the angle between the momentum term and the negative gradient is obtuse, i.e., when the momentum seems to be taking us in a bad direction, as measured by the negative gradient at that point. This scheme resembles the one of Powell (1977) for the conjugate gradient method.
OâDonoghue & Candes (2012) showed (and it was conï¬rmed in a set of follow-up works) that these simple schemes provide an acceleration on smooth functions and can be adjusted to accelerate state- of-the-art methods such as FISTA on nonsmooth functions.
Smith (2015; 2016) recently introduced cyclical learning rates for deep learning; his approach is closely related to ours in spirit and formulation, but does not focus on restarts.
Yang & Lin (2015) showed that Stochastic subGradient Descent with restarts can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems where the epigraph of the objective function is a polyhedron. In contrast to our work, they never increase the learning rate to perform restarts but decrease it geometrically at each epoch. To perform restarts, they periodically reset the current solution to the averaged solution from the previous epoch.
3
Published as a conference paper at ICLR 2017
# 3 STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS (SGDR)
The existing restart techniques can also be used for stochastic gradient descent if the stochasticity is taken into account. Since gradients and loss values can vary widely from one batch of the data to another, one should denoise the incoming information: by considering averaged gradients and losses, e.g., once per epoch, the above-mentioned restart techniques can be used again.
In this work, we consider one of the simplest warm restart approaches. We simulate a new warm-started run / restart of SGD once T_i epochs are performed, where i is the index of the run. Importantly, the restarts are not performed from scratch but emulated by increasing the learning rate η_t while the old value of x_t is used as an initial solution. The amount of this increase controls to what extent the previously acquired information (e.g., momentum) is used.
Within the i-th run, we decay the learning rate with a cosine annealing for each batch as follows:
η_t = η^i_min + (1/2)(η^i_max − η^i_min)(1 + cos(π T_cur / T_i)),    (5)
where η^i_min and η^i_max are ranges for the learning rate, and T_cur accounts for how many epochs have been performed since the last restart. Since T_cur is updated at each batch iteration t, it can take discretized values such as 0.1, 0.2, etc. Thus, η_t = η^i_max when t = 0 and T_cur = 0. Once T_cur = T_i, the cosine function outputs −1 and thus η_t = η^i_min. The decrease of the learning rate is shown in Figure 1 for fixed T_i = 50, T_i = 100 and T_i = 200; note that the logarithmic axis obfuscates the typical shape of the cosine function.
In order to improve anytime performance, we suggest an option to start with an initially small T_i and increase it by a factor of T_mult at every restart (see, e.g., Figure 1 for T_0 = 1, T_mult = 2 and T_0 = 10, T_mult = 2). It might be of great interest to decrease η^i_max and η^i_min at every new restart. However, for the sake of simplicity, here we keep them the same for every i to reduce the number of hyperparameters involved.
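The schedule of eq. (5) together with the warm restart bookkeeping can be sketched in a few lines of Python; for readability the sketch advances T_cur once per epoch, whereas in our experiments it is updated at every batch:

```python
import math

def sgdr_lr(t_cur, T_i, eta_min=0.0, eta_max=0.05):
    """Eq. (5): cosine annealing within the i-th run."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / T_i))

def sgdr_schedule(epochs, T_0=10, T_mult=2, eta_min=0.0, eta_max=0.05):
    """Learning rate per epoch with warm restarts; the period T_i grows by
    T_mult at every restart.  Epoch-level sketch of the schedule only."""
    lrs, T_i, t_cur = [], T_0, 0
    for _ in range(epochs):
        lrs.append(sgdr_lr(t_cur, T_i, eta_min, eta_max))
        t_cur += 1
        if t_cur >= T_i:          # warm restart: reset T_cur, grow the period
            t_cur, T_i = 0, T_i * T_mult
    return lrs

lrs = sgdr_schedule(150, T_0=10, T_mult=2)  # restarts after epochs 10, 30, 70
```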
Since our simulated warm restarts (the increase of the learning rate) often temporarily worsen performance, we do not always use the last x_t as our recommendation for the best solution (also called the incumbent solution). While our recommendation during the first run (before the first restart) is indeed the last x_t, our recommendation after this is a solution obtained at the end of the last performed run at η_t = η^i_min. We emphasize that with the help of this strategy, our method does not require a separate validation data set to determine a recommendation.
# 4 EXPERIMENTAL RESULTS
4.1 EXPERIMENTAL SETTINGS
We consider the problem of training Wide Residual Neural Networks (WRNs; see Zagoruyko & Komodakis (2016) for details) on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009). We will use the abbreviation WRN-d-k to denote a WRN with depth d and width k. Zagoruyko & Komodakis (2016) obtained the best results with a WRN-28-10 architecture, i.e., a Residual Neural Network with d = 28 layers and k = 10 times more filters per layer than used in the original Residual Neural Networks (He et al., 2015; 2016).

The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 32×32 color images drawn from 10 and 100 classes, respectively, split into 50,000 train and 10,000 test images. For image preprocessing Zagoruyko & Komodakis (2016) performed global contrast normalization and ZCA whitening. For data augmentation they performed horizontal flips and random crops from the image padded by 4 pixels on each side, filling missing pixels with reflections of the original image.

For training, Zagoruyko & Komodakis (2016) used SGD with Nesterov's momentum with the initial learning rate set to η_0 = 0.1, weight decay to 0.0005, dampening to 0, momentum to 0.9 and minibatch size to 128. The learning rate is dropped by a factor of 0.2 at 60, 120 and 160 epochs, with a total budget of 200 epochs. We reproduce the results of Zagoruyko & Komodakis (2016) with the same settings except that i) we subtract the per-pixel mean only and do not use ZCA whitening; ii) we use SGD with momentum as described by eq. (3-4) and not Nesterov's momentum.
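For concreteness, a PyTorch rendering of this baseline setup might look as follows; the original pipelines are written in Torch, so the API here is an assumed translation:

```python
import torch

model = torch.nn.Linear(3 * 32 * 32, 10)  # stand-in for the actual WRN-28-10
# baseline training setup described above, translated to PyTorch
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            dampening=0, weight_decay=5e-4,
                            nesterov=False)  # eq. (3)-(4), not Nesterov
# default schedule: multiply the learning rate by 0.2 at epochs 60, 120, 160
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[60, 120, 160], gamma=0.2)
```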
[Figure 2: panels of test error (%) over epochs for WRN-28-10 on CIFAR-10 (left column) and CIFAR-100 (right column), with zoomed views in the middle row and WRN-28-20 in the bottom row.]
Figure 2: Test errors on the CIFAR-10 (left column) and CIFAR-100 (right column) datasets. Note that for SGDR we only plot the recommended solutions. The top and middle rows show the same results on WRN-28-10, with the middle row zooming into the good performance region of low test error. The bottom row shows performance with a wider network, WRN-28-20. The results of the default learning rate schedules of Zagoruyko & Komodakis (2016) with η_0 = 0.1 and η_0 = 0.05 are depicted by the blue and red lines, respectively. The schedules of η_t used in SGDR are shown with i) restarts every T_0 = 50 epochs (green line); ii) restarts every T_0 = 100 epochs (black line); iii) restarts every T_0 = 200 epochs (gray line); iv) restarts with doubling (T_mult = 2) periods of restarts starting from the first epoch (T_0 = 1, dark green line); and v) restarts with doubling (T_mult = 2) periods of restarts starting from the tenth epoch (T_0 = 10, magenta line).
The schedule of η_t used by Zagoruyko & Komodakis (2016) is depicted by the blue line in Figure 1. The same schedule but with η_0 = 0.05 is depicted by the red line. The schedule of η_t used in SGDR is also shown in Figure 1, with two settings of the initial period T_0 and with the restart periods doubling at every restart.
Method | depth-k | # params | # runs | CIFAR-10 | CIFAR-100
original-ResNet (He et al., 2015) | 110 | 1.7M | mean of 5 | 6.43 | 25.16
original-ResNet (He et al., 2015) | 1202 | 10.2M | mean of 5 | 7.93 | 27.82
stoc-depth (Huang et al., 2016c) | 110 | 1.7M | 1 run | 5.23 | 24.58
stoc-depth (Huang et al., 2016c) | 1202 | 10.2M | 1 run | 4.91 | n/a
pre-act-ResNet (He et al., 2016) | 110 | 1.7M | med. of 5 | 6.37 | n/a
pre-act-ResNet (He et al., 2016) | 164 | 1.7M | med. of 5 | 5.46 | 24.33
pre-act-ResNet (He et al., 2016) | 1001 | 10.2M | med. of 5 | 4.62 | 22.71
WRN (Zagoruyko & Komodakis, 2016) | 16-8 | 11.0M | 1 run | 4.81 | 22.07
WRN (Zagoruyko & Komodakis, 2016) | 28-10 | 36.5M | 1 run | 4.17 | 20.50
WRN (Zagoruyko & Komodakis, 2016) | 28-10 | 36.5M | 1 run | n/a | 20.04
ours, default (η_0 = 0.1) | 28-10 | 36.5M | med. of 5 | 4.24 | 20.33
ours, default (η_0 = 0.05) | 28-10 | 36.5M | med. of 5 | 4.13 | 20.21
ours, SGDR T_0 = 50, T_mult = 1 | 28-10 | 36.5M | med. of 5 | 4.17 | 19.99
ours, SGDR T_0 = 100, T_mult = 1 | 28-10 | 36.5M | med. of 5 | 4.07 | 19.87
ours, SGDR T_0 = 200, T_mult = 1 | 28-10 | 36.5M | med. of 5 | 3.86 | 19.98
ours, SGDR T_0 = 1, T_mult = 2 | 28-10 | 36.5M | med. of 5 | 4.09 | 19.74
ours, SGDR T_0 = 10, T_mult = 2 | 28-10 | 36.5M | med. of 5 | 4.03 | 19.58
ours, default (η_0 = 0.1) | 28-20 | 145.8M | med. of 2 | 4.08 | 19.53
ours, default (η_0 = 0.05) | 28-20 | 145.8M | med. of 2 | 3.96 | 19.67
ours, SGDR T_0 = 50, T_mult = 1 | 28-20 | 145.8M | med. of 2 | 4.01 | 19.28
ours, SGDR T_0 = 100, T_mult = 1 | 28-20 | 145.8M | med. of 2 | 3.77 | 19.24
ours, SGDR T_0 = 200, T_mult = 1 | 28-20 | 145.8M | med. of 2 | 3.66 | 19.69
ours, SGDR T_0 = 1, T_mult = 2 | 28-20 | 145.8M | med. of 2 | 3.91 | 18.90
ours, SGDR T_0 = 10, T_mult = 2 | 28-20 | 145.8M | med. of 2 | 3.74 | 18.70

Table 1: Test errors of different methods on CIFAR-10 and CIFAR-100 with moderate data augmentation (flip/translation). In the second column k is the widening factor for WRNs. Note that the computational and memory resources used to train all WRN-28-10 models are the same. In all other cases they differ, but WRNs are usually faster than original ResNets to achieve the same accuracy (e.g., up to a factor of 8 according to Zagoruyko & Komodakis (2016)).
4.2 SINGLE-MODEL RESULTS
Table 1 shows that our experiments reproduce the results given by Zagoruyko & Komodakis (2016) for WRN-28-10 on both CIFAR-10 and CIFAR-100. These "default" experiments with η_0 = 0.1 and η_0 = 0.05 correspond to the blue and red lines in Figure 2. The results for η_0 = 0.05 show better performance, and therefore we use η_0 = 0.05 in our later experiments.
SGDR with T_0 = 50, T_0 = 100 and T_0 = 200 for T_mult = 1 performs warm restarts every 50, 100 and 200 epochs, respectively. A single run of SGD with the schedule given by eq. (5) for T_0 = 200 shows the best results, suggesting that the original schedule of WRNs might be suboptimal w.r.t. the test error in these settings. However, the same setting with T_0 = 200 leads to the worst anytime performance except for the very last epochs.

SGDR with T_0 = 1, T_mult = 2 and T_0 = 10, T_mult = 2 performs its first restart after 1 and 10 epochs, respectively. Then, it doubles the maximum number of epochs for every new restart. The main purpose of this doubling is to reach good test error as soon as possible, i.e., to achieve good anytime performance. Figure 2 shows that this is achieved and that test errors around 4% on CIFAR-10 and around 20% on CIFAR-100 can be obtained about 2-4 times faster than with the default schedule used by Zagoruyko & Komodakis (2016).
[Figure 3: heat maps of the median test error (%) of ensembles on CIFAR-10 (left) and CIFAR-100 (right) as a function of the number of runs N and the number of snapshots per run M; the single-model cells show 4.03% (CIFAR-10) and 19.57% (CIFAR-100).]
Figure 3: Test errors of ensemble models built from N runs of SGDR on WRN-28-10, with M model snapshots per run made at epochs 150, 70 and 30 (right before the warm restarts of SGDR, as suggested by Huang et al. (2016a)). When M = 1 (respectively, M = 2), we aggregate the probabilities of the softmax layers of snapshot models at epoch index 150 (respectively, at epoch indexes 150 and 70).
Ensemble | CIFAR-10 | CIFAR-100
N = 1 run of WRN-28-10 with M = 1 snapshot (median of 16 runs) | 4.03 | 19.57
N = 1 run of WRN-28-10 with M = 3 snapshots per run | 3.51 | 17.75
N = 3 runs of WRN-28-10 with M = 3 snapshots per run | 3.25 | 16.64
N = 16 runs of WRN-28-10 with M = 3 snapshots per run | 3.14 | 16.21

Table 2: Test errors of ensemble models on the CIFAR-10 and CIFAR-100 datasets.
Since SGDR achieves good performance faster, it may allow us to train larger networks. We therefore investigated whether the results on CIFAR-10 and CIFAR-100 can be further improved by making WRNs two times wider, i.e., by training WRN-28-20 instead of WRN-28-10. Table 1 shows that the results indeed improved, by about 0.25% on CIFAR-10 and by about 0.5-1.0% on CIFAR-100. While the WRN-28-20 architecture requires roughly three to four times more computation than WRN-28-10, the aggressive learning rate reduction of SGDR nevertheless allowed us to achieve a better error rate in the same time on WRN-28-20 as we spent on 200 epochs of training on WRN-28-10. Specifically, Figure 2 (right middle and right bottom) shows that after only 50 epochs, SGDR (even without restarts, using T_0 = 50, T_mult = 1) achieved an error rate below 19% (whereas none of the other learning methods performed better than 19.5% on WRN-28-10). We therefore have hope that, by enabling researchers to test new architectures faster, SGDR's good anytime performance may also lead to improvements of the state of the art.
In a final experiment for SGDR by itself, Figure 7 in the appendix compares SGDR and the default schedule with respect to training and test performance. As the figure shows, SGDR optimizes the training loss faster than the standard default schedule until about epoch 120. After this, the default schedule overfits, as can be seen by an increase of the test error both on CIFAR-10 and CIFAR-100 (see, e.g., the right middle plot of Figure 7). In contrast, we only witnessed very mild overfitting for SGDR.
4.3 ENSEMBLE RESULTS
Our initial arXiv report on SGDR (Loshchilov & Hutter, 2016) inspired a follow-up study by Huang et al. (2016a) in which the authors suggest taking M snapshots of the models obtained by SGDR (in their paper referred to as a cyclical learning rate schedule and cosine annealing cycles) right before the M last restarts and using those to build an ensemble, thereby obtaining ensembles "for free" (in contrast to having to perform multiple independent runs). The authors demonstrated new state-of-
the-art results on the CIFAR datasets by making ensembles of DenseNet models (Huang et al., 2016b). Here, we investigate whether their conclusions hold for the WRNs used in our study. We used WRN-28-10 trained by SGDR with T_0 = 10, T_mult = 2 as our baseline model.

Figure 3 and Table 2 aggregate the results of our study. The original test error of 4.03% on CIFAR-10 and 19.57% on CIFAR-100 (median of 16 runs) can be improved to 3.51% on CIFAR-10 and 17.75% on CIFAR-100 when M = 3 snapshots are taken at epochs 30, 70 and 150: when the learning rate of SGDR with T_0 = 10, T_mult = 2 is scheduled to achieve 0 (see Figure 1) and the models are used with uniform weights to build an ensemble. To achieve the same result, one would have to aggregate N = 3 models obtained at epoch 150 of N = 3 independent runs (see N = 3, M = 1 in Figure 3). Thus, the aggregation from snapshots provides a 3-fold speedup in these settings because the additional (M > 1-th) snapshots from a single SGDR run are computationally free. Interestingly, aggregation of models from independent runs (when N > 1 and M = 1) does not scale up as well as aggregation of M > 1 snapshots of independent runs when the same number of models is considered: the case of N = 3 and M = 3 provides better performance than the cases of M = 1 with N = 18 and N = 21. Not only the number of snapshots M per run but also their origin is crucial: naively building ensembles from models obtained at the last epochs only (i.e., M = 3 snapshots at epochs 148, 149, 150) did not improve over the baseline of M = 1 snapshot at epoch 150, thereby confirming the conclusion of Huang et al. (2016a) that snapshots of SGDR provide a useful diversity of predictions for ensembles.

Three runs (N = 3) of SGDR with M = 3 snapshots per run are sufficient to greatly improve the results to 3.25% on CIFAR-10 and 16.64% on CIFAR-100, outperforming the results of Huang et al. (2016a). By increasing N to 16 one can achieve 3.14% on CIFAR-10 and 16.21% on CIFAR-100. We believe that these results could be further improved by considering better baseline models than WRN-28-10 (e.g., WRN-28-20).
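The aggregation itself reduces to averaging softmax outputs with uniform weights; a minimal PyTorch sketch (our naming, assuming a list of trained classifiers returning logits) of the ensemble used above:

```python
import torch

@torch.no_grad()
def snapshot_ensemble(models, x):
    """Snapshot ensembling as in Huang et al. (2016a): average the softmax
    outputs of the M snapshots taken right before SGDR's restarts."""
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0)   # uniform weights over snapshots
```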
4.4 EXPERIMENTS ON A DATASET OF EEG RECORDINGS
To demonstrate the generality of SGDR, we also considered a very different domain: a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects with roughly 1000 trials per subject (Schirrmeister et al., 2017). The best classification results obtained with the original pipeline based on convolutional neural networks designed by Schirrmeister et al. (2017) were used as our reference. First, we compared the baseline learning rate schedule with different settings of the total number of epochs and initial learning rates (see Figure 4). When 30 epochs were considered, we dropped the learning rate by a factor of 10 at epoch indexes 10, 15 and 20. As expected, with more epochs and a similar (budget-proportional) schedule, better results can be achieved. Alternatively, one can use SGDR and obtain a similar final performance with better anytime performance, without defining the total budget of epochs beforehand.

Similarly to our results on the CIFAR datasets, our experiments with the EEG data confirm that snapshots are useful and that the median reference error (about 9%) can be improved i) by 1-2% when model snapshots of a single run are considered, and ii) by 2-3% when model snapshots from both hyperparameter settings are considered. The latter would correspond to N = 2 in Section 4.3.
4.5 PRELIMINARY EXPERIMENTS ON A DOWNSAMPLED IMAGENET DATASET
In order to additionally validate SGDR on a larger dataset, we constructed a downsampled version of the ImageNet dataset [P. Chrabaszcz, I. Loshchilov and F. Hutter. A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets, in preparation]. In contrast to earlier attempts (Pouransari & Ghili, 2015), our downsampled ImageNet contains exactly the same images from 1000 classes as the original ImageNet, but resized with box downsampling to 32 × 32 pixels. Thus, this dataset is substantially harder than the original ImageNet dataset because the average number of pixels per image is two orders of magnitude smaller. The new dataset is also more difficult than the CIFAR datasets because more classes are used and the relevant objects to be classified often cover only a tiny subspace of the image, rather than most of the image as in the CIFAR datasets.

We benchmarked SGD with momentum with the default learning rate schedule, SGDR with T_0 = 1, T_mult = 2 and SGDR with T_0 = 10, T_mult = 2 on WRN-28-10, all trained with 4 settings of
[Figure 4: panels "Median Results on 14 datasets" for lr = 0.025 (top-left) and lr = 0.05 (top-right), plotting test error minus reference error (%) over epochs for the baseline schedules with n_ep = 30, 60, 120, 240, 480 and for SGDR; the bottom panels show median and mean results over the 14 datasets.]
Figure 4: (Top) Improvements obtained by the baseline learning rate schedule and SGDR w.r.t. the best known reference classification error on a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects with roughly 1000 trials per subject. Both considered approaches were tested with the initial learning rate lr = 0.025 (Top-Left) and lr = 0.05 (Top-Right). Note that the baseline approach is considered with different settings of the total number of epochs: 30, 60, ..., 480. (Bottom) SGDR with lr = 0.025 and lr = 0.05 without and with M model snapshots taken at the last M = n_r/2 restarts, where n_r is the total number of restarts.
the initial learning rate η^i_max: 0.050, 0.025, 0.01 and 0.005. We used the same data augmentation procedure as for the CIFAR datasets. Similarly to the results on the CIFAR datasets, Figure 5 shows that SGDR demonstrates better anytime performance. SGDR with T_0 = 10, T_mult = 2, η^i_max = 0.01 achieves a top-1 error of 39.24% and a top-5 error of 17.17%, matching the original results of AlexNet (40.7% and 18.2%, respectively) obtained on the original ImageNet with full-size images of ca. 50 times more pixels per image (Krizhevsky et al., 2012b). Interestingly, when the dataset is permuted only within 10 subgroups, each formed from 100 classes, SGDR also demonstrates better results (see Figure 8 in the Supplementary Material). An interpretation of this might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0.

Clearly, longer runs (more than the 40 epochs considered in this preliminary experiment) and hyperparameter tuning of learning rates, regularization and other hyperparameters should further improve the results.
[Figure 5: top-1 (left) and top-5 (right) test error (%) over epochs on the downsampled 32×32 ImageNet for WRN-28-10 trained with the default schedule, SGDR with T_0 = 1, T_mult = 2 and SGDR with T_0 = 10, T_mult = 2.]
Figure 5: Top-1 and top-5 test errors obtained by SGD with momentum with the default learning rate schedule, SGDR with T_0 = 1, T_mult = 2 and SGDR with T_0 = 10, T_mult = 2 on WRN-28-10 trained on a version of ImageNet with all images from all 1000 classes downsampled to 32 × 32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Four settings of the initial learning rate are considered: 0.050, 0.025, 0.01 and 0.005.
# 5 DISCUSSION
Our results suggest that even without any restarts the proposed aggressive learning rate schedule given by eq. (5) is competitive w.r.t. the default schedule when training WRNs on the CIFAR-10 (e.g., for T_0 = 200, T_mult = 1) and CIFAR-100 datasets. In practice, the proposed schedule requires only two hyper-parameters to be defined: the initial learning rate and the total number of epochs.
We found that the anytime performance of SGDR remains similar when shorter epochs are considered (see Section 8.1 in the Supplementary Material).
One should not assume that the parameter values used in this study and in many other works with (Residual) Neural Networks are selected to demonstrate the fastest decrease of the training error; instead, the best validation and/or test errors are the focus. Notably, the validation error is rarely used when training Residual Neural Networks because the recommendation is defined by the final solution (in our approach, the final solution of each run). One could use the validation error to determine the optimal initial learning rate and then run on the whole dataset; this could further improve results.
The main purpose of our proposed warm restart scheme for SGD is to improve its anytime performance. While we mentioned that restarts can be useful to deal with multi-modal functions, we do not claim that we observe any effect related to multi-modality. As we noted earlier, one could decrease η_max^i and η_min^i at every new warm restart to control the amount of divergence. If new restarts are worse than the old ones w.r.t. validation error, then one might also consider going back to the last best solution and performing a new restart with adjusted hyperparameters.
Our results reproduce the finding by Huang et al. (2016a) that intermediate models generated by SGDR can be used to build efficient ensembles at no cost. This finding makes SGDR especially attractive for scenarios when ensemble building is considered.
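As an illustration, a sketch of such a snapshot ensemble: average the per-class probabilities of the models saved at the last M restarts. The `predict_proba` method is a hypothetical stand-in for a forward pass of one snapshot.

```python
import numpy as np

def ensemble_predict(snapshots, x):
    """Average the softmax outputs of the snapshot models taken at the
    restarts and return the class with the highest mean probability."""
    probs = np.mean([m.predict_proba(x) for m in snapshots], axis=0)
    return probs.argmax(axis=1)
```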
# 6 CONCLUSION
In this paper, we investigated a simple warm restart mechanism for SGD to accelerate the training of DNNs. Our SGDR simulates warm restarts by scheduling the learning rate to achieve competitive results on CIFAR-10 and CIFAR-100 roughly two to four times faster. We also achieved new state-of-the-art results with SGDR, mainly by using even wider WRNs and ensembles of snapshots from
SGDR's trajectory. Future empirical studies should also consider the SVHN, ImageNet and MS COCO datasets, for which Residual Neural Networks showed the best results so far. Our preliminary results on a dataset of EEG recordings suggest that SGDR delivers better and better results as we carry out more restarts and use more model snapshots. The results on our downsampled ImageNet dataset suggest that SGDR might also reduce the problem of learning rate selection because the annealing and restarts of SGDR scan / consider a range of learning rate values. Future work should consider warm restarts for other popular training algorithms such as AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014).
Alternative network structures should be also considered; e.g., soon after our initial arXiv report (Loshchilov & Hutter, 2016), Zhang et al. (2016); Huang et al. (2016b); Han et al. (2016) reported that WRN models can be replaced by more memory-efficient models. Thus, it should be tested whether our results for individual models and ensembles can be further improved by using their networks instead of WRNs. Deep compression methods (Han et al., 2015) can be used to reduce the time and memory costs of DNNs and their ensembles.
# 7 ACKNOWLEDGMENTS
This work was supported by the German Research Foundation (DFG), under the BrainLinksBrainTools Cluster of Excellence (grant number EXC 1086). We thank Gao Huang, Kilian Quirin Weinberger, Jost Tobias Springenberg, Mark Schmidt and three anonymous reviewers for their helpful comments and suggestions. We thank Robin Tibor Schirrmeister for providing his pipeline for the EEG experiments and for helping to integrate SGDR.
# REFERENCES
Antoine Bordes, Léon Bottou, and Patrick Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. The Journal of Machine Learning Research, 10:1737–1754, 2009.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surface of multilayer networks. arXiv preprint arXiv:1412.0233, 2014.
Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933–2941, 2014.

Yann N Dauphin, Harm de Vries, Junyoung Chung, and Yoshua Bengio. RMSprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.

L. Deng, G. Hinton, and B. Kingsbury. New types of deep neural network learning for speech recognition and related applications: An overview. In Proc. of ICASSP'13, 2013.

J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proc. of ICML'14, 2014.

Roger Fletcher and Colin M Reeves. Function minimization by conjugate gradients. The Computer Journal, 7(2):149–154, 1964.

Kenji Fukumizu and Shun-ichi Amari. Local minima and plateaus in hierarchical structures of multilayer perceptrons. Neural Networks, 13(3):317–327, 2000.
Dongyoon Han, Jiwhan Kim, and Junmo Kim. Deep pyramidal residual networks. arXiv preprint arXiv:1610.02915, 2016.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.

Nikolaus Hansen. Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2389–2396. ACM, 2009.
Nikolaus Hansen and Stefan Kern. Evaluating the CMA evolution strategy on multimodal test functions. In International Conference on Parallel Problem Solving from Nature, pp. 282–291. Springer, 2004.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger. Snapshot ensembles: Train 1, get m for free. ICLR 2017 submission, 2016a.
Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016b.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016c.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Proc. of NIPS'12, pp. 1097–1105, 2012a.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012b.

Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528, 1989.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic Gradient Descent with Restarts. arXiv preprint arXiv:1608.03983, 2016.
Ilya Loshchilov, Marc Schoenauer, and Michele Sebag. Alternative restart strategies for CMA-ES. In International Conference on Parallel Problem Solving from Nature, pp. 296–305. Springer, 2012.

Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pp. 372–376, 1983.
Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.
Brendan O'Donoghue and Emmanuel Candes. Adaptive restart for accelerated gradient schemes. arXiv preprint arXiv:1204.3982, 2012.
Hadi Pouransari and Saman Ghili. Tiny imagenet visual recognition challenge. CS231 course at STANFORD, 2015.
Michael James David Powell. Restart procedures for the conjugate gradient method. Mathematical Programming, 12(1):241–254, 1977.

Mike Preuss. Niching the CMA-ES via nearest-better clustering. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, pp. 1711–1718. ACM, 2010.

Mike Preuss. Niching methods and multimodal optimization performance. In Multimodal Optimization by Means of Evolutionary Algorithms, pp. 115–137. Springer, 2015.

Raymond Ros. Benchmarking the BFGS algorithm on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2409–2414. ACM, 2009.
Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG. arXiv preprint arXiv:1703.05051, 2017.
Leslie N Smith. No more pesky learning rate guessing games. arXiv preprint arXiv:1506.01186, 2015.
Leslie N Smith. Cyclical learning rates for training neural networks. arXiv preprint arXiv:1506.01186v3, 2016.

Tianbao Yang and Qihang Lin. Stochastic subgradient methods with linear convergence for polyhedral convex optimization. arXiv preprint arXiv:1510.01444, 2015.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Matthew D Zeiler. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
K. Zhang, M. Sun, T. X. Han, X. Yuan, L. Guo, and T. Liu. Residual Networks of Residual Networks: Multilevel Residual Networks. arXiv e-prints, August 2016.
# 8 SUPPLEMENTARY MATERIAL
[Figure 6: test error (%) on CIFAR-10 vs. epochs for the Default schedule and SGDR with WRN-28-1.]
Figure 6: The median results of 5 runs for the best learning rate settings considered for WRN-28-1.
8.1 50K VS 100K EXAMPLES PER EPOCH
Our data augmentation procedure code is inherited from the Lasagne Recipe code for ResNets where flipped images are added to the training set. This doubles the number of training examples per epoch and thus might impact the results because hyperparameter values defined as a function of epoch index have a different meaning. While our experimental results given in Table 1 reproduced the results obtained by Zagoruyko & Komodakis (2016), here we test whether SGDR still makes sense for WRN-28-1 (i.e., ResNet with 28 layers) where one epoch corresponds to 50k training examples. We investigate different learning rate values for the default learning rate schedule (4 values out of [0.01, 0.025, 0.05, 0.1]) and SGDR (3 values out of [0.025, 0.05, 0.1]). In line with the results given in the main paper, Figure 6 suggests that SGDR is competitive in terms of anytime performance.
[Figure 7: six panels for WRN-28-10 on CIFAR-10 (left column) and CIFAR-100 (right column), plotting training cross-entropy + regularization loss, test cross-entropy loss and test error (%) against epochs for the Default schedule (lr = 0.1, 0.05) and SGDR with several (T0, Tmult) settings.]
Figure 7: Training cross-entropy + regularization loss (top row), test loss (middle row) and test error (bottom row) on CIFAR-10 (left column) and CIFAR-100 (right column).
[Figure 8: Top-5 test error (%) vs. epochs on downsampled 32x32 ImageNet for the Default schedule (lr = 0.050, 0.015, 0.005) and SGDR.]
Figure 8: Top-5 test errors obtained by SGD with momentum with the default learning rate schedule and SGDR with T0 = 1, Tmult = 2 on WRN-28-10 trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32 × 32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Three settings of the initial learning rate are considered: 0.050, 0.015 and 0.005. In contrast to the experiments described in the main paper, here, the dataset is permuted only within 10 subgroups each formed from 100 classes, which makes good generalization much harder to achieve for both algorithms. An interpretation of the SGDR results given here might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0.
| {
"id": "1703.05051"
} |
1607.07086 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 | 7 1 0 2
r a M 3 ] G L . s c [
3 v 6 8 0 7 0 . 7 0 6 1 : v i X r a
Published as a conference paper at ICLR 2017
AN ACTOR-CRITIC ALGORITHM FOR SEQUENCE PREDICTION
Dzmitry Bahdanau Philemon Brakel Kelvin Xu Anirudh Goyal Université de Montréal

Ryan Lowe Joelle Pineau∗ McGill University

# Aaron Courville† Université de Montréal

Yoshua Bengio∗ Université de Montréal
# ABSTRACT
We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a critic network that is trained to predict the value of an output token, given the policy of an actor network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task, and for German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling.
# 1 INTRODUCTION
In many important applications of machine learning, the task is to develop a system that produces a sequence of discrete tokens given an input. Recent work has shown that recurrent neural networks (RNNs) can deliver excellent performance in many such tasks when trained to predict the next output token given the input and previous tokens. This approach has been applied successfully in machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), caption generation (Kiros et al., 2014; Donahue et al., 2015; Vinyals et al., 2015; Xu et al., 2015; Karpathy & Fei-Fei, 2015), and speech recognition (Chorowski et al., 2015; Chan et al., 2015).
The standard way to train RNNs to generate sequences is to maximize the log-likelihood of the "correct" token given a history of the previous "correct" ones, an approach often called teacher forcing. At evaluation time, the output sequence is often produced by an approximate search for the most likely candidate according to the learned distribution. During this search, the model is conditioned on its own guesses, which may be incorrect and thus lead to a compounding of errors (Bengio et al., 2015). This can become especially problematic for longer sequences. Due to this discrepancy between training and testing conditions, it has been shown that maximum likelihood training can be suboptimal (Bengio et al., 2015; Ranzato et al., 2015). In these works, the authors argue that the network should be trained to continue generating correctly given the outputs already produced by the model, rather than the ground-truth reference outputs from the data. This gives rise to the challenging problem of determining the target for the next network output. Bengio et al. (2015) use the token k from the ground-truth answer as the target for the network at step k, whereas Ranzato et al. (2015) rely on the REINFORCE algorithm (Williams, 1992) to decide whether or not the tokens
# ∗CIFAR Senior Fellow † CIFAR Fellow
from a sampled prediction lead to a high task-specific score, such as BLEU (Papineni et al., 2002) or ROUGE (Lin & Hovy, 2003).
In this work, we propose and study an alternative procedure for training sequence prediction networks that aims to directly improve their test time metrics (which are typically not the log-likelihood). In particular, we train an additional network called the critic to output the value of each token, which we define as the expected task-specific score that the network will receive if it outputs the token and continues to sample outputs according to its probability distribution. Furthermore, we show how the predicted values can be used to train the main sequence prediction network, which we refer to as the actor. The theoretical foundation of our method is that, under the assumption that the critic computes exact values, the expression that we use to train the actor is an unbiased estimate of the gradient of the expected task-specific score.
Our approach draws inspiration and borrows the terminology from the field of reinforcement learning (RL) (Sutton & Barto, 1998), in particular from the actor-critic approach (Sutton, 1984; Sutton et al., 1999; Barto et al., 1983). RL studies the problem of acting efficiently based only on weak supervision in the form of a reward given for some of the agent's actions. In our case, the reward is analogous to the task-specific score associated with a prediction. However, the tasks we consider are those of supervised learning, and we make use of this crucial difference by allowing the critic to use the ground-truth answer as an input. In other words, the critic has access to a sequence of expert actions that are known to lead to high (or even optimal) returns. To train the critic, we adapt the temporal difference methods from the RL literature (Sutton, 1988) to our setup. While RL methods with non-linear function approximators are not new (Tesauro, 1994; Miller et al., 1995), they have recently surged in popularity, giving rise to the field of "deep RL" (Mnih et al., 2015). We show that some of the techniques recently developed in deep RL, such as having a target network, may also be beneficial for sequence prediction.
The contributions of the paper can be summarized as follows: 1) we describe how RL methodology like the actor-critic approach can be applied to supervised learning problems with structured outputs; and 2) we investigate the performance and behavior of the new method on both a synthetic task and a real-world task of machine translation, demonstrating the improvements over maximum-likelihood and REINFORCE brought by the actor-critic training.
# 2 BACKGROUND
We consider the problem of learning to produce an output sequence Y = (y_1, . . . , y_T), y_t ∈ A, given an input X, where A is the alphabet of output tokens. We will often use the notation Y_{f...l} to refer to subsequences of the form (y_f, . . . , y_l). Two sets of input-output pairs (X, Y) are assumed to be available for both training and testing. The trained predictor h is evaluated by computing the average task-specific score R(Ŷ, Y) on the test set, where Ŷ = h(X) is the prediction. To simplify the formulas we always use T to denote the length of an output sequence, ignoring the fact that the output sequences may have different lengths.
Recurrent neural networks A recurrent neural network (RNN) produces a sequence of state vectors (s_1, . . . , s_T) given a sequence of input vectors (e_1, . . . , e_T) by starting from an initial state s_0 and applying T times the transition function f: s_t = f(s_{t-1}, e_t). Popular choices for the mapping f are the Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) and the Gated Recurrent Units (Cho et al., 2014), the latter of which we use for our models.
To build a probabilistic model for sequence generation with an RNN, one adds a stochastic output layer g (typically a softmax for discrete outputs) that generates outputs y_t ∈ A and can feed these outputs back by replacing them with their embedding e(y_t):
$y_t \sim g(s_{t-1}), \qquad (1)$
$s_t = f(s_{t-1}, e(y_t)). \qquad (2)$
Thus, the RNN defines a probability distribution p(y_t|y_1, . . . , y_{t-1}) of the next output token y_t given the previous tokens (y_1, . . . , y_{t-1}). Upon adding a special end-of-sequence token ∅ to the alphabet A, the RNN can define the distribution p(Y) over all possible sequences as p(Y) = p(y_1) p(y_2|y_1) . . . p(y_T|y_1, . . . , y_{T-1}) p(∅|y_1, . . . , y_T).
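A toy Python sketch of this generative loop; a tanh transition and random weights stand in for the trained GRU, so everything below is illustrative rather than the models used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 5, 8                          # alphabet size and state size; token V-1 is EOS
W_s = rng.normal(0, 0.1, (H, H))     # toy state transition in place of a GRU
E = rng.normal(0, 0.1, (V, H))       # token embeddings e(y)
W_o = rng.normal(0, 0.1, (H, V))     # stochastic output layer g

def sample_sequence(max_len=20):
    """Sample y_t ~ g(s_{t-1}) and feed it back: s_t = f(s_{t-1}, e(y_t))."""
    s, ys = np.zeros(H), []
    for _ in range(max_len):
        logits = s @ W_o
        p = np.exp(logits - logits.max())
        p /= p.sum()                 # softmax over the alphabet
        y = rng.choice(V, p=p)
        if y == V - 1:               # end-of-sequence token terminates Y
            break
        ys.append(int(y))
        s = np.tanh(s @ W_s + E[y])  # the sampled token is fed back
    return ys

print(sample_sequence())
```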
RNNs for sequence prediction To use RNNs for sequence prediction, they must be augmented to generate Y conditioned on an input X. The simplest way to do this is to start with an initial state s0 = s0(X) (Sutskever et al., 2014; Cho et al., 2014). Alternatively, one can encode X as a variable-length sequence of vectors (h1, . . . , hL) and condition the RNN on this sequence using an attention mechanism. In our models, the sequence of vectors is produced by either a bidirectional RNN (Schuster & Paliwal, 1997) or a convolutional encoder (Rush et al., 2015).
We use a soft attention mechanism (Bahdanau et al., 2015) that computes a weighted sum of a sequence of vectors. The attention weights determine the relative importance of each vector. More formally, we consider the following equations for RNNs with attention:
$y_t \sim g(s_{t-1}, c_{t-1}), \qquad (3)$
$s_t = f(s_{t-1}, c_{t-1}, e(y_t)), \qquad (4)$
$\alpha_t = \beta(s_t, (h_1, \ldots, h_L)), \qquad (5)$
$c_t = \sum_{j=1}^{L} \alpha_{t,j} h_j, \qquad (6)$
where $\beta$ is the attention mechanism that produces the attention weights $\alpha_t$ and $c_t$ is the context vector, or "glimpse", for time step t. The attention weights are computed by an MLP that takes as input the current RNN state and each individual vector to focus on. The weights are typically (as in our work) constrained to be positive and sum to 1 by using the softmax function.
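A sketch of one attention step (eqs. (5)-(6)); the MLP scoring function below is a common parameterization and our own choice, not necessarily the exact β used in the paper:

```python
import numpy as np

def soft_attention(s_t, H, U, w, v):
    """Compute attention weights over the encoder states H (L x d) given the
    decoder state s_t (h,), and return the context vector ("glimpse")."""
    scores = np.tanh(H @ U + s_t @ w) @ v    # (L,) unnormalized energies
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                     # softmax: positive, sums to 1
    return alpha, alpha @ H                  # weights alpha_t and context c_t

L, d, h, k = 6, 4, 3, 5
rng = np.random.default_rng(1)
alpha, c = soft_attention(rng.normal(size=h), rng.normal(size=(L, d)),
                          rng.normal(size=(d, k)), rng.normal(size=(h, k)),
                          rng.normal(size=k))
print(alpha.round(2), c.round(2))
```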
A conditioned RNN can be trained for sequence prediction by gradient ascent on the log-likelihood log p(Y|X) for the input-output pairs (X, Y) from the training set. To produce a prediction Ŷ for a test input sequence X, an approximate beam search for the maximum of p(·|X) is usually conducted. During this search the probabilities p(·|ŷ_1, . . . , ŷ_{t-1}) are considered, where the previous tokens ŷ_1, . . . , ŷ_{t-1} comprise a candidate beginning of the prediction Ŷ.
Value functions We view the conditioned RNN as a stochastic policy that generates actions and receives the task score (e.g., BLEU score) as the return. We furthermore consider the case when the return R is partially received at the intermediate steps in the form of rewards $r_t$: $R(\hat{Y}, Y) = \sum_{t=1}^{T} r_t(\hat{y}_t; \hat{Y}_{1...t-1}, Y)$. This is more general than the case of receiving the full return at the end of the sequence, as we can simply define all rewards other than $r_T$ to be zero. Receiving intermediate rewards may ease the learning for the critic, and we use reward shaping as explained in Section 3. Given the policy, possible actions and reward function, the value represents the expected future return as a function of the current state of the system, which in our case is uniquely defined by the sequence of actions taken so far, $\hat{Y}_{1...t}$. We define the value of an unfinished prediction $\hat{Y}_{1...t}$ as follows:

$$V(\hat{Y}_{1...t}; X, Y) = \mathop{\mathbb{E}}_{\hat{Y}_{t+1...T} \sim p(\cdot|\hat{Y}_{1...t}, X)} \sum_{\tau=t+1}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1...\tau-1}, Y).$$

We define the value of a candidate next token $a$ for an unfinished prediction $\hat{Y}_{1...t-1}$ as the expected future return after generating token $a$:

$$Q(a; \hat{Y}_{1...t-1}, X, Y) = \mathop{\mathbb{E}}_{\hat{Y}_{t+1...T} \sim p(\cdot|\hat{Y}_{1...t-1} a, X)} \Big( r_t(a; \hat{Y}_{1...t-1}, Y) + \sum_{\tau=t+1}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1...t-1}\, a\, \hat{Y}_{t+1...\tau-1}, Y) \Big).$$

We will refer to the candidate next tokens as actions. For notational simplicity, we henceforth drop X and Y from the signature of $p$, $V$, $Q$, $R$ and $r_t$, assuming it is clear from the context which of X and Y is meant. We will also use $V$ without arguments for the expected reward of a random prediction.
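These expectations can be approximated with rollouts. A sketch, where `sample_continuation` and `score` are hypothetical stand-ins for sampling from the policy p(·|·, X) and for the task score R(·, Y):

```python
def mc_q_estimate(prefix, a, sample_continuation, score, n_rollouts=16):
    """Monte-Carlo estimate of Q(a; Y_hat_{1..t-1}): append the candidate
    token, sample full continuations and average the return after step t-1.
    With shaped rewards the future return telescopes to R(full) - R(prefix)."""
    total = 0.0
    for _ in range(n_rollouts):
        full = sample_continuation(prefix + [a])  # Y_hat_{t+1..T} ~ p(.|prefix a)
        total += score(full) - score(prefix)
    return total / n_rollouts
```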
Algorithm 1 Actor-Critic Training for Sequence Prediction
Require: A critic $\hat{Q}(a; \hat{Y}_{1...t}, Y)$ and an actor $p(a|\hat{Y}_{1...t}, X)$ with weights $\phi$ and $\theta$ respectively.
1: Initialize a delayed actor $p'$ and a target critic $\hat{Q}'$ with the same weights: $\theta' = \theta$, $\phi' = \phi$.
2: while Not Converged do
3:   Receive a random example (X, Y).
4:   Generate a sequence of actions $\hat{Y}$ from $p'$.
5:   Compute targets for the critic:
       $q_t = r_t(\hat{y}_t; \hat{Y}_{1...t-1}, Y) + \sum_{a \in A} p'(a|\hat{Y}_{1...t}, X)\, \hat{Q}'(a; \hat{Y}_{1...t}, Y)$
6:   Update the critic weights $\phi$ using the gradient
       $\frac{d}{d\phi} \sum_{t=1}^{T} \Big[ \big(\hat{Q}(\hat{y}_t; \hat{Y}_{1...t-1}, Y) - q_t\big)^2 + \lambda C_t \Big]$,
       where $C_t = \sum_{a} \Big( \hat{Q}(a; \hat{Y}_{1...t-1}) - \frac{1}{|A|} \sum_{b} \hat{Q}(b; \hat{Y}_{1...t-1}) \Big)^2$
7:   Update the actor weights $\theta$ using the following gradient estimate:
       $\frac{d\hat{V}}{d\theta} = \sum_{t=1}^{T} \sum_{a \in A} \frac{dp(a|\hat{Y}_{1...t-1}, X)}{d\theta}\, \hat{Q}(a; \hat{Y}_{1...t-1}) + \lambda_{LL} \sum_{t=1}^{T} \frac{d \log p(y_t|Y_{1...t-1}, X)}{d\theta}$
8:   Update the delayed actor and the target critic, with constants $\gamma_\theta \ll 1$, $\gamma_\phi \ll 1$:
       $\theta' = \gamma_\theta \theta + (1 - \gamma_\theta)\theta'$,  $\phi' = \gamma_\phi \phi + (1 - \gamma_\phi)\phi'$
9: end while
Algorithm 2 Complete Actor-Critic Algorithm for Sequence Prediction
1: Initialize the critic $\hat{Q}(a; \hat{Y}_{1...t}, Y)$ and the actor $p(a|\hat{Y}_{1...t}, X)$ with random weights $\phi$ and $\theta$ respectively.
2: Pre-train the actor to predict $y_{t+1}$ given $Y_{1...t}$ by maximizing $\log p(y_{t+1}|Y_{1...t}, X)$.
3: Pre-train the critic to estimate $Q$ by running Algorithm 1 with a fixed actor.
4: Run Algorithm 1.
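A sketch of the target computation on line 5 of Algorithm 1 (0-indexed arrays; having a completed sequence bootstrap to zero is our reading of the terminal step):

```python
def critic_targets(rewards, p_delayed, q_target):
    """TD targets q_t = r_t + sum_a p'(a | Y_hat_{1..t}) Q'(a; Y_hat_{1..t}).

    rewards[t]                 shaped reward for the (t+1)-th sampled token
    p_delayed[t], q_target[t]  delayed-actor probabilities and target-critic
                               values over the alphabet after that token
    """
    T = len(rewards)
    targets = []
    for t in range(T):
        bootstrap = 0.0 if t == T - 1 else float(p_delayed[t] @ q_target[t])
        targets.append(rewards[t] + bootstrap)
    return targets
```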
# 3 ACTOR-CRITIC FOR SEQUENCE PREDICTION
Let $\theta$ be the parameters of the conditioned RNN, which we will also refer to as the actor. Our training algorithm is based on the following way of rewriting the gradient of the expected return $\frac{dV}{d\theta}$:

$$\frac{dV}{d\theta} = \mathop{\mathbb{E}}_{\hat{Y} \sim p(\hat{Y}|X)} \sum_{t=1}^{T} \sum_{a \in A} \frac{dp(a|\hat{Y}_{1...t-1}, X)}{d\theta}\, Q(a; \hat{Y}_{1...t-1}). \qquad (7)$$
This equality is known in RL under the names policy gradient theorem (Sutton et al., 1999) and stochastic actor-critic (Sutton, 1984). 1 Note that we use the probability rather than the log probability in this formula (which is more typical in RL applications) as we are summing over actions rather than taking an expectation. Intuitively, this equality corresponds to increasing the probability of actions that give high values, and decreasing the probability of actions that give low values. Since this gradient expression is an expectation, it is trivial to build an unbiased estimate for it:
$$\frac{dV}{d\theta} \approx \frac{1}{M} \sum_{k=1}^{M} \sum_{t=1}^{T} \sum_{a \in A} \frac{dp(a|\hat{Y}^k_{1...t-1}, X)}{d\theta}\, Q(a; \hat{Y}^k_{1...t-1}), \qquad (8)$$

where $\hat{Y}^k$ are M random samples from $p(\hat{Y})$. By replacing Q with a parametric estimate $\hat{Q}$ one can obtain a biased estimate with relatively low variance. The parametric estimate $\hat{Q}$ is called the critic. The above formula is similar in spirit to the REINFORCE learning rule that Ranzato et al. (2015) use in the same context:
$$\frac{dV}{d\theta} \approx \sum_{t=1}^{T} \frac{d \log p(\hat{y}_t|\hat{Y}_{1...t-1})}{d\theta} \Big[ \sum_{\tau=t}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1...\tau-1}) - b_t(X) \Big], \qquad (9)$$
where the scalar $b_t(X)$ is called the baseline or control variate. The difference is that in REINFORCE the inner sum over all actions is replaced by its 1-sample estimate, namely $\frac{d \log p(\hat{y}_t|\hat{Y}_{1...t-1})}{d\theta} Q(\hat{y}_t; \hat{Y}_{1...t-1})$, where the log probability $\frac{d \log p(\hat{y}_t|\cdot)}{d\theta} = \frac{1}{p(\hat{y}_t|\cdot)} \frac{dp(\hat{y}_t|\cdot)}{d\theta}$ is introduced to correct for the sampling of $\hat{y}_t$. Furthermore, instead of the value $Q(\hat{y}_t; \hat{Y}_{1...t-1})$, REINFORCE uses the cumulative reward $\sum_{\tau=t}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1...\tau-1})$ following the action $\hat{y}_t$, which again can be seen as a 1-sample estimate of Q. Due to these simplifications and the potential high variance of the cumulative reward, the REINFORCE gradient estimator has very high variance. In order to improve upon it, we consider the actor-critic estimate from Equation (8), which has lower variance at the cost of significant bias, since the critic is not perfect and is trained simultaneously with the actor. The success depends on our ability to control the bias by designing the critic network and using an appropriate training criterion for it.
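In an automatic differentiation framework, eq. (8) is the gradient of the surrogate objective $\sum_t \sum_a p(a|\hat{Y}_{1...t-1}, X)\, \hat{Q}(a; \hat{Y}_{1...t-1})$ with $\hat{Q}$ treated as a constant. For intuition, a numpy sketch of the resulting per-step gradient with respect to the softmax logits:

```python
import numpy as np

def actor_grad_logits(logits, q_hat):
    """Gradient of sum_a p(a) * Q_hat(a) w.r.t. the logits of p, with the
    critic values q_hat held constant (the inner sums of eq. (8))."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p * (q_hat - p @ q_hat)   # softmax Jacobian applied to q_hat
```

Note that actions with probability near 0 or 1 receive a vanishing gradient here, which is one way to see the determinization issue discussed in Section 6.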
To implement the critic, we propose to use a separate RNN parameterized by $\phi$. The critic RNN is run in parallel with the actor, consumes the tokens $\hat{y}_t$ that the actor outputs and produces the estimates $\hat{Q}(a; \hat{Y}_{1...t})$ for all $a \in A$. A key difference between the critic and the actor is that the correct answer Y is given to the critic as an input, similarly to how the actor is conditioned on X. Indeed, the return $R(\hat{Y}, Y)$ is a deterministic function of Y, and we argue that using Y to compute $\hat{Q}$ should be of great help. We can do this because the values are only required during training and we do not use the critic at test time. We also experimented with providing the actor states $s_t$ as additional inputs to the critic. See Figure 1 for a visual representation of our actor-critic architecture.
Temporal-difference learning A crucial component of our approach is policy evaluation, that is, the training of the critic to produce useful estimates of Q. With a naive Monte-Carlo method, one could use the future return $\sum_{\tau=t}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1...\tau-1})$ as a target for $\hat{Q}(\hat{y}_t; \hat{Y}_{1...t-1})$, and use the critic parameters $\phi$ to minimize the square error between these two values. However, as with REINFORCE, using such a target yields very high variance, which grows quickly with the number of steps T. We use a temporal difference (TD) method for policy evaluation (Sutton, 1988). Namely, we use the right-hand side $q_t = r_t(\hat{y}_t; \hat{Y}_{1...t-1}) + \sum_{a \in A} p(a|\hat{Y}_{1...t}) \hat{Q}(a; \hat{Y}_{1...t})$ of the Bellman equation as the target for the left-hand side $\hat{Q}(\hat{y}_t; \hat{Y}_{1...t-1})$.
1We also provide a simple self-contained proof of Equation (7) in Supplementary Material.
Figure 1: Both the actor and the critic are encoder-decoder networks. The actor receives an input sequence X and produces samples Ŷ which are evaluated by the critic. The critic takes in the ground-truth sequence Y as input to the encoder, and takes the input summary (calculated using an attention mechanism) and the actor's prediction ŷ_t as input at time step t of the decoder. The values Q_1, Q_2, · · · , Q_T computed by the critic are used to approximate the gradient of the expected returns with respect to the parameters of the actor. This gradient is used to train the actor to optimize these expected task-specific returns (e.g., BLEU score). The critic may also receive the hidden state activations of the actor as input.
Applying deep RL techniques It has been shown in the RL literature that if $\hat{Q}$ is non-linear (as in our case), TD policy evaluation might diverge (Tsitsiklis & Van Roy, 1997). Previous work has shown that this problem can be alleviated by using an additional target network $\hat{Q}'$ to compute $q_t$, which is updated less often and/or more slowly than $\hat{Q}$. Similarly to (Lillicrap et al., 2015), we update the parameters $\phi'$ of the target critic by linearly interpolating them with the parameters of the trained one. Attempts to remove the target network by propagating the gradient through $q_t$ resulted in a lower square error $(\hat{Q}(\hat{y}_t; \hat{Y}_{1...t-1}) - q_t)^2$, but the resulting $\hat{Q}$ values proved very unreliable as training signals for the actor.
The fact that both actor and critic use outputs of each other for training creates a potentially dangerous feedback loop. To address this, we sample predictions from a delayed actor (Lillicrap et al., 2015), whose weights are slowly updated to follow the actor that is actually trained.
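Both tricks amount to the same soft update; a sketch over dictionaries of parameter arrays (names are ours):

```python
def soft_update(target, source, gamma):
    """Slowly track the trained network: x' <- gamma * x + (1 - gamma) * x',
    used with gamma << 1 for both the delayed actor and the target critic."""
    for name in target:
        target[name] = gamma * source[name] + (1.0 - gamma) * target[name]
```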
Dealing with large action spaces One of the challenges of our work is that the action space is very large (as is typically the case in NLP tasks with large vocabularies). This can be alleviated by putting constraints on the critic values for actions that are rarely sampled. We found experimentally that shrinking the values of these rare actions is necessary for the algorithm to converge. Specifically, we add a term C_t for every step t to the critic's optimization objective which drives all value predictions of the critic closer to their mean:
$$C_t = \sum_{a} \Big( \hat{Q}(a; \hat{Y}_{1...t}) - \frac{1}{|A|} \sum_{b} \hat{Q}(b; \hat{Y}_{1...t}) \Big)^2. \qquad (10)$$
This corresponds to penalizing the variance of the outputs of the critic. Without this penalty the values of rare actions can be severely overestimated, which biases the gradient estimates and can cause divergence. A similar trick was used in the context of learning simple algorithms with Q-learning (Zaremba et al., 2015).
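A sketch of the critic objective with this penalty added (eq. (10) above); λ is the penalty weight studied in the ablation of Section 5:

```python
import numpy as np

def critic_loss(q_all, q_sampled, q_targets, lam=1e-4):
    """Squared TD error plus the variance penalty of eq. (10).

    q_all      (T, |A|) critic outputs over the whole alphabet at each step
    q_sampled  (T,)     critic outputs for the tokens that were sampled
    q_targets  (T,)     TD targets q_t
    """
    td_error = np.sum((q_sampled - q_targets) ** 2)
    penalty = np.sum((q_all - q_all.mean(axis=1, keepdims=True)) ** 2)
    return td_error + lam * penalty
```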
Reward shaping While we are ultimately interested in the maximization of the score of a complete prediction, simply awarding this score at the last step provides a very sparse training signal for the critic. For this reason we use potential-based reward shaping with potentials Φ(Ŷ_{1...t}) = R(Ŷ_{1...t}) for incomplete sequences and Φ(Ŷ) = 0 for complete ones (Ng et al., 1999). Namely, for a predicted sequence Ŷ we compute score values for all prefixes to obtain the sequence of scores (R(Ŷ_{1...1}), R(Ŷ_{1...2}), . . . , R(Ŷ_{1...T})). The difference between the consecutive pairs of scores is then used as the reward at each step: r_t(ŷ_t; Ŷ_{1...t-1}) = R(Ŷ_{1...t}) - R(Ŷ_{1...t-1}). Using the shaped reward r_t instead of awarding the whole score R at the last step does not change the optimal policy (Ng et al., 1999).
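A sketch of this construction; `prefix_scores` holds (R(Ŷ_{1...1}), ..., R(Ŷ_{1...T})):

```python
def shaped_rewards(prefix_scores):
    """Per-step rewards r_t = R(Y_hat_{1..t}) - R(Y_hat_{1..t-1}),
    with R of the empty prefix taken to be 0."""
    rewards, prev = [], 0.0
    for score in prefix_scores:
        rewards.append(score - prev)
        prev = score
    return rewards

# The rewards telescope: sum(shaped_rewards(s)) equals s[-1], the full score.
```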
Putting it all together Algorithm 1 describes the proposed method in detail. We consider adding the weighted log-likelihood gradient to the actor's gradient estimate. This is in line with the prior work
by (Ranzato et al., 2015) and (Shen et al., 2015). It is also motivated by our preliminary experiments that showed that using the actor-critic estimate alone can lead to an early determinization of the policy and vanishing gradients (also discussed in Section 6). Starting training with a randomly initialized actor and critic would be problematic, because neither the actor nor the critic would provide adequate training signals for one another. The actor would sample completely random predictions that receive very little reward, thus providing a very weak training signal for the critic. A random critic would be similarly useless for training the actor. Motivated by these considerations, we pre-train the actor using standard log-likelihood training. Furthermore, we pre-train the critic by feeding it samples from the pre-trained actor, while the actorâs parameters are frozen. The complete training procedure including pre-training is described by Algorithm 2.
# 4 RELATED WORK
In other recent RL-inspired work on sequence prediction, Ranzato et al. (2015) trained a translation model by gradually transitioning from maximum likelihood learning into optimizing BLEU or ROUGE scores using the REINFORCE algorithm. However, REINFORCE is known to have very high variance and does not exploit the availability of the ground-truth like the critic network does. The approach also relies on a curriculum learning scheme. Standard value-based RL algorithms like SARSA and OLPOMDP have also been applied to structured prediction (Maes et al., 2009). Again, these systems do not use the ground-truth for value prediction.
Imitation learning has also been applied to structured prediction (Vlachos, 2012). Methods of this type include the SEARN (Daumé III et al., 2009) and DAGGER (Ross et al., 2010) algorithms. These methods rely on an expert policy to provide action sequences that the policy learns to imitate. Unfortunately, it is not always easy or even possible to construct an expert policy for a task-specific score. In our approach, the critic plays a role that is similar to the expert policy, but is learned without requiring prior knowledge about the task-specific score. The recently proposed "scheduled sampling" (Bengio et al., 2015) can also be seen as imitation learning. In this method, ground-truth tokens are occasionally replaced by samples from the model itself during training. A limitation is that the token k from the ground-truth answer is used as the target at step k, which might not always be the optimal strategy.
There are also approaches that aim to approximate the gradient of the expected score. One such approach is "Direct Loss Minimization" (Hazan et al., 2010) in which the inference procedure is adapted to take both the model likelihood and task-specific score into account. Another popular approach is to replace the domain over which the task score expectation is defined with a small subset of it, as is done in Minimum (Bayes) Risk Training (Goel & Byrne, 2000; Shen et al., 2015; Och, 2003). This small subset is typically an n-best list or a sample (like in REINFORCE) that may or may not include the ground-truth as well. None of these methods provide intermediate targets for the actor during training, and Shen et al. (2015) report that as many as 100 samples were required for the best results.
Another recently proposed method is to optimize a global sequence cost with respect to the selection and pruning behavior of the beam search procedure itself (Wiseman & Rush, 2016). This method follows the more general strategy called "learning as search optimization" (Daumé III & Marcu, 2005). This is an interesting alternative to our approach; however, it is designed specifically for the precise inference procedure involved.
# 5 EXPERIMENTS
To validate our approach, we performed two sets of experiments². First, we trained the proposed model to recover strings of natural text from their corrupted versions. Specifically, we consider each character in a natural language corpus and with some probability replace it with a random character. We call this synthetic task spelling correction. A desirable property of this synthetic task is that data is essentially infinite and overfitting is no concern. Our second series of experiments is done on the task of automatic machine translation using different models and datasets.
2 The source code is available at https://github.com/rizar/actor-critic-public
In addition to maximum likelihood and actor-critic training we implemented two versions of the REINFORCE gradient estimator. In the first version, we use a linear baseline network that takes the actor states as input, exactly as in (Ranzato et al., 2015). We also propose a novel extension of REINFORCE that leverages the extra information available in the ground-truth output Y. Specifically, we use the Q̂ estimates produced by the critic network as the baseline for the REINFORCE algorithm. The motivation behind this approach is that using the ground-truth output should produce a better baseline that lowers the variance of REINFORCE, resulting in higher task-specific scores. We refer to this method as REINFORCE-critic.
5.1 SPELLING CORRECTION
We use text from the One Billion Word dataset for the spelling correction task (Chelba et al., 2013), which has pre-defined training and testing sets. The training data was abundant, and we never used any example twice. We evaluate trained models on a section of the test data that comprises 6075 sentences. To speed up experiments, we clipped all sentences to the first 10 or 30 characters.
For the spelling correction actor network, we use an RNN with 100 Gated Recurrent Units (GRU) and a bidirectional GRU network for the encoder. We use the same attention mechanism as proposed in (Bahdanau et al., 2015), which effectively makes our actor network a smaller version of the model used in that work. For the critic network, we employed a model with the same architecture as the actor.
We use the character error rate (CER) to measure performance on the spelling task, which we define as the ratio between the sum of Levenshtein distances between predictions and ground-truth outputs and the total length of the ground-truth outputs. This is a corpus-level metric for which a lower value is better. We use it as the return by negating per-sentence ratios. At evaluation time greedy search is used to extract predictions from the model.
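For reference, a self-contained sketch of this metric:

```python
def levenshtein(a, b):
    """Edit distance between two character sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def cer(predictions, references):
    """Corpus-level character error rate: total edit distance divided by
    the total length of the ground-truth outputs (lower is better)."""
    total = sum(levenshtein(p, r) for p, r in zip(predictions, references))
    return total / sum(len(r) for r in references)

print(cer(["helo wrld"], ["hello world"]))  # 2 edits / 11 characters
```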
We use the ADAM optimizer (Kingma & Ba, 2015) to train all the networks with the parameters recommended in the original paper, with the exception of the scale parameter α. The latter is first set to 10^{-3} and then annealed to 10^{-4} for log-likelihood training. For the pre-training stage of the actor-critic, we use α = 10^{-3} and decrease it to 10^{-4} for the joint actor-critic training. We pretrain the actor until its score on the development set stops improving. We pretrain the critic until its TD error stabilizes³. We used M = 1 sample for both actor-critic and REINFORCE. For exact hyperparameter settings we refer the reader to Appendix A.
We start REINFORCE training from a pretrained actor, but we do not use the curriculum learning employed in MIXER. The critic is trained in the same way for both REINFORCE and actor-critic, including the pretraining stage. We report results obtained with the reward shaping described in Section 3, as we found that it slightly improves REINFORCE performance.
Table 1 presents our results on the spelling correction task. We observe an improvement in CER over log-likelihood training for all four settings considered. Without simultaneous log-likelihood training, actor-critic training results in a better CER than REINFORCE-critic in three
Figure 2: Progress of log-likelihood (LL), REINFORCE (RF) and actor-critic (AC) training in terms of BLEU score on the training (train) and validation (valid) datasets. LL* stands for the annealing phase of log-likelihood training. The curves start from the epoch of log-likelihood pretraining from which the parameters were initialized.
3A typical behaviour for the TD error was to grow at first and then start decreasing slowly. We found that stopping pretraining shortly after the TD error stops growing leads to good results.
Table 1: Character error rate of different methods on the spelling correction task. In the table L is the length of input strings and η is the probability of replacing a character with a random one. LL stands for log-likelihood training, AC and RF-C for the actor-critic and the REINFORCE-critic respectively, and AC+LL and RF-C+LL for the combinations of AC and RF-C with LL.
                  Character Error Rate
                  LL      AC      RF-C    AC+LL   RF-C+LL
L = 10, η = 0.3   17.81   17.24   17.82   16.65   16.97
L = 30, η = 0.3   18.4    17.31   18.16   17.1    17.47
L = 10, η = 0.5   38.12   35.89   35.84   34.6    35
L = 30, η = 0.5   40.87   37.0    37.6    36.36   36.6
Table 2: Our IWSLT 2014 machine translation results with a convolutional encoder compared to the previous work by Ranzato et al. Please see Table 1 for an explanation of abbreviations. The asterisk identifies results from (Ranzato et al., 2015). The numbers reported with ≤ were approximately read from Figure 6 of (Ranzato et al., 2015).
Decoding method   LL*     MIXER*    RF      RF-C    AC
greedy search     17.74   ≤ 20.3    20.92   22.24   21.66
beam search       20.73   ≤ 21.9    21.35   22.58   22.45
out of four settings. In the fourth case, actor-critic and REINFORCE-critic have similar performance. Adding the log-likelihood gradient with a coefficient λ_LL = 0.1 helps both of the methods, but actor-critic still retains a margin of improvement over REINFORCE-critic.
5.2 MACHINE TRANSLATION
For our first translation experiment, we use data from the German-English machine translation track of the IWSLT 2014 evaluation campaign (Cettolo et al., 2014), as used in Ranzato et al. (2015), and closely follow the pre-processing described in that work. The training data comprises about 153,000 German-English sentence pairs. In addition we considered the larger WMT14 English-French dataset (Cho et al., 2014) with more than 12 million examples. For further information about the data we refer the reader to Appendix B.
The return is defined as a smoothed and rescaled version of the BLEU score. Specifically, we start all n-gram counts from 1 instead of 0, and multiply the resulting score by the length of the ground-truth translation. Smoothing is a common practice when a sentence-level BLEU score is considered, and it has been used to apply REINFORCE in similar settings (Ranzato et al., 2015).
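A sketch of such a return; the add-one smoothing of the n-gram counts and the rescaling by the reference length follow the description above, while details such as keeping the brevity penalty are our assumptions:

```python
import math
from collections import Counter

def smoothed_bleu(hyp, ref, max_n=4):
    """Smoothed, rescaled sentence-level BLEU used as the return R."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        matched = sum((h & r).values())          # clipped n-gram matches
        total = max(len(hyp) - n + 1, 0)
        log_prec += math.log((matched + 1) / (total + 1))  # counts start at 1
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return len(ref) * brevity * math.exp(log_prec / max_n)
```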
IWSLT 2014 with a convolutional encoder In our first experiment we use a convolutional encoder in the actor to make our results more comparable with Ranzato et al. (2015). For the same reason, we use 256 hidden units in the networks. For the critic, we replaced the convolutional network with a bidirectional GRU network. For training this model we mostly used the same hyperparameter values as in the spelling correction experiments, with a few differences highlighted in Appendix A. For decoding we used greedy search and beam search with a beam size of 10. We found that penalizing candidate sentences that are too short was required to obtain the best results. Similarly to (Hannun et al., 2014), we subtracted ρT from the negative log-likelihood of each candidate sentence, where T is the candidate's length and ρ is a hyperparameter tuned on the validation set.
The results are summarized in Table 2. We report a significant improvement of 2.3 BLEU points over the log-likelihood baseline when greedy search is used for decoding. Surprisingly, the best performing method is REINFORCE with critic, with an additional 0.6 BLEU point advantage over the actor-critic. When beam search is used, the ranking of the compared approaches is the same, but the margin between the proposed methods and log-likelihood training becomes smaller. The final performances of the actor-critic and the REINFORCE-critic with greedy search are also 0.7 and 1.3 BLEU points respectively better than what Ranzato et al. (2015) report for their MIXER approach. This comparison should be treated with caution, because our log-likelihood baseline is 1.6 BLEU
Table 3: Our IWSLT 2014 machine translation results with a bidirectional recurrent encoder compared to the previous work. Please see Table 1 for an explanation of abbreviations. The asterisk identifies results from (Wiseman & Rush, 2016).
Model           LL*     BSO*    LL      RF-C    RF-C+LL   AC      AC+LL
greedy search   22.53   23.83   25.82   27.42   27.75     27.27   27.49
beam search     23.87   25.48   27.56   27.7    28.3      27.75   28.53
Table 4: Our WMT 14 machine translation results compared to the previous work. Please see Table 1 for an explanation of abbreviations. The apostrophe and the asterisk identify results from (Bahdanau et al., 2015) and (Shen et al., 2015) respectively.
Decoding method   LL'     LL*     MRT*    LL      AC+LL   RF-C+LL
greedy search     n/a     n/a     n/a     29.33   30.85   29.83
beam search       28.45   29.88   31.3    30.71   31.13   30.37
points stronger than its equivalent from (Ranzato et al., 2015). The performance of REINFORCE with a simple baseline matches the score reported for MIXER in Ranzato et al. (2015).
To better understand the IWSLT 2014 results we provide the learning curves for the considered approaches in Figure 2. We can clearly see that the training methods that use generated predictions have a strong regularization effect: better progress on the validation set in exchange for slower or negative progress on the training set. The effect is stronger for both REINFORCE varieties, especially for the one without a critic. The actor-critic training does a much better job of fitting the training set than REINFORCE and is the only method except log-likelihood that shows clear overfitting, which is a healthy behaviour for such a small dataset.
In addition, we performed an ablation study. We found that using a target network was crucial; while the joint actor-critic training was still progressing with γ_θ = 0.1, with γ_θ = 1.0 it did not work at all. Similarly important was the value penalty described in Equation (10). We found that good values of the λ coefficient were between 10^{-3} and 10^{-6}. Other techniques, such as reward shaping and a delayed actor, brought moderate performance gains. We refer the reader to Appendix A for more details.
IWSLT 2014 with a bidirectional GRU encoder In order to compare our results with those reported by Wiseman & Rush (2016) we repeated our IWSLT 2014 investigation with a different encoder, a bidirectional RNN with 256 GRU units. In this round of experiments we also tried to use combined training objectives in the same way as in our spelling correction experiments. The results are summarized in Table 3. One can see that the actor-critic training, especially its AC+LL version, yields significant improvements (1.7 with greedy search and 1.0 with beam search) upon pure log-likelihood training, which are comparable to those brought by Beam Search Optimization (BSO), even though our log-likelihood baseline is much stronger. In this round of experiments actor-critic and REINFORCE-critic performed on par.
WMT 14 Finally we report our results on the very popular large WMT14 English-French dataset (Cho et al., 2014) in Table 4. Our model closely follows the architecture from (Bahdanau et al., 2015); however, we achieved a higher baseline performance by annealing the learning rate α and penalizing output sequences that were too short during beam search. The actor-critic training brings a significant 1.5 BLEU improvement with greedy search and a noticeable 0.4 BLEU improvement with beam search. In previous work Shen et al. (2015) report a higher improvement of 1.4 BLEU with beam search; however, they use 100 samples for each training example, whereas we use just one. We note that in this experiment, which is perhaps the most realistic setting, the actor-critic enjoys a significant advantage over the REINFORCE-critic.
# 6 DISCUSSION
We proposed an actor-critic approach to sequence prediction. Our method takes the task objective into account during training and uses the ground-truth output to aid the critic in its prediction of intermediate targets for the actor. We showed that our method leads to significant improvements over maximum likelihood training on both a synthetic task and a machine translation benchmark. Compared to REINFORCE training on machine translation, actor-critic fits the training data much faster, although in some of our experiments we were able to significantly reduce the gap in the training speed and achieve a better test error using our critic network as the baseline for REINFORCE.
One interesting observation we made from the machine translation results is that the training methods that use generated predictions have a strong regularization effect. Our understanding is that conditioning on the sampled outputs effectively increases the diversity of training data. This phenomenon makes it harder to judge whether the actor-critic training meets our expectations, because a noisier gradient estimate yielded a better test set performance. We argue that the spelling correction results obtained on a virtually infinite dataset, in conjunction with better machine translation performance on the large WMT 14 dataset, provide convincing evidence that actor-critic training can be effective. In future work we will consider larger machine translation datasets.
We ran into several optimization issues. The critic would sometimes assign very high values to actions with a very low probability according to the actor. We were able to resolve this by penalizing the critic's variance. Additionally, the actor would sometimes have trouble adapting to the demands of the critic. We noticed that the action distribution tends to saturate and become deterministic, causing the gradient to vanish. We found that combining an RL training objective with log-likelihood can help, but in general we think this issue deserves further investigation. For example, one can look for suitable training criteria that have a well-behaved gradient even when the policy has little or no stochasticity.
In concurrent work Wu et al. (2016) show that a version of REINFORCE with the baseline computed using multiple samples can improve the performance of a very strong machine translation system. This result, and our REINFORCE-critic experiments, suggest that often the variance of REINFORCE can be reduced enough to make its application practical. That said, we would like to emphasize that this paper attacks the problem of gradient estimation from a very different angle, as it aims for low-variance but potentially high-bias estimates. The idea of using the ground-truth output that we proposed is an absolutely necessary first step in this direction. Future work could focus on further reducing the bias of the actor-critic estimate, for example, by using a multi-sample training criterion for the critic.
# ACKNOWLEDGMENTS
We thank the developers of Theano (Theano Development Team, 2016) and Blocks (van Merriënboer et al., 2015) for their great work. We thank NSERC, Compute Canada, Calcul Québec, Canada Research Chairs, CIFAR, the CHISTERA project M2CR (PCIN-2015-226) and the Samsung Institute of Advanced Technology for their financial support.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the ICLR 2015, 2015.
Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. Systems, Man and Cybernetics, IEEE Transactions on, (5):834–846, 1983.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. arXiv preprint arXiv:1506.03099, 2015.
Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT, 2014.
William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, KyungHyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. CoRR, abs/1506.07503, 2015. URL http://arxiv.org/abs/1506.07503.

Hal Daumé III and Daniel Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the 22nd International Conference on Machine Learning, pp. 169–176. ACM, 2005.

Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 75(3):297–325, 2009.

Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625–2634, 2015.
Vaibhava Goel and William J Byrne. Minimum Bayes-risk automatic speech recognition. Computer Speech & Language, 14(2):115–135, 2000.
Awni Y Hannun, Andrew L Maas, Daniel Jurafsky, and Andrew Y Ng. First-pass large vocabulary continuous speech recognition using bi-directional recurrent dnns. arXiv preprint arXiv:1408.2873, 2014.
Tamir Hazan, Joseph Keshet, and David A McAllester. Direct loss minimization for structured prediction. In Advances in Neural Information Processing Systems, pp. 1594–1602, 2010.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128–3137, 2015.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Chin-Yew Lin and Eduard Hovy. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pp. 71–78. Association for Computational Linguistics, 2003.

Francis Maes, Ludovic Denoyer, and Patrick Gallinari. Structured prediction with reinforcement learning. Machine learning, 77(2-3):271–301, 2009.
W Thomas Miller, Paul J Werbos, and Richard S Sutton. Neural networks for control. MIT press, 1995.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278–287, 1999.

Franz Josef Och. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pp. 160–167. Association for Computational Linguistics, 2003.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Association for Computational Linguistics, 2002.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Stéphane Ross, Geoffrey J Gordon, and J Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. arXiv preprint arXiv:1011.0686, 2010.
Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681, 1997.
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433, 2015.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 3104–3112, 2014.

Richard S Sutton. Learning to predict by the methods of temporal differences. Machine learning, 3(1):9–44, 1988.
Richard S Sutton and Andrew G Barto. Introduction to reinforcement learning, volume 135. MIT Press Cambridge, 1998.
Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063, 1999.
Richard Stuart Sutton. Temporal credit assignment in reinforcement learning. 1984.
Gerald Tesauro. Td-gammon, a self-teaching backgammon program, achieves master-level play. Neural computation, 6(2):215–219, 1994.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/ 1605.02688.
John N Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with function approximation. Automatic Control, IEEE Transactions on, 42(5):674–690, 1997.

Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. arXiv:1506.00619 [cs, stat], June 2015.

Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164, 2015.
Andreas Vlachos. An investigation of imitation learning algorithms for structured prediction. In EWRL, pp. 143–154. Citeseer, 2012.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
Sam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960, 2016.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 2048–2057, 2015.
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
Table 5: Results of an ablation study. We varied the actor update speed γθ, the critic update speed γφ, the value penalty coefficient λ, whether or not reward shaping is used, and whether or not temporal difference (TD) learning is used for the critic. Reported are the best training and validation BLEU scores obtained in the course of the first 10 training epochs. Some of the validation scores would still improve with longer training. Greedy search was used for decoding.
| Setting | γθ | γφ | λ | shaping | TD | Train BLEU | Valid BLEU |
|---|---|---|---|---|---|---|---|
| baseline | 0.001 | 0.001 | 10⁻³ | yes | yes | 33.73 | 23.16 |
| with different γφ | 0.001 | 0.01 | 10⁻³ | yes | yes | 33.52 | 23.03 |
| | 0.001 | 0.1 | 10⁻³ | yes | yes | 32.63 | 22.80 |
| | 0.001 | 1 | 10⁻³ | yes | yes | 9.59 | 8.14 |
| with different γθ | 1 | 0.001 | 10⁻³ | yes | yes | 32.9 | 22.88 |
| without reward shaping | 0.001 | 0.001 | 10⁻³ | no | yes | 32.74 | 22.61 |
| without TD learning | 0.001 | 0.001 | 10⁻³ | yes | no | 23.2 | 16.36 |
| with different λ | 0.001 | 0.001 | 3·10⁻³ | yes | yes | 32.4 | 22.48 |
| | 0.001 | 0.001 | 10⁻⁴ | yes | yes | 34.10 | 23.15 |
| | 0.001 | 0.001 | 10⁻⁶ | yes | yes | 35.00 | 23.10 |
| | 0.001 | 0.001 | 10⁻⁸ | yes | yes | 33.6 | 22.72 |
| | 0.001 | 0.001 | 0 | yes | yes | 27.41 | 20.55 |
# A HYPERPARAMETERS
For machine translation experiments the variance penalty coefficient λ was set to 10⁻⁴, and the delay coefficients γθ and γφ were both set to 10⁻⁴. For REINFORCE with the critic we did not use a delayed actor, i.e. γθ was set to 1. For the spelling correction task we used the same γθ and γφ but a different λ = 10⁻³. When we used a combined training criterion, the weight of the log-likelihood gradient λLL was always 0.1. All initial weights were sampled from a centered uniform distribution with width 0.1.
In some of our experiments we provided the actor states as additional inputs to the critic. Specifically, we did so in our spelling correction experiments and in our WMT 14 machine translation study. All the other results were obtained without this technique.
For decoding with beam search we subtracted the length of a candidate times ρ from the log-likelihood cost. The exact value of ρ was selected on the validation set and was equal to 0.8 for models trained by log-likelihood and REINFORCE, and to 1.0 for models trained by actor-critic and REINFORCE-critic.
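For clarity, the length-adjusted cost used to rank beam candidates can be written as follows (a sketch; ρ is the constant tuned on the validation set as described above):

```python
def beam_cost(log_likelihood_cost, candidate_length, rho):
    # Subtract rho times the candidate length from the log-likelihood cost,
    # so that longer candidates are not unduly penalized during beam search.
    return log_likelihood_cost - rho * candidate_length
```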
For some of the hyperparameters we performed an ablation study. The results are reported in Table 5.
# B DATA
For the IWSLT 2014 data the sizes of the validation and test sets were 6,969 and 6,750, respectively. We limited the English and German vocabularies to the 22,822 and 32,009 most frequent words, respectively, and replaced all other words with a special token. The maximum sentence length in our dataset was 50. For WMT 14 we used vocabularies of 30,000 words for both English and French, and the maximum sentence length was also 50.
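A vocabulary truncation of this kind can be sketched as follows (helper names are ours, not the paper's code):

```python
from collections import Counter

def build_vocab(tokenized_sentences, max_size):
    # Keep the max_size most frequent words; all others map to a special token.
    counts = Counter(w for sent in tokenized_sentences for w in sent)
    kept = [w for w, _ in counts.most_common(max_size)]
    return {w: i for i, w in enumerate(["<unk>"] + kept)}

def encode(sentence, vocab):
    return [vocab.get(w, vocab["<unk>"]) for w in sentence]
```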
# C GENERATED Q-VALUES
In Figure 3 we provide an example of the value predictions Q̂ that the critic outputs for candidate next words. One can see that the critic has indeed learnt to assign larger values to appropriate next words. While the critic does not always produce sensible estimates and can often predict a high return for irrelevant rare words, this is greatly reduced by the variance penalty term from Equation (10).
Figure 3: The best 3 words according to the critic at intermediate steps of generating a translation. The numbers in parentheses are the value predictions Q̂. The German original is "über eine davon will ich hier erzählen." The reference translation is "and there's one I want to talk about".
Words with the largest Q̂ at each decoding step (value predictions in parentheses):

1. and (6.623), there (6.200), but (5.967)
2. that (6.197), one (5.668), 's (5.467)
3. that (5.408), one (5.118), i (5.002)
4. that (4.796), i (4.629), , (4.139)
5. want (5.008), i (4.160), 't (3.361)
6. to (4.729), want (3.497), going (3.396)
7. talk (3.717), you (2.407), to (2.133)
8. about (1.209), that (0.989), talk (0.924)
9. about (0.706), . (0.660), right (0.653)
10. . (0.498), ? (0.291), " (0.285)
11. . (0.195), there (0.175), know (0.087)
12. . (0.168), " (-0.093), ? (-0.173)
# D PROOF OF EQUATION (7)
$$\begin{aligned}
\frac{dV}{d\theta} &= \frac{d}{d\theta}\,\mathop{\mathbb{E}}_{\hat{Y}\sim p(\hat{Y})} R(\hat{Y})
= \frac{d}{d\theta} \sum_{\hat{Y}} \left[ p(\hat{y}_1)\, p(\hat{y}_2\,|\,\hat{y}_1) \cdots p(\hat{y}_T\,|\,\hat{y}_1 \ldots \hat{y}_{T-1}) \right] R(\hat{Y}) \\
&= \sum_{\hat{Y}} \sum_{t=1}^{T} \frac{d p(\hat{y}_t\,|\,\hat{y}_{1\ldots t-1})}{d\theta}\, \frac{p(\hat{Y})}{p(\hat{y}_t\,|\,\hat{y}_{1\ldots t-1})}\, R(\hat{Y}) \\
&= \sum_{t=1}^{T} \sum_{\hat{y}_{1\ldots t-1}} p(\hat{y}_{1\ldots t-1}) \sum_{\hat{y}_t} \frac{d p(\hat{y}_t\,|\,\hat{y}_{1\ldots t-1})}{d\theta} \left( r_t(\hat{y}_t; \hat{y}_{1\ldots t-1}) + \sum_{\hat{y}_{t+1\ldots T}} p(\hat{y}_{t+1\ldots T}\,|\,\hat{y}_{1\ldots t}) \sum_{\tau=t+1}^{T} r_\tau(\hat{y}_\tau; \hat{y}_{1\ldots \tau-1}) \right) \\
&= \sum_{t=1}^{T}\; \mathop{\mathbb{E}}_{\hat{y}_{1\ldots t-1} \sim p(\hat{y}_{1\ldots t-1})} \sum_{\hat{y}_t} \frac{d p(\hat{y}_t\,|\,\hat{y}_{1\ldots t-1})}{d\theta}\, Q(\hat{y}_t; \hat{y}_{1\ldots t-1}) \\
&= \mathop{\mathbb{E}}_{\hat{Y}\sim p(\hat{Y})} \sum_{t=1}^{T} \sum_{a \in A} \frac{d p(a\,|\,\hat{y}_{1\ldots t-1})}{d\theta}\, Q(a; \hat{y}_{1\ldots t-1})
\end{aligned}$$
1607.06450 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 |
# Layer Normalization
# Jimmy Lei Ba University of Toronto jimmy@psi.toronto.edu
Jamie Ryan Kiros University of Toronto rkiros@cs.toronto.edu
Geoffrey E. Hinton University of Toronto and Google Inc. hinton@cs.toronto.edu
# Abstract
Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute a mean and variance which are then used to normalize the summed input to that neuron on each training case. This significantly reduces the training time in feed-forward neural networks. However, the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent neural networks. In this paper, we transpose batch normalization into layer normalization by computing the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Like batch normalization, we also give each neuron its own adaptive bias and gain which are applied after the normalization but before the non-linearity. Unlike batch normalization, layer normalization performs exactly the same computation at training and test times. It is also straightforward to apply to recurrent neural networks by computing the normalization statistics separately at each time step. Layer normalization is very effective at stabilizing the hidden state dynamics in recurrent networks. Empirically, we show that layer normalization can substantially reduce the training time compared with previously published techniques.
# 1 Introduction
Deep neural networks trained with some version of Stochastic Gradient Descent have been shown to substantially outperform previous approaches on various supervised learning tasks in computer vision [Krizhevsky et al., 2012] and speech processing [Hinton et al., 2012]. But state-of-the-art deep neural networks often require many days of training. It is possible to speed up the learning by computing gradients for different subsets of the training cases on different machines or splitting the neural network itself over many machines [Dean et al., 2012], but this can require a lot of communication and complex software. It also tends to lead to rapidly diminishing returns as the degree of parallelization increases. An orthogonal approach is to modify the computations performed in the forward pass of the neural net to make learning easier. Recently, batch normalization [Ioffe and Szegedy, 2015] has been proposed to reduce training time by including additional normalization stages in deep neural networks. The normalization standardizes each summed input using its mean and its standard deviation across the training data. Feedforward neural networks trained using batch normalization converge faster even with simple SGD. In addition to the training time improvement, the stochasticity from the batch statistics serves as a regularizer during training.
Despite its simplicity, batch normalization requires running averages of the summed input statistics. In feed-forward networks with fixed depth, it is straightforward to store the statistics separately for each hidden layer. However, the summed inputs to the recurrent neurons in a recurrent neural network (RNN) often vary with the length of the sequence, so applying batch normalization to RNNs appears to require different statistics for different time-steps. Furthermore, batch normalization cannot be applied to online learning tasks or to extremely large distributed models where the minibatches have to be small.
This paper introduces layer normalization, a simple normalization method to improve the training speed for various neural network models. Unlike batch normalization, the proposed method directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. We show that layer normalization works well for RNNs and improves both the training time and the generalization performance of several existing RNN models.
# 2 Background
A feed-forward neural network is a non-linear mapping from an input pattern x to an output vector y. Consider the lth hidden layer in a deep feed-forward neural network, and let a^l be the vector representation of the summed inputs to the neurons in that layer. The summed inputs are computed through a linear projection with the weight matrix W^l and the bottom-up inputs h^l, given as follows:
$$a_i^l = {w_i^l}^\top h^l, \qquad h_i^{l+1} = f(a_i^l + b_i^l) \qquad (1)$$

where f(·) is an element-wise non-linear function, w_i^l is the incoming weights to the ith hidden unit and b_i^l is the scalar bias parameter. The parameters in the neural network are learnt using gradient-based optimization algorithms with the gradients being computed by back-propagation.
One of the challenges of deep learning is that the gradients with respect to the weights in one layer are highly dependent on the outputs of the neurons in the previous layer, especially if these outputs change in a highly correlated way. Batch normalization [Ioffe and Szegedy, 2015] was proposed to reduce such undesirable "covariate shift". The method normalizes the summed inputs to each hidden unit over the training cases. Specifically, for the ith summed input in the lth layer, the batch normalization method rescales the summed inputs according to their variances under the distribution of the data
$$\bar{a}_i^l = \frac{g_i^l}{\sigma_i^l}\left(a_i^l - \mu_i^l\right), \qquad \mu_i^l = \mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[a_i^l\right], \qquad \sigma_i^l = \sqrt{\mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\left(a_i^l - \mu_i^l\right)^2\right]} \qquad (2)$$
where ā_i^l is the normalized summed input to the ith hidden unit in the lth layer and g_i is a gain parameter scaling the normalized activation before the non-linear activation function. Note the expectation is under the whole training data distribution. It is typically impractical to compute the expectations in Eq. (2) exactly, since it would require forward passes through the whole training dataset with the current set of weights. Instead, µ and σ are estimated using the empirical samples from the current mini-batch. This puts constraints on the size of a mini-batch and it is hard to apply to recurrent neural networks.
# 3 Layer normalization
We now consider the layer normalization method which is designed to overcome the drawbacks of batch normalization.
Notice that changes in the output of one layer will tend to cause highly correlated changes in the summed inputs to the next layer, especially with ReLU units whose outputs can change by a lot. This suggests the "covariate shift" problem can be reduced by fixing the mean and the variance of the summed inputs within each layer. We, thus, compute the layer normalization statistics over all the hidden units in the same layer as follows:
$$\mu^l = \frac{1}{H}\sum_{i=1}^{H} a_i^l, \qquad \sigma^l = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\left(a_i^l - \mu^l\right)^2} \qquad (3)$$
where H denotes the number of hidden units in a layer. The difference between Eq. (2) and Eq. (3) is that under layer normalization, all the hidden units in a layer share the same normalization terms µ and σ, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of a mini-batch and it can be used in the pure online regime with batch size 1.
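The distinction between Eq. (2) and Eq. (3) is simply the axis along which the statistics are estimated, as the following minimal numpy sketch illustrates (the eps term is added by us for numerical stability):

```python
import numpy as np

def batch_norm_stats(a):
    # a: (batch, hidden). One (mu, sigma) per hidden unit, over the mini-batch.
    return a.mean(axis=0), a.std(axis=0)

def layer_norm(a, eps=1e-5):
    # a: (batch, hidden). One (mu, sigma) per training case, over the hidden
    # units, so the normalization is well defined even with batch size 1.
    mu = a.mean(axis=1, keepdims=True)
    sigma = a.std(axis=1, keepdims=True)
    return (a - mu) / (sigma + eps)
```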
# 3.1 Layer normalized recurrent neural networks
The recent sequence to sequence models [Sutskever et al., 2014] utilize compact recurrent neural networks to solve sequential prediction problems in natural language processing. It is common among the NLP tasks to have different sentence lengths for different training cases. This is easy to deal with in an RNN because the same weights are used at every time-step. But when we apply batch normalization to an RNN in the obvious way, we need to compute and store separate statistics for each time step in a sequence. This is problematic if a test sequence is longer than any of the training sequences. Layer normalization does not have such a problem because its normalization terms depend only on the summed inputs to a layer at the current time-step. It also has only one set of gain and bias parameters shared over all time-steps.
In a standard RNN, the summed inputs in the recurrent layer are computed from the current input x^t and the previous vector of hidden states h^{t-1} as a^t = W_{hh} h^{t-1} + W_{xh} x^t. The layer normalized recurrent layer re-centers and re-scales its activations using the extra normalization terms, similar to Eq. (3):
$$\mathbf{h}^t = f\!\left[\frac{\mathbf{g}}{\sigma^t} \odot \left(\mathbf{a}^t - \mu^t\right) + \mathbf{b}\right], \qquad \mu^t = \frac{1}{H}\sum_{i=1}^{H} a_i^t, \qquad \sigma^t = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\left(a_i^t - \mu^t\right)^2} \qquad (4)$$
where W_{hh} is the recurrent hidden-to-hidden weight matrix and W_{xh} is the bottom-up input-to-hidden weight matrix. ⊙ is the element-wise multiplication between two vectors. b and g are defined as the bias and gain parameters, of the same dimension as h^t.
In a standard RNN, there is a tendency for the average magnitude of the summed inputs to the recur- rent units to either grow or shrink at every time-step, leading to exploding or vanishing gradients. In a layer normalized RNN, the normalization terms make it invariant to re-scaling all of the summed inputs to a layer, which results in much more stable hidden-to-hidden dynamics.
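A minimal numpy sketch of one layer-normalized RNN step following Eq. (4); parameter shapes and the tanh non-linearity are illustrative:

```python
import numpy as np

def ln_rnn_step(x_t, h_prev, W_xh, W_hh, g, b, eps=1e-5):
    # Summed inputs at time t, normalized per time step with shared g and b.
    a_t = W_hh @ h_prev + W_xh @ x_t
    mu, sigma = a_t.mean(), a_t.std()
    return np.tanh(g / (sigma + eps) * (a_t - mu) + b)
```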
# 4 Related work
Batch normalization has been previously extended to recurrent neural networks [Laurent et al., 2015, Amodei et al., 2015, Cooijmans et al., 2016]. The previous work [Cooijmans et al., 2016] suggests the best performance of recurrent batch normalization is obtained by keeping independent normalization statistics for each time-step. The authors show that initializing the gain parameter in the recurrent batch normalization layer to 0.1 makes a significant difference in the final performance of the model. Our work is also related to weight normalization [Salimans and Kingma, 2016]. In weight normalization, instead of the variance, the L2 norm of the incoming weights is used to normalize the summed inputs to a neuron. Applying either weight normalization or batch normalization using expected statistics is equivalent to a different parameterization of the original feed-forward neural network. Re-parameterization in the ReLU network was studied in the Path-normalized SGD [Neyshabur et al., 2015]. Our proposed layer normalization method, however, is not a re-parameterization of the original neural network. The layer normalized model thus has different invariance properties than the other methods, which we will study in the following section.
# 5 Analysis
In this section, we investigate the invariance properties of different normalization schemes.
# Invariance under weights and data transformations
The proposed layer normalization is related to batch normalization and weight normalization. Although their normalization scalars are computed differently, these methods can be summarized as normalizing the summed inputs a_i to a neuron through the two scalars µ and σ. They also learn an adaptive bias b and gain g for each neuron after the normalization.
$$h_i = f\!\left(\frac{g_i}{\sigma_i}\left(a_i - \mu_i\right) + b_i\right) \qquad (5)$$
Note that for batch normalization and layer normalization, µ and σ are computed according to Eq. (2) and Eq. (3), respectively. In weight normalization, µ is 0 and σ = ‖w‖₂.
Table 1: Invariance properties under the normalization methods.

| | Weight matrix re-scaling | Weight matrix re-centering | Weight vector re-scaling | Dataset re-scaling | Dataset re-centering | Single training case re-scaling |
|---|---|---|---|---|---|---|
| Batch norm | Invariant | No | Invariant | Invariant | Invariant | No |
| Weight norm | Invariant | No | Invariant | No | No | No |
| Layer norm | Invariant | Invariant | No | Invariant | No | Invariant |
Table 1 highlights the following invariance results for three normalization methods.
Weight re-scaling and re-centering: First, observe that under batch normalization and weight normalization, any re-scaling of the incoming weights w_i of a single neuron has no effect on the normalized summed inputs to that neuron. To be precise, under batch and weight normalization, if the weight vector is scaled by δ, the two scalars µ and σ will also be scaled by δ. The normalized summed inputs stay the same before and after scaling, so batch and weight normalization are invariant to the re-scaling of the weights. Layer normalization, on the other hand, is not invariant to the individual scaling of single weight vectors. Instead, layer normalization is invariant to scaling of the entire weight matrix and invariant to a shift to all of the incoming weights in the weight matrix. Let there be two sets of model parameters θ, θ′ whose weight matrices W and W′ differ by a scaling factor δ, and let all of the incoming weights in W′ also be shifted by a constant vector γ, that is W′ = δW + 1γ⊤. Under layer normalization, the two models effectively compute the same output:
$$\mathbf{h}' = f\!\left(\frac{\mathbf{g}}{\sigma'}\left(W'\mathbf{x} - \mu'\right) + \mathbf{b}\right) = f\!\left(\frac{\mathbf{g}}{\sigma'}\left(\left(\delta W + \mathbf{1}\gamma^\top\right)\mathbf{x} - \mu'\right) + \mathbf{b}\right) = f\!\left(\frac{\mathbf{g}}{\sigma}\left(W\mathbf{x} - \mu\right) + \mathbf{b}\right) = \mathbf{h}. \qquad (6)$$
Notice that if normalization is only applied to the input before the weights, the model will not be invariant to re-scaling and re-centering of the weights.
Data re-scaling and re-centering: We can show that all the normalization methods are invariant to re-scaling the dataset by verifying that the summed inputs of neurons stay constant under the changes. Furthermore, layer normalization is invariant to re-scaling of individual training cases, because the normalization scalars µ and σ in Eq. (3) only depend on the current input data. Let x′ be a new data point obtained by re-scaling x by δ. Then we have,
$$h_i' = f\!\left(\frac{g_i}{\sigma'}\left(w_i^\top \mathbf{x}' - \mu'\right) + b_i\right) = f\!\left(\frac{g_i}{\delta\sigma}\left(\delta w_i^\top \mathbf{x} - \delta\mu\right) + b_i\right) = h_i. \qquad (7)$$
It is easy to see that re-scaling individual data points does not change the model's prediction under layer normalization. Similar to the re-centering of the weight matrix in layer normalization, we can also show that batch normalization is invariant to re-centering of the dataset.
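These invariances are easy to verify numerically; a small sketch checking the weight matrix re-scaling and re-centering case of Eq. (6), with illustrative sizes:

```python
import numpy as np

def ln(z, g, b, eps=1e-8):
    return g * (z - z.mean()) / (z.std() + eps) + b

rng = np.random.default_rng(0)
W, x = rng.normal(size=(5, 3)), rng.normal(size=3)
g, b = np.ones(5), np.zeros(5)

# W' = delta * W + 1 gamma^T with delta = 2 and gamma = 0.7 * ones:
W_prime = 2.0 * W + np.outer(np.ones(5), 0.7 * np.ones(3))
h = np.tanh(ln(W @ x, g, b))
h_prime = np.tanh(ln(W_prime @ x, g, b))
print(np.allclose(h, h_prime))  # True (up to the eps term)
```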
# 5.2 Geometry of parameter space during learning
We have investigated the invariance of the model's prediction under re-centering and re-scaling of the parameters. Learning, however, can behave very differently under different parameterizations, even though the models express the same underlying function. In this section, we analyze learning behavior through the geometry and the manifold of the parameter space. We show that the normalization scalar σ can implicitly reduce the learning rate and make learning more stable.
# 5.2.1 Riemannian metric
The learnable parameters in a statistical model form a smooth manifold that consists of all possible input-output relations of the model. For models whose output is a probability distribution, a natural way to measure the separation of two points on this manifold is the Kullback-Leibler divergence between their model output distributions. Under the KL divergence metric, the parameter space is a Riemannian manifold.
The curvature of a Riemannian manifold is entirely captured by its Riemannian metric, whose quadratic form is denoted as ds². That is the infinitesimal distance in the tangent space at a point in the parameter space. Intuitively, it measures the changes in the model output from the parameter space along a tangent direction. The Riemannian metric under KL was previously studied [Amari, 1998] and was shown to be well approximated under second order Taylor expansion using the Fisher
information matrix:
$$ds^2 = D_{\mathrm{KL}}\!\left[P(y\,|\,\mathbf{x};\theta)\,\|\,P(y\,|\,\mathbf{x};\theta+\delta)\right] \approx \frac{1}{2}\,\delta^\top F(\theta)\,\delta, \qquad (8)$$
$$F(\theta) = \mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x}),\, y\sim P(y\,|\,\mathbf{x})}\!\left[\frac{\partial \log P(y\,|\,\mathbf{x};\theta)}{\partial\theta}\,\frac{\partial \log P(y\,|\,\mathbf{x};\theta)}{\partial\theta}^\top\right], \qquad (9)$$
where δ is a small change to the parameters. The Riemannian metric above presents a geometric view of parameter spaces. The following analysis of the Riemannian metric provides some insight into how normalization methods could help in training neural networks.
# 5.2.2 The geometry of normalized generalized linear models
We focus our geometric analysis on the generalized linear model. The results from the following analysis can be easily applied to understand deep neural networks with a block-diagonal approximation to the Fisher information matrix, where each block corresponds to the parameters for a single neuron.
A generalized linear model (GLM) can be regarded as parameterizing an output distribution from the exponential family using a weight vector w and bias scalar b. To be consistent with the previous sections, the log likelihood of the GLM can be written using the summed inputs a as the following:
$$\log P(y\,|\,\mathbf{x};\,\mathbf{w}, b) = \frac{(a+b)y - \eta(a+b)}{\phi} + c(y,\phi), \qquad (10)$$
$$\mathbb{E}[y\,|\,\mathbf{x}] = f(a+b) = f(\mathbf{w}^\top\mathbf{x} + b), \qquad \mathrm{Var}[y\,|\,\mathbf{x}] = \phi f'(a+b), \qquad (11)$$
where f(·) is the transfer function that is the analog of the non-linearity in neural networks, f′(·) is the derivative of the transfer function, η(·) is a real valued function and c(·) is the log partition function. φ is a constant that scales the output variance. Assume an H-dimensional output vector y = [y₁, y₂, ..., y_H] is modeled using H independent GLMs and log P(y | x; W, b) = Σᵢ log P(yᵢ | x; wᵢ, bᵢ). Let W be the weight matrix whose rows are the weight vectors of the individual GLMs, b denote the bias vector of length H and vec(·) denote the Kronecker vector operator. The Fisher information matrix for the multi-dimensional GLM with respect to its parameters θ = [w₁⊤, b₁, ..., w_H⊤, b_H]⊤ = vec([W, b]⊤) is simply the expected Kronecker product of the data features and the output covariance matrix:
$$F(\theta) = \mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\frac{\mathrm{Cov}[\mathbf{y}\,|\,\mathbf{x}]}{\phi^2} \otimes \begin{bmatrix}\mathbf{x}\mathbf{x}^\top & \mathbf{x} \\ \mathbf{x}^\top & 1\end{bmatrix}\right]. \qquad (12)$$
We obtain normalized GLMs by applying the normalization methods to the summed inputs a in the original model through µ and σ. Without loss of generality, we denote F̄ as the Fisher information matrix under the normalized multi-dimensional GLM with the additional gain parameters θ = vec([W, b, g]⊤):
$$\bar{F}(\theta) = \begin{bmatrix}\bar{F}_{11} & \cdots & \bar{F}_{1H} \\ \vdots & \ddots & \vdots \\ \bar{F}_{H1} & \cdots & \bar{F}_{HH}\end{bmatrix}, \quad \bar{F}_{ij} = \mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\frac{\mathrm{Cov}[y_i, y_j\,|\,\mathbf{x}]}{\phi^2}\begin{bmatrix}\frac{g_i g_j}{\sigma_i\sigma_j}\chi_i\chi_j^\top & \frac{g_i}{\sigma_i}\chi_i & \frac{g_i(a_j-\mu_j)}{\sigma_i\sigma_j}\chi_i \\ \frac{g_j}{\sigma_j}\chi_j^\top & 1 & \frac{a_j-\mu_j}{\sigma_j} \\ \frac{g_j(a_i-\mu_i)}{\sigma_i\sigma_j}\chi_j^\top & \frac{a_i-\mu_i}{\sigma_i} & \frac{(a_i-\mu_i)(a_j-\mu_j)}{\sigma_i\sigma_j}\end{bmatrix}\right] \qquad (13)$$

$$\chi_i = \mathbf{x} - \frac{\partial\mu_i}{\partial w_i} - \frac{a_i-\mu_i}{\sigma_i}\,\frac{\partial\sigma_i}{\partial w_i}. \qquad (14)$$
Implicit learning rate reduction through the growth of the weight vector: Notice that, compared to the standard GLM, the block F̄_{ij} along the weight vector w_i direction is scaled by the gain parameters and the normalization scalar σ_i. If the norm of the weight vector w_i grows twice as large, even though the model's output remains the same, the Fisher information matrix will be different. The curvature along the w_i direction will change by a factor of 1/2 because σ_i will also be twice as large. As a result, for the same parameter update in the normalized model, the norm of the weight vector effectively controls the learning rate for the weight vector. During learning, it is harder to change the orientation of a weight vector with large norm. The normalization methods, therefore,
Figure 1: Recall@K curves using order-embeddings with and without layer normalization. Panels: (a) Recall@1, (b) Recall@5, (c) Recall@10.
Table 2: Average results across 5 test splits for caption and image retrieval on MSCOCO. R@K is Recall@K (high is good). Mean r is the mean rank (low is good). Sym corresponds to the symmetric baseline while OE indicates order-embeddings.

| Model | Caption R@1 | Caption R@5 | Caption R@10 | Caption Mean r | Image R@1 | Image R@5 | Image R@10 | Image Mean r |
|---|---|---|---|---|---|---|---|---|
| Sym [Vendrov et al., 2016] | 45.4 | – | 88.7 | 5.8 | 36.3 | – | 85.8 | 9.0 |
| OE [Vendrov et al., 2016] | 46.7 | – | 88.9 | 5.7 | 37.9 | – | 85.9 | 8.1 |
| OE (ours) | 46.6 | 79.3 | 89.1 | 5.2 | 37.8 | 73.6 | 85.7 | 7.9 |
| OE + LN | 48.5 | 80.6 | 89.8 | 5.1 | 38.9 | 74.3 | 86.3 | 7.6 |
have an implicit "early stopping" effect on the weight vectors and help to stabilize learning towards convergence.
Learning the magnitude of incoming weights: In normalized models, the magnitude of the incoming weights is explicitly parameterized by the gain parameters. We compare how the model output changes between updating the gain parameters in the normalized GLM and updating the magnitude of the equivalent weights under the original parameterization during learning. The direction along the gain parameters in F̄ captures the geometry for the magnitude of the incoming weights. We show that the Riemannian metric along the magnitude of the incoming weights for the standard GLM is scaled by the norm of its input, whereas learning the gain parameters for the batch normalized and layer normalized models depends only on the magnitude of the prediction error. Learning the magnitude of incoming weights in the normalized model is, therefore, more robust to the scaling of the input and its parameters than in the standard model. See Appendix for detailed derivations.
# 6 Experimental results
We perform experiments with layer normalization on 6 tasks, with a focus on recurrent neural networks: image-sentence ranking, question-answering, contextual language modelling, generative modelling, handwriting sequence generation and MNIST classification. Unless otherwise noted, the default initialization of layer normalization is to set the adaptive gains to 1 and the biases to 0 in the experiments.
# 6.1 Order embeddings of images and language
In this experiment, we apply layer normalization to the recently proposed order-embeddings model of Vendrov et al. [2016] for learning a joint embedding space of images and sentences. We follow the same experimental protocol as Vendrov et al. [2016] and modify their publicly available code to incorporate layer normalization 1, using Theano [Team et al., 2016]. Images and sentences from the Microsoft COCO dataset [Lin et al., 2014] are embedded into a common vector space, where a GRU [Cho et al., 2014] is used to encode sentences and the outputs of a pre-trained VGG ConvNet [Simonyan and Zisserman, 2015] (10-crop) are used to encode images. The order-embedding model represents images and sentences as a 2-level partial ordering and replaces the cosine similarity scoring function used in Kiros et al. [2014] with an asymmetric one.
# 1https://github.com/ivendrov/order-embedding
[Figure: validation error rate vs. training steps (thousands) for the attentive reader, comparing LSTM, BN-LSTM, BN-everywhere and LN-LSTM.]
Figure 2: Validation curves for the attentive reader model. BN results are taken from [Cooijmans et al., 2016].
We trained two models: the baseline order-embedding model as well as the same model with layer normalization applied to the GRU. After every 300 iterations, we compute Recall@K (R@K) values on a held out validation set and save the model whenever R@K improves. The best performing models are then evaluated on 5 separate test sets, each containing 1000 images and 5000 captions, for which the mean results are reported. Both models use Adam [Kingma and Ba, 2014] with the same initial hyperparameters and both models are trained using the same architectural choices as used in Vendrov et al. [2016]. We refer the reader to the appendix for a description of how layer normalization is applied to GRU.
Figure 1 illustrates the validation curves of the models, with and without layer normalization. We plot R@1, R@5 and R@10 for the image retrieval task. We observe that layer normalization offers a per-iteration speedup across all metrics and converges to its best validation model in 60% of the time it takes the baseline model to do so. In Table 2, the test set results are reported from which we observe that layer normalization also results in improved generalization over the original model. The results we report are state-of-the-art for RNN embedding models, with only the structure-preserving model of Wang et al. [2016] reporting better results on this task. However, they evaluate under different conditions (1 test set instead of the mean over 5) and are thus not directly comparable.
# 6.2 Teaching machines to read and comprehend
In order to compare layer normalization to the recently proposed recurrent batch normalization [Cooijmans et al., 2016], we train a unidirectional attentive reader model on the CNN corpus, both introduced by Hermann et al. [2015]. This is a question-answering task where a query description about a passage must be answered by filling in a blank. The data is anonymized such that entities are given randomized tokens to prevent degenerate solutions, which are consistently permuted during training and evaluation. We follow the same experimental protocol as Cooijmans et al. [2016] and modify their public code to incorporate layer normalization 2, using Theano [Team et al., 2016]. We obtained the pre-processed dataset used by Cooijmans et al. [2016], which differs from the original experiments of Hermann et al. [2015] in that each passage is limited to 4 sentences. In Cooijmans et al. [2016], two variants of recurrent batch normalization are used: one where BN is only applied to the LSTM while the other applies BN everywhere throughout the model. In our experiment, we only apply layer normalization within the LSTM.
The results of this experiment are shown in Figure 2. We observe that layer normalization not only trains faster but converges to a better validation result than both the baseline and BN variants. In Cooijmans et al. [2016], it is argued that the scale parameter in BN must be carefully chosen and is set to 0.1 in their experiments. We experimented with layer normalization for both 1.0 and 0.1 scale initialization and found that the former model performed significantly better. This demonstrates that layer normalization is not sensitive to the initial scale in the same way that recurrent BN is. 3
# 6.3 Skip-thought vectors
Skip-thoughts [Kiros et al., 2015] is a generalization of the skip-gram model [Mikolov et al., 2013] for learning unsupervised distributed sentence representations. Given contiguous text, a sentence is
2https://github.com/cooijmanstim/Attentive_reader/tree/bn 3We only produce results on the validation set, as in the case of Cooijmans et al. [2016]
Figure 3: Performance of skip-thought vectors with and without layer normalization on downstream tasks as a function of training iterations. Panels: (a) SICK(r), (b) SICK(MSE), (c) MR, (d) CR, (e) SUBJ, (f) MPQA. The original lines are the reported results in [Kiros et al., 2015]. Plots with error use 10-fold cross validation. Best seen in color.
Table 3: Skip-thoughts results. The first two evaluation columns indicate Pearson and Spearman correlation, the third is mean squared error and the remaining indicate classification accuracy. Higher is better for all evaluations except MSE. Our models were trained for 1M iterations with the exception of (†), which was trained for 1 month (approximately 1.7M iterations).

| Method | SICK(r) | SICK(ρ) | SICK(MSE) | MR | CR | SUBJ | MPQA |
|---|---|---|---|---|---|---|---|
| Original [Kiros et al., 2015] | 0.848 | 0.778 | 0.287 | 75.5 | 79.3 | 92.1 | 86.9 |
| Ours | 0.842 | 0.767 | 0.298 | 77.3 | 81.8 | 92.6 | 87.9 |
| Ours + LN | 0.854 | 0.785 | 0.277 | 79.5 | 82.6 | 93.4 | 89.0 |
| Ours + LN † | 0.858 | 0.788 | 0.270 | 79.4 | 83.1 | 93.7 | 89.3 |
encoded with an encoder RNN and decoder RNNs are used to predict the surrounding sentences. Kiros et al. [2015] showed that this model could produce generic sentence representations that perform well on several tasks without being fine-tuned. However, training this model is time-consuming, requiring several days of training in order to produce meaningful results.
In this experiment we determine to what effect layer normalization can speed up training. Using the publicly available code of Kiros et al. [2015] 4, we train two models on the BookCorpus dataset [Zhu et al., 2015]: one with and one without layer normalization. These experiments are performed with Theano [Team et al., 2016]. We adhere to the experimental setup used in Kiros et al. [2015], training a 2400-dimensional sentence encoder with the same hyperparameters. Given the size of the states used, it is conceivable layer normalization would produce slower per-iteration updates than without. However, we found that provided CNMeM 5 is used, there was no significant difference between the two models. We checkpoint both models after every 50,000 iterations and evaluate their performance on five tasks: semantic-relatedness (SICK) [Marelli et al., 2014], movie review sentiment (MR) [Pang and Lee, 2005], customer product reviews (CR) [Hu and Liu, 2004], subjectivity/objectivity classification (SUBJ) [Pang and Lee, 2004] and opinion polarity (MPQA) [Wiebe et al., 2005]. We plot the performance of both models for each checkpoint on all tasks to determine whether the performance rate can be improved with LN.
The experimental results are illustrated in Figure 3. We observe that applying layer normalization results both in a speedup over the baseline and in better final results after 1M iterations, as shown in Table 3. We also let the model with layer normalization train for a total of a month, resulting in further performance gains across all but one task. We note that the performance
4https://github.com/ryankiros/skip-thoughts 5https://github.com/NVIDIA/cnmem
[Figure: negative log likelihood (train and test) vs. number of updates (×200) for the baseline and layer-normalized models.]
Figure 5: Handwriting sequence generation model negative log likelihood with and without layer normalization. The models are trained with a mini-batch size of 8 and a sequence length of 500.
differences between the original reported results and ours are likely due to the fact that the publicly available code does not condition at each timestep of the decoder, where the original model does.
# 6.4 Modeling binarized MNIST using DRAW
[Figure: test variational bound over the first 100 epochs for the baseline and layer-normalized DRAW models.]
We also experimented with generative modeling on the MNIST dataset. The Deep Recurrent Attention Writer (DRAW) [Gregor et al., 2015] has previously achieved state-of-the-art performance on modeling the distribution of MNIST digits. The model uses a differential attention mechanism and a recurrent neural network to sequentially generate pieces of an image. We evaluate the effect of layer normalization on a DRAW model using 64 glimpses and 256 LSTM hidden units. The model is trained with the default setting of the Adam [Kingma and Ba, 2014] optimizer and a minibatch size of 128. Previous publications on binarized MNIST have used various training protocols to generate their datasets. In this experiment, we used the fixed binarization from Larochelle and Murray [2011]. The dataset has been split into 50,000 training, 10,000 validation and 10,000 test images.
Figure 4: DRAW model test negative log likelihood with and without layer normalization.
Figure 4 shows the test variational bound for the first 100 epochs. It highlights the speedup benefit of applying layer normalization: the layer normalized DRAW converges almost twice as fast as the baseline model. After 200 epochs, the baseline model converges to a variational log likelihood of 82.36 nats on the test data and the layer normalization model obtains 82.09 nats.
# 6.5 Handwriting sequence generation
The previous experiments mostly examine RNNs on NLP tasks whose lengths are in the range of 10 to 40. To show the effectiveness of layer normalization on longer sequences, we performed hand- writing generation tasks using the IAM Online Handwriting Database [Liwicki and Bunke, 2005]. IAM-OnDB consists of handwritten lines collected from 221 different writers. When given the input character string, the goal is to predict a sequence of x and y pen co-ordinates of the corresponding handwriting line on the whiteboard. There are, in total, 12179 handwriting line sequences. The input string is typically more than 25 characters and the average handwriting line has a length around 700.
We used the same model architecture as in Section (5.2) of Graves [2013]. The model architecture consists of three hidden layers of 400 LSTM cells, which produce 20 bivariate Gaussian mixture components at the output layer, and a size 3 input layer. The character sequence was encoded with one-hot vectors, and hence the window vectors were size 57. A mixture of 10 Gaussian functions was used for the window parameters, requiring a size 30 parameter vector. The total number of weights was increased to approximately 3.7M. The model is trained using mini-batches of size 8 and the Adam [Kingma and Ba, 2014] optimizer.
The combination of small mini-batch size and very long sequences makes it important to have very stable hidden dynamics. Figure 5 shows that layer normalization converges to a comparable log likelihood as the baseline model but is much faster.
[Figure: train negative log likelihood and test error vs. epoch for LayerNorm, BatchNorm and the baseline, at batch sizes 128 and 4.]
Figure 6: Permutation invariant MNIST 784-1000-1000-10 model negative log likelihood and test error with layer normalization and batch normalization. (Left) The models are trained with batch- size of 128. (Right) The models are trained with batch-size of 4.
# 6.6 Permutation invariant MNIST
In addition to RNNs, we investigated layer normalization in feed-forward networks. We show how layer normalization compares with batch normalization on the well-studied permutation invariant MNIST classification problem. From the previous analysis, layer normalization is invariant to input re-scaling, which is desirable for the internal hidden layers. But this is unnecessary for the logit outputs, where the prediction confidence is determined by the scale of the logits. We only apply layer normalization to the fully-connected hidden layers, excluding the last softmax layer.
All the models were trained using 55000 training data points and the Adam [Kingma and Ba, 2014] optimizer. For the smaller batch-size, the variance term for batch normalization is computed using the unbiased estimator. The experimental results in Figure 6 highlight that layer normalization is robust to the batch-size and exhibits faster training convergence compared to batch normalization applied to all layers.
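A sketch of the resulting forward pass, with layer normalization on the hidden layers only (sizes, names and the ReLU non-linearity are illustrative):

```python
import numpy as np

def ln(a, g, eps=1e-5):
    return g * (a - a.mean()) / (a.std() + eps)

def mlp_forward(x, W1, b1, g1, W2, b2, g2, W3, b3):
    # 784-1000-1000-10 classifier: normalize the summed inputs of the two
    # hidden layers, but leave the softmax logits un-normalized.
    h1 = np.maximum(0.0, ln(W1 @ x, g1) + b1)
    h2 = np.maximum(0.0, ln(W2 @ h1, g2) + b2)
    return W3 @ h2 + b3  # logits
```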
# 6.7 Convolutional Networks
We have also experimented with convolutional neural networks. In our preliminary experiments, we observed that layer normalization offers a speedup over the baseline model without normalization, but batch normalization outperforms the other methods. With fully connected layers, all the hidden units in a layer tend to make similar contributions to the final prediction, and re-centering and re-scaling the summed inputs to a layer works well. However, the assumption of similar contributions is no longer true for convolutional neural networks. The large number of hidden units whose receptive fields lie near the boundary of the image are rarely turned on and thus have very different statistics from the rest of the hidden units within the same layer. We think further research is needed to make layer normalization work well in ConvNets.
# 7 Conclusion
In this paper, we introduced layer normalization to speed-up the training of neural networks. We provided a theoretical analysis that compared the invariance properties of layer normalization with batch normalization and weight normalization. We showed that layer normalization is invariant to per training-case feature shifting and scaling.
Empirically, we showed that recurrent neural networks beneï¬t the most from the proposed method especially for long sequences and small mini-batches.
# Acknowledgments
This research was funded by grants from NSERC, CFI, and Google.
# References
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE, 2012.
Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In NIPS, 2012.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.

César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, and Yoshua Bengio. Batch normalized recurrent neural networks. arXiv preprint arXiv:1510.01378, 2015.

Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015.

Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.

Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-sgd: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2413â2421, 2015.
Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural computation, 1998.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. ICLR, 2016.
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Fr´ed´eric Bastien, Justin Bayer, Anatoly Belikov, et al. Theano: A python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. ECCV, 2014.

Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. EMNLP, 2014.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.
Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
D. Kingma and J. L. Ba. Adam: a method for stochastic optimization. ICLR, 2014. arXiv:1412.6980.
Liwei Wang, Yin Li, and Svetlana Lazebnik. Learning deep structure-preserving image-text embeddings. CVPR, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In NIPS, 2015.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV, 2015.
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014.
Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, pages 115â124, 2005.
Minqing Hu and Bing Liu. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 2004.
Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL, 2004.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 2005.
K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: a recurrent neural network for image generation. arXiv:1502.04623, 2015.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 6, page 622, 2011.
Marcus Liwicki and Horst Bunke. IAM-OnDB – an on-line English sentence database acquired from handwritten text on a whiteboard. In ICDAR, 2005.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
# Supplementary Material
# Application of layer normalization to each experiment
This section describes how layer normalization is applied to each of the papers' experiments. For notational convenience, we define layer normalization as a function mapping LN : R^D → R^D with two sets of adaptive parameters, gains α and biases β:
$$\mathrm{LN}(\mathbf{z};\alpha,\beta) = \frac{(\mathbf{z}-\mu)}{\sigma}\odot\alpha + \beta, \qquad (15)$$
$$\mu = \frac{1}{D}\sum_{i=1}^{D} z_i, \qquad \sigma = \sqrt{\frac{1}{D}\sum_{i=1}^{D}\left(z_i-\mu\right)^2}, \qquad (16)$$
where z_i is the ith element of the vector z.
# Teaching machines to read and comprehend and handwriting sequence generation
The basic LSTM equations used for these experiments are given by:
$$\begin{pmatrix}\mathbf{f}_t \\ \mathbf{i}_t \\ \mathbf{o}_t \\ \mathbf{g}_t\end{pmatrix} = W_h \mathbf{h}_{t-1} + W_x \mathbf{x}_t + \mathbf{b} \qquad (17)$$

$$\mathbf{c}_t = \sigma(\mathbf{f}_t)\odot\mathbf{c}_{t-1} + \sigma(\mathbf{i}_t)\odot\tanh(\mathbf{g}_t) \qquad (18)$$

$$\mathbf{h}_t = \sigma(\mathbf{o}_t)\odot\tanh(\mathbf{c}_t) \qquad (19)$$
ft it ot gt
= LN (Whhtâ1; α1, β1) + LN (Wxxt; α2, β2) + b (20)
c, = o(f;) © cy-1 + o(is) © tanh(gy) hy = o(0;) © tanh(LN(c;; a3, 83))
c, = o(f;) © cy-1 + o(is) © tanh(gy) (21)
hy = o(0;) © tanh(LN(c;; a3, 83)) (22)
where αi, βi are the additive and multiplicative parameters, respectively. Each αi is initialized to a vector of zeros and each βi is initialized to a vector of ones.
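A minimal numpy sketch of one step of the layer-normalized LSTM in Eq. (20)-(22); the LN helper follows the form of Eq. (15), and shapes, names and initializations are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ln(z, alpha, beta, eps=1e-5):
    # Form of Eq. (15): scale the normalized input by alpha, then shift by beta.
    return (z - z.mean()) / (z.std() + eps) * alpha + beta

def ln_lstm_step(x_t, h_prev, c_prev, W_h, W_x, b, p1, p2, p3):
    # p1, p2, p3 are the (alpha, beta) pairs of the three LN applications.
    z = ln(W_h @ h_prev, *p1) + ln(W_x @ x_t, *p2) + b  # length 4H
    f, i, o, g = np.split(z, 4)
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h_t = sigmoid(o) * np.tanh(ln(c_t, *p3))
    return h_t, c_t
```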
# Order embeddings and skip-thoughts
These experiments utilize a variant of the gated recurrent unit, which is defined as follows:
(") ry h, = tanh(Wx, + o(r) © (Uby_-1)) h, = (1âo(z:))he-1 + o(z:)hy Why_1 + Wax:
= Whhtâ1 + Wxxt (23)
(24)
(25)
Layer normalization is applied as follows:
$$\begin{pmatrix}\mathbf{z}_t \\ \mathbf{r}_t\end{pmatrix} = \mathrm{LN}(W_h \mathbf{h}_{t-1};\alpha_1,\beta_1) + \mathrm{LN}(W_x \mathbf{x}_t;\alpha_2,\beta_2) \qquad (26)$$

$$\tilde{\mathbf{h}}_t = \tanh(\mathrm{LN}(W\mathbf{x}_t;\alpha_3,\beta_3) + \sigma(\mathbf{r}_t)\odot\mathrm{LN}(U\mathbf{h}_{t-1};\alpha_4,\beta_4)) \qquad (27)$$

$$\mathbf{h}_t = (1-\sigma(\mathbf{z}_t))\,\mathbf{h}_{t-1} + \sigma(\mathbf{z}_t)\,\tilde{\mathbf{h}}_t \qquad (28)$$
Just as before, each αi is initialized to a vector of zeros and each βi is initialized to a vector of ones.
# Modeling binarized MNIST using DRAW
The layer norm is only applied to the output of the LSTM hidden states in this experiment:
The version that incorporates layer normalization is modiï¬ed as follows:
$$\begin{pmatrix}\mathbf{f}_t \\ \mathbf{i}_t \\ \mathbf{o}_t \\ \mathbf{g}_t\end{pmatrix} = W_h \mathbf{h}_{t-1} + W_x \mathbf{x}_t + \mathbf{b} \qquad (29)$$

$$\mathbf{c}_t = \sigma(\mathbf{f}_t)\odot\mathbf{c}_{t-1} + \sigma(\mathbf{i}_t)\odot\tanh(\mathbf{g}_t) \qquad (30)$$

$$\mathbf{h}_t = \sigma(\mathbf{o}_t)\odot\tanh(\mathrm{LN}(\mathbf{c}_t;\alpha,\beta)) \qquad (31)$$

where α, β are the additive and multiplicative parameters, respectively. α is initialized to a vector of zeros and β is initialized to a vector of ones.
# Learning the magnitude of incoming weights
We now compare how gradient descent updates change the magnitude of the equivalent weights between the normalized GLM and the original parameterization. The magnitude of the weights is explicitly parameterized using the gain parameter in the normalized model. Assume there is a gradient update that changes the norm of the weight vectors by δ_g. We can project the gradient updates to the weight vector for the normal GLM. The KL metric, i.e. how much the gradient update changes the model prediction, for the normalized model depends only on the magnitude of the prediction error. Specifically,
under batch normalization:
$$ds^2 = \frac{1}{2}\,\mathrm{vec}([\mathbf{0},\mathbf{0},\delta_g]^\top)^\top\,\bar{F}(\mathrm{vec}([W,\mathbf{b},\mathbf{g}]^\top))\,\mathrm{vec}([\mathbf{0},\mathbf{0},\delta_g]^\top) = \frac{1}{2\phi^2}\,\delta_g^\top \mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\mathrm{Cov}[\mathbf{y}\,|\,\mathbf{x}]\right]\delta_g. \qquad (32)$$
Under layer normalization:
$$ds^2 = \frac{1}{2}\,\mathrm{vec}([\mathbf{0},\mathbf{0},\delta_g]^\top)^\top\,\bar{F}(\mathrm{vec}([W,\mathbf{b},\mathbf{g}]^\top))\,\mathrm{vec}([\mathbf{0},\mathbf{0},\delta_g]^\top) = \frac{1}{2\phi^2}\,\delta_g^\top \mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\begin{bmatrix}\mathrm{Cov}(y_1,y_1\,|\,\mathbf{x})\frac{(a_1-\mu)^2}{\sigma^2} & \cdots & \mathrm{Cov}(y_1,y_H\,|\,\mathbf{x})\frac{(a_1-\mu)(a_H-\mu)}{\sigma^2} \\ \vdots & \ddots & \vdots \\ \mathrm{Cov}(y_H,y_1\,|\,\mathbf{x})\frac{(a_H-\mu)(a_1-\mu)}{\sigma^2} & \cdots & \mathrm{Cov}(y_H,y_H\,|\,\mathbf{x})\frac{(a_H-\mu)^2}{\sigma^2}\end{bmatrix}\right]\delta_g \qquad (33)$$
Under weight normalization:
$$ds^2 = \frac{1}{2}\,\mathrm{vec}([\mathbf{0},\mathbf{0},\delta_g]^\top)^\top\,\bar{F}(\mathrm{vec}([W,\mathbf{b},\mathbf{g}]^\top))\,\mathrm{vec}([\mathbf{0},\mathbf{0},\delta_g]^\top) = \frac{1}{2\phi^2}\,\delta_g^\top \mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\begin{bmatrix}\mathrm{Cov}(y_1,y_1\,|\,\mathbf{x})\frac{a_1^2}{\|w_1\|_2^2} & \cdots & \mathrm{Cov}(y_1,y_H\,|\,\mathbf{x})\frac{a_1 a_H}{\|w_1\|_2\|w_H\|_2} \\ \vdots & \ddots & \vdots \\ \mathrm{Cov}(y_H,y_1\,|\,\mathbf{x})\frac{a_H a_1}{\|w_H\|_2\|w_1\|_2} & \cdots & \mathrm{Cov}(y_H,y_H\,|\,\mathbf{x})\frac{a_H^2}{\|w_H\|_2^2}\end{bmatrix}\right]\delta_g \qquad (34)$$
Whereas, the KL metric in the standard GLM is related to its activities a_i = w_i⊤x, which depend on both the current weights and the input data. We project the gradient update to the gain parameter δ_{g_i} of the ith neuron onto its weight vector as δ_{g_i} w_i / ‖w_i‖₂ in the standard GLM model:

$$\frac{1}{2}\,\mathrm{vec}\!\left(\left[\delta_{g_i}\tfrac{w_i^\top}{\|w_i\|_2},\,0,\,\delta_{g_j}\tfrac{w_j^\top}{\|w_j\|_2},\,0\right]^\top\right)^\top F\!\left([w_i^\top, b_i, w_j^\top, b_j]^\top\right)\,\mathrm{vec}\!\left(\left[\delta_{g_i}\tfrac{w_i^\top}{\|w_i\|_2},\,0,\,\delta_{g_j}\tfrac{w_j^\top}{\|w_j\|_2},\,0\right]^\top\right) = \frac{\delta_{g_i}\delta_{g_j}}{2\phi^2}\mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\mathrm{Cov}(y_i,y_j\,|\,\mathbf{x})\,\frac{a_i a_j}{\|w_i\|_2\|w_j\|_2}\right] \qquad (35)$$
The batch normalized and layer normalized models are therefore more robust to the scaling of the input and its parameters than the standard model.
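The robustness claim can be checked numerically: in a layer-normalized GLM, rescaling the input leaves the output unchanged. A toy sketch (the weights and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 4
W, b, g = rng.standard_normal((H, D)), rng.standard_normal(H), np.ones(H)

def ln_glm(x):
    a = W @ x                              # pre-activations a_i = w_i^T x
    return g * (a - a.mean()) / a.std() + b

x = rng.standard_normal(D)
print(np.allclose(ln_glm(x), ln_glm(10.0 * x)))  # True: input scaling has no effect
```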
# Bag of Tricks for Efï¬cient Text Classiï¬cation
# Armand Joulin Edouard Grave Piotr Bojanowski Tomas Mikolov
Facebook AI Research {ajoulin,egrave,bojanowski,tmikolov}@fb.com
# Abstract
This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.
# 1 Introduction
Text classification is an important task in Natural Language Processing with many applications, such as web search, information retrieval, ranking and document classification (Deerwester et al., 1990; Pang and Lee, 2008). Recently, models based on neural networks have become increasingly popular (Kim, 2014; Zhang and LeCun, 2015; Conneau et al., 2016). While these models achieve very good performance in practice, they tend to be relatively slow both at train and test time, limiting their use on very large datasets.
Meanwhile, linear classifiers are often considered as strong baselines for text classification problems (Joachims, 1998; McCallum and Nigam, 1998; Fan et al., 2008). Despite their simplicity, they often obtain state-of-the-art performances if the right features are used (Wang and Manning, 2012). They also have the potential to scale to very large corpus (Agarwal et al., 2014).

In this work, we explore ways to scale these baselines to very large corpus with a large output space, in the context of text classification. Inspired by the recent work in efficient word representation learning (Mikolov et al., 2013; Levy et al., 2015), we show that linear models with a rank constraint and a fast loss approximation can train on a billion words within ten minutes, while achieving performance on par with the state-of-the-art. We evaluate the quality of our approach fastText1 on two different tasks, namely tag prediction and sentiment analysis.
# 2 Model architecture

A simple and efficient baseline for sentence classification is to represent sentences as bag of words (BoW) and train a linear classifier, e.g., a logistic regression or an SVM (Joachims, 1998; Fan et al., 2008). However, linear classifiers do not share parameters among features and classes. This possibly limits their generalization in the context of large output space where some classes have very few examples. Common solutions to this problem are to factorize the linear classifier into low rank matrices (Schutze, 1992; Mikolov et al., 2013) or to use multilayer neural networks (Collobert and Weston, 2008; Zhang et al., 2015).
Figure 1 shows a simple linear model with rank constraint. The first weight matrix A is a look-up table over the words. The word representations are then averaged into a text representation, which is in turn fed to a linear classifier.
# 1https://github.com/facebookresearch/fastText
Figure 1: Model architecture of fastText for a sentence with N ngram features x1, . . . , xN . The features are embedded and averaged to form the hidden variable.
The text representation is a hidden variable which can potentially be reused. This architecture is similar to the cbow model of Mikolov et al. (2013), where the middle word is replaced by a label. We use the softmax function f to compute the probability distribution over the predefined classes. For a set of N documents, this leads to minimizing the negative log-likelihood over the classes:
−(1/N) Σ_{n=1}^{N} y_n log(f(B A x_n)),
where x_n is the normalized bag of features of the n-th document, y_n the label, and A and B the weight matrices. This model is trained asynchronously on multiple CPUs using stochastic gradient descent and a linearly decaying learning rate.
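A minimal sketch of this architecture (embedding look-up, averaging, then a softmax linear classifier); the class and variable names are our own, and training is omitted:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class FastTextSketch:
    """Rank-constrained linear model: y = softmax(B A x) with averaged embeddings."""
    def __init__(self, vocab_size, dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(0.0, 0.1, (vocab_size, dim))  # look-up table over words
        self.B = rng.normal(0.0, 0.1, (n_classes, dim))   # linear classifier

    def forward(self, feature_ids):
        hidden = self.A[feature_ids].mean(axis=0)         # averaged text representation
        return softmax(self.B @ hidden)

model = FastTextSketch(vocab_size=1000, dim=10, n_classes=4)
print(model.forward([3, 17, 256]).sum())  # probabilities sum to 1.0
```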
# 2.1 Hierarchical softmax
When the number of classes is large, computing the linear classifier is computationally expensive. More precisely, the computational complexity is O(kh) where k is the number of classes and h the dimension of the text representation. In order to improve our running time, we use a hierarchical softmax (Goodman, 2001) based on the Huffman coding tree (Mikolov et al., 2013). During training, the computational complexity drops to O(h log2(k)).
The hierarchical softmax is also advantageous at test time when searching for the most likely class. Each node is associated with a probability that is the probability of the path from the root to that node. If the node is at depth l + 1 with parents n1, . . . , nl, its probability is
P(n_{l+1}) = Π_{i=1}^{l} P(n_i).
This means that the probability of a node is always lower than the one of its parent. Exploring the tree with a depth first search and tracking the maximum probability among the leaves allows us to discard any branch associated with a small probability. In practice, we observe a reduction of the complexity to O(h log2(k)) at test time. This approach is further extended to compute the T-top targets at the cost of O(log(T)), using a binary heap.
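The pruned depth-first search can be sketched as follows; the tree encoding is a toy assumption, not the paper's actual data structure:

```python
def best_leaf(node, prob=1.0, best=(0.0, None)):
    """DFS over a probability tree, discarding branches whose path probability
    already falls below the best leaf probability found so far."""
    if not isinstance(node, tuple):          # leaf: node is a class label
        return (prob, node) if prob > best[0] else best
    child_probs, children = node
    for p, child in zip(child_probs, children):
        if prob * p > best[0]:               # prune low-probability branches
            best = best_leaf(child, prob * p, best)
    return best

# Toy 4-class tree: internal nodes are (child_probs, children).
tree = ([0.6, 0.4], [([0.7, 0.3], ["a", "b"]), ([0.5, 0.5], ["c", "d"])])
print(best_leaf(tree))  # (0.42, 'a'); the right subtree (mass 0.4) is never explored
```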
# 2.2 N-gram features
Bag of words is invariant to word order but taking explicitly this order into account is often computationally very expensive. Instead, we use a bag of n-grams as additional features to capture some partial information about the local word order. This is very efficient in practice while achieving comparable results to methods that explicitly use the order (Wang and Manning, 2012).
We maintain a fast and memory efficient mapping of the n-grams by using the hashing trick (Weinberger et al., 2009) with the same hashing function as in Mikolov et al. (2011) and 10M bins if we only used bigrams, and 100M otherwise.
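A sketch of the hashing trick for bigrams; crc32 stands in for the paper's hashing function, and the bucket count is illustrative:

```python
import zlib

def bigram_features(tokens, n_buckets=10_000_000):
    """Map each word bigram to a hashed feature id in [0, n_buckets)."""
    return [zlib.crc32(" ".join(pair).encode()) % n_buckets
            for pair in zip(tokens, tokens[1:])]

print(bigram_features("the cat sat on the mat".split()))
```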
# 3 Experiments
We evaluate fastText on two different tasks. First, we compare it to existing text classifiers on the problem of sentiment analysis. Then, we evaluate its capacity to scale to large output space on a tag prediction dataset. Note that our model could be implemented with the Vowpal Wabbit library,2 but we observe in practice that our tailored implementation is at least 2-5× faster.
# 3.1 Sentiment analysis
Datasets and baselines. We employ the same 8 datasets and evaluation protocol of Zhang et al. (2015). We report the n-grams and TFIDF baselines from Zhang et al. (2015), as well as the character level convolutional model (char-CNN) of Zhang and LeCun (2015), the character based convolution recurrent network (char-CRNN) of (Xiao and Cho, 2016) and the very deep convolutional network (VDCNN) of Conneau et al. (2016). We also compare
2Using the options --nn, --ngrams and --log_multi
| Model | AG | Sogou | DBP | Yelp P. | Yelp F. | Yah. A. | Amz. F. | Amz. P. |
|---|---|---|---|---|---|---|---|---|
| BoW (Zhang et al., 2015) | 88.8 | 92.9 | 96.6 | 92.2 | 58.0 | 68.9 | 54.6 | 90.4 |
| ngrams (Zhang et al., 2015) | 92.0 | 97.1 | 98.6 | 95.6 | 56.3 | 68.5 | 54.3 | 92.0 |
| ngrams TFIDF (Zhang et al., 2015) | 92.4 | 97.2 | 98.7 | 95.4 | 54.8 | 68.5 | 52.4 | 91.5 |
| char-CNN (Zhang and LeCun, 2015) | 87.2 | 95.1 | 98.3 | 94.7 | 62.0 | 71.2 | 59.5 | 94.5 |
| char-CRNN (Xiao and Cho, 2016) | 91.4 | 95.2 | 98.6 | 94.5 | 61.8 | 71.7 | 59.2 | 94.1 |
| VDCNN (Conneau et al., 2016) | 91.3 | 96.8 | 98.7 | 95.7 | 64.7 | 73.4 | 63.0 | 95.7 |
| fastText, h = 10 | 91.5 | 93.9 | 98.1 | 93.8 | 60.4 | 72.0 | 55.8 | 91.2 |
| fastText, h = 10, bigram | 92.5 | 96.8 | 98.6 | 95.7 | 63.9 | 72.3 | 60.2 | 94.6 |
Table 1: Test accuracy [%] on sentiment datasets. FastText has been run with the same parameters for all the datasets. It has 10 hidden units and we evaluate it with and without bigrams. For char-CNN, we show the best reported numbers without data augmentation.
| Dataset | small char-CNN (Zhang and LeCun, 2015) | big char-CNN (Zhang and LeCun, 2015) | VDCNN depth=9 | VDCNN depth=17 | VDCNN depth=29 | fastText |
|---|---|---|---|---|---|---|
| AG | 1h | 3h | 24m | 37m | 51m | 1s |
| Sogou | - | - | 25m | 41m | 56m | 7s |
| DBpedia | 2h | 5h | 27m | 44m | 1h | 2s |
| Yelp P. | - | - | 28m | 43m | 1h09 | 3s |
| Yelp F. | - | - | 29m | 45m | 1h12 | 4s |
| Yah. A. | 8h | 1d | 1h | 1h33 | 2h | 5s |
| Amz. F. | 2d | 5d | 2h45 | 4h20 | 7h | 9s |
| Amz. P. | 2d | 5d | 2h45 | 4h25 | 7h | 10s |
Table 2: Training time for a single epoch on sentiment analysis datasets compared to char-CNN and VDCNN.
to Tang et al. (2015), following their evaluation protocol. We report their main baselines as well as their two approaches based on recurrent networks (Conv-GRNN and LSTM-GRNN).
Results. We present the results in Table 1. We use 10 hidden units and run fastText for 5 epochs with a learning rate selected on a validation set from {0.05, 0.1, 0.25, 0.5}. On this task, adding bigram information improves the performance by 1-4%. Overall our accuracy is slightly better than char-CNN and char-CRNN, and a bit worse than VDCNN. Note that we can increase the accuracy slightly by using more n-grams; for example with trigrams, the performance on Sogou goes up to 97.1%. Finally, Table 3 shows that our method is competitive with the methods presented in Tang et al. (2015). We tune the hyper-parameters on the validation set and observe that using n-grams up to 5 leads to the best performance. Unlike Tang et al. (2015), fastText does not use pre-trained word embeddings, which can explain the 1% difference in accuracy.
| Model | Yelp'13 | Yelp'14 | Yelp'15 | IMDB |
|---|---|---|---|---|
| SVM+TF | 59.8 | 61.8 | 62.4 | 40.5 |
| CNN | 59.7 | 61.0 | 61.5 | 37.5 |
| Conv-GRNN | 63.7 | 65.5 | 66.0 | 42.5 |
| LSTM-GRNN | 65.1 | 67.1 | 67.6 | 45.3 |
| fastText | 64.2 | 66.2 | 66.6 | 45.2 |
Table 3: Comparison with Tang et al. (2015). The hyper-parameters are chosen on the validation set. We report the test accuracy.
Training time. Both char-CNN and VDCNN are trained on a NVIDIA Tesla K40 GPU, while our models are trained on a CPU using 20 threads. Table 2 shows that methods using convolutions are several orders of magnitude slower than fastText. While it is possible to have a 10× speed up for char-CNN by using more recent CUDA implementations of convolutions, fastText takes less than a minute to train on these datasets. The GRNNs method of Tang et al. (2015) takes around 12 hours per epoch on CPU with a single thread. Our speed-
| Input | Prediction | Tags |
|---|---|---|
| taiyoucon 2011 digitals: individuals digital photos from the anime convention taiyoucon 2011 in mesa, arizona. if you know the model and/or the character, please comment. | #cosplay | #24mm #anime #animeconvention #arizona #canon #con #convention #cos #cosplay #costume #mesa #play #taiyou #taiyoucon |
| 2012 twin cities pride 2012 twin cities pride parade | #minneapolis | #2012twincitiesprideparade #minneapolis #mn #usa |
| beagle enjoys the snowfall | #snow | #2007 #beagle #hillsboro #january #maddison #maddy #oregon #snow |
| christmas | #christmas | #cameraphone #mobile |
| euclid avenue | #newyorkcity | #cleveland #euclidavenue |
Table 4: Examples from the validation set of YFCC100M dataset obtained with fastText with 200 hidden units and bigrams. We show a few correct and incorrect tag predictions.
up compared to neural network based methods increases with the size of the dataset, going up to at least a 15,000× speed-up.
# 3.2 Tag prediction
Dataset and baselines. To test the scalability of our approach, further evaluation is carried on the YFCC100M dataset (Thomee et al., 2016), which consists of almost 100M images with captions, titles and tags. We focus on predicting the tags according to the title and caption (we do not use the images). We remove the words and tags occurring less than 100 times and split the data into a train, validation and test set. The train set contains 91,188,648 examples (1.5B tokens). The validation has 930,497 examples and the test set 543,424. The vocabulary size is 297,141 and there are 312,116 unique tags. We will release a script that recreates this dataset so that our numbers can be reproduced. We report precision at 1.
We consider a frequency-based baseline which predicts the most frequent tag. We also compare with Tagspace (Weston et al., 2014), which is a tag prediction model similar to ours, but based on the Wsabie model of Weston et al. (2011). While the Tagspace model is described using convolutions, we consider the linear version, which achieves comparable performance but is much faster.
| Model | prec@1 | Train time | Test time |
|---|---|---|---|
| Freq. baseline | 2.2 | - | - |
| Tagspace, h = 50 | 30.1 | 3h8 | 6h |
| Tagspace, h = 200 | 35.6 | 5h32 | 15h |
| fastText, h = 50 | 31.2 | 6m40 | 48s |
| fastText, h = 50, bigram | 36.7 | 7m47 | 50s |
| fastText, h = 200 | 41.1 | 10m34 | 1m29 |
| fastText, h = 200, bigram | 46.1 | 13m38 | 1m37 |
Table 5: Prec@1 on the test set for tag prediction on YFCC100M. We also report the training time and test time. Test time is reported for a single thread, while training uses 20 threads for both models.
Results and training time. Table 5 presents a comparison of fastText and the baselines. We run fastText for 5 epochs and compare it to Tagspace for two sizes of the hidden layer, i.e., 50 and 200. Both models achieve a similar performance with a small hidden layer, but adding bigrams gives us a significant boost in accuracy. At test time, Tagspace needs to compute the scores for all the classes which makes it relatively slow, while our fast inference gives a significant speed-up when the number of classes is large (more than 300K here). Overall, we are more than an order of magnitude faster to obtain a model with a better quality. The speedup of the test phase is even more significant (a 600× speedup). Table 4 shows some qualitative examples.
# 4 Discussion and conclusion
In this work, we propose a simple baseline method for text classiï¬cation. Unlike unsupervisedly trained word vectors from word2vec, our word features can
be averaged together to form good sentence representations. In several tasks, fastText obtains performance on par with recently proposed methods inspired by deep learning, while being much faster. Although deep neural networks have in theory much higher representational power than shallow models, it is not clear if simple text classification problems such as sentiment analysis are the right ones to evaluate them. We will publish our code so that the research community can easily build on top of our work.
Acknowledgement. We thank Gabriel Synnaeve, Hervé Jégou, Jason Weston and Léon Bottou for their help and comments. We also thank Alexis Conneau, Duyu Tang and Zichao Zhang for providing us with information about their methods.
# References
[Agarwal et al.2014] Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, and John Langford. 2014. A reliable effective terascale linear learning system. JMLR.

[Collobert and Weston2008] Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML.
[Conneau et al.2016] Alexis Conneau, Holger Schwenk, Loïc Barrault, and Yann Lecun. 2016. Very deep convolutional networks for natural language processing. arXiv preprint arXiv:1606.01781.
[Deerwester et al.1990] Scott Deerwester, Susan T Du- mais, George W Furnas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for informa- tion science.
[Fan et al.2008] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. JMLR.

[Goodman2001] Joshua Goodman. 2001. Classes for fast
maximum entropy training. In ICASSP.
[Joachims1998] Thorsten Joachims. 1998. Text catego- rization with support vector machines: Learning with many relevant features. Springer.
[Kim2014] Yoon Kim. 2014. Convolutional neural net- works for sentence classiï¬cation. In EMNLP.
[Levy et al.2015] Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. TACL.
[McCallum and Nigam1998] Andrew McCallum and Ka- mal Nigam. 1998. A comparison of event models for
naive bayes text classiï¬cation. In AAAI workshop on learning for text categorization.
[Mikolov et al.2011] Tomáš Mikolov, Anoop Deoras, Daniel Povey, Lukáš Burget, and Jan Černocký. 2011. Strategies for training large scale neural network language models. In Workshop on Automatic Speech Recognition and Understanding. IEEE.
[Mikolov et al.2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efï¬cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
[Pang and Lee2008] Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval.
[Schutze1992] Hinrich Schutze. 1992. Dimensions of meaning. In Supercomputing.
[Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In EMNLP.

[Thomee et al.2016] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. 2016. Yfcc100m: The new data in multimedia research. volume 59, pages 64-73. ACM.
[Wang and Manning2012] Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classiï¬cation. In ACL.
[Weinberger et al.2009] Kilian Weinberger, Anirban Das- gupta, John Langford, Alex Smola, and Josh Atten- berg. 2009. Feature hashing for large scale multitask learning. In ICML.
[Weston et al.2011] Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In IJCAI.
[Weston et al.2014] Jason Weston, Sumit Chopra, and Keith Adams. 2014. #tagspace: Semantic embed- dings from hashtags. In EMNLP.
[Xiao and Cho2016] Yijun Xiao and Kyunghyun Cho. 2016. Efï¬cient character-level document classiï¬cation by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367.
[Zhang and LeCun2015] Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. arXiv preprint arXiv:1502.01710.
[Zhang et al.2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS.
"id": "1606.01781"
} |
1607.00036 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | We extend neural Turing machine (NTM) model into a dynamic neural Turing
machine (D-NTM) by introducing a trainable memory addressing scheme. This
addressing scheme maintains for each memory cell two separate vectors, content
and address vectors. This allows the D-NTM to learn a wide variety of
location-based addressing strategies including both linear and nonlinear ones.
We implement the D-NTM with both continuous, differentiable and discrete,
non-differentiable read/write mechanisms. We investigate the mechanisms and
effects of learning to read and write into a memory through experiments on
Facebook bAbI tasks using both a feedforward and GRUcontroller. The D-NTM is
evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM
baselines. We have done extensive analysis of our model and different
variations of NTM on bAbI task. We also provide further experimental results on
sequential pMNIST, Stanford Natural Language Inference, associative recall and
copy tasks. | http://arxiv.org/pdf/1607.00036 | Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio | cs.LG, cs.NE | 13 pages, 3 figures | null | cs.LG | 20160630 | 20170317 | 7 1 0 2
r a M 7 1 ] G L . s c [
2 v 6 3 0 0 0 . 7 0 6 1 : v i X r a
# Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes

Caglar Gulcehre1, Sarath Chandar1, Kyunghyun Cho2, Yoshua Bengio1
1University of Montreal, name.lastname@umontreal.ca
2New York University, name.lastname@nyu.edu
Keywords: neural networks, memory, neural Turing machines, natural language processing
# Abstract
We extend neural Turing machine (NTM) model into a dynamic neural Turing ma- chine (D-NTM) by introducing a trainable memory addressing scheme. This address- ing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU- controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done extensive analysis of our model and different variations of NTM on bAbI task. We also provide further experimental results on sequential pMNIST, Stanford Natural Language Inference, associative recall and copy tasks.
# 1 Introduction
Designing general-purpose learning algorithms is one of the long-standing goals of artificial intelligence. Despite the success of deep learning in this area (see, e.g., (Goodfellow et al., 2016)), there is still a set of complex tasks that are not well addressed by conventional neural network based models. Those tasks often require a neural network to be equipped with an explicit, external memory in which a larger, potentially unbounded, set of facts need to be stored. They include, but are not limited to, episodic question-answering (Weston et al., 2015b; Hermann et al., 2015; Hill et al., 2015), compact algorithms (Zaremba et al., 2015), dialogue (Serban et al., 2016; Vinyals and Le, 2015) and video caption generation (Yao et al., 2015).
Recently two promising approaches that are based on neural networks for this type of tasks have been proposed. Memory networks (Weston et al., 2015b) explicitly store all the facts, or information, available for each episode in an external memory (as con- tinuous vectors) and use the attention-based mechanism to index them when returning an output. On the other hand, neural Turing machines (NTM, (Graves et al., 2014)) read each fact in an episode and decides whether to read, write the fact or do both to the external, differentiable memory.
A crucial difference between these two models is that the memory network does not have a mechanism to modify the content of the external memory, while the NTM does. In practice, this leads to easier learning in the memory network, which in turn resulted in it being used more in realistic tasks (Bordes et al., 2015; Dodge et al., 2015). On the contrary, the NTM has mainly been tested on a series of small-scale, carefully-crafted tasks such as copy and associative recall. However, the NTM is more expressive, precisely because it can store and modify the internal state of the network as it processes an episode, and we were able to use it without any modifications on the model for different tasks.
The original NTM supports two modes of addressing (which can be used simulta- neously.) They are content-based and location-based addressing. We notice that the location-based strategy is based on linear addressing. The distance between each pair of consecutive memory cells is ï¬xed to a constant. We address this limitation, in this paper, by introducing a learnable address vector for each memory cell of the NTM with least recently used memory addressing mechanism, and we call this variant a dynamic neural Turing machine (D-NTM).
We evaluate the proposed D-NTM on the full set of Facebook bAbI task (We- ston et al., 2015b) using either continuous, differentiable attention or discrete, non- differentiable attention (Zaremba and Sutskever, 2015) as an addressing strategy. Our experiments reveal that it is possible to use the discrete, non-differentiable attention mechanism, and in fact, the D-NTM with the discrete attention and GRU controller outperforms the one with the continuous attention. We also provide results on sequen- tial pMNIST, Stanford Natural Language Inference (SNLI) task and algorithmic tasks proposed by (Graves et al., 2014) in order to investigate the ability of our model when dealing with long-term dependencies.
We summarize our contributions in this paper as below,
⢠We propose a variation of neural Turing machine called a dynamic neural Turing machine (D-NTM) which employs a learnable and location-based addressing.
⢠We demonstrate the application of neural Turing machines on more natural and less toyish tasks, episodic question-answering, natural language entailment, digit classiï¬cation from the pixes besides the toy tasks. We provide a detailed analysis of our model on the bAbI task.
⢠We propose to use the discrete attention mechanism and empirically show that, it can outperform the continuous attention based addressing for episodic QA task.
⢠We propose a curriculum strategy for our model with the feedforward controller and discrete attention that improves our results signiï¬cantly.
In this paper, we avoid doing architecture engineering for each task we work on and focus on the pure model's overall performance on each task without task-specific modifications on the model. In that respect, we mainly compare our model against similar models such as NTM and LSTM without task-specific modifications. This helps us to better understand the model's failures.
The remainder of this article is organized as follows. In Section 2, we describe the architecture of Dynamic Neural Turing Machine (D-NTM). In Section 3, we describe the proposed addressing mechanism for D-NTM. Section 4 explains the training pro- cedure. In Section 5, we brieï¬y discuss some related models. In Section 6, we report results on episodic question answering task. In Section 7, 8, and 9 we discuss the re- sults in sequential MNIST, SNLI, and algorithmic toy tasks respectively. Section 10 concludes the article.
# 2 Dynamic Neural Turing Machine
The proposed dynamic neural Turing machine (D-NTM) extends the neural Turing ma- chine (NTM, (Graves et al., 2014)) which has a modular design. The D-NTM consists of two main modules: a controller, and a memory. The controller, which is often imple- mented as a recurrent neural network, issues a command to the memory so as to read, write to and erase a subset of memory cells.
# 2.1 Memory
D-NTM consists of an external memory Mt, where each memory cell i in Mt[i] is partitioned into two parts: a trainable address vector At[i] â R1Ãda and a content vector Ct[i] â R1Ãdc.
Mt[i] = [At[i]; Ct[i]] .
Memory Mt consists of N such memory cells and hence represented by a rectangular matrix Mt â RN Ã(dc+da):
Mt = [At; Ct].

The first part At ∈ R^{N×da} is a learnable address matrix, and the second part Ct ∈ R^{N×dc} a content matrix. The address part At is considered a model parameter that is updated during training. During inference, the address part is not overwritten by the controller and remains constant. On the other hand, the content part Ct is both read and written by the controller, both during training and inference. At the beginning of each episode, the content part of the memory is refreshed to be an all-zero matrix, C0 = 0. This introduction of the learnable address portion for each memory cell allows the model to learn sophisticated location-based addressing strategies.
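A sketch of this memory layout, using the cell sizes reported later for the bAbI experiments (N = 120, d_a = 16, d_c = 28); the initialization details are our assumptions:

```python
import numpy as np

N, d_a, d_c = 120, 16, 28

A = 0.1 * np.random.randn(N, d_a)   # learnable address part, fixed at inference
C = np.zeros((N, d_c))              # content part, refreshed to zero each episode
M = np.concatenate([A, C], axis=1)  # memory matrix M_t = [A_t; C_t]
print(M.shape)                      # (120, 44)
```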
# 2.2 Controller
At each timestep t, the controller (1) receives an input value xt, (2) addresses and reads the memory and creates the content vector rt, (3) erases/writes a portion of the memory, (4) updates its own hidden state ht, and (5) outputs a value yt (if needed.) In this
paper, we use both a gated recurrent unit (GRU, (Cho et al., 2014)) and a feedforward- controller to implement the controller such that for a GRU controller
ht = GRU(xt, htâ1, rt) (1)
and for a feedforward-controller
ht = Ï(xt, rt). (2)
# 2.3 Model Operation
At each timestep t, the controller receives an input value xt. Then it generates the read weights wr t , the content vector read from the memory rt â R(da+dc)Ã1 is computed as
r_t = (M_t)ᵀ w_t^r,   (3)
The hidden state of the controller h_t is conditioned on the memory content vector r_t, and based on this current hidden state of the controller, the model predicts the output label y_t for the input.
The controller also updates the memory by erasing the old content and writing a new content into the memory. The controller computes three vectors: erase vector et â RdcÃ1, write weights ww t â RN Ã1, and candidate memory content vector ¯ct â RdcÃ1. These vectors are used to modify the memory. Erase vector is computed by a simple MLP which is conditioned on the hidden state of the controller ht. The candidate memory content vector ¯ct is computed based on the current hidden state of the controller ht â RdhÃ1 and the input of the controller which is scaled by a scalar gate αt. The αt is a function of the hidden state and the input of the controller.
α_t = f(h_t, x_t),   (4)

c̄_t = ReLU(W_m h_t + α_t W_x x_t).   (5)
where Wm and Wx are trainable matrices and ReLU is the rectiï¬ed linear activation function (Nair and Hinton, 2010). Given the erase, write and candidate memory content vectors (et, ww t , and ¯ct respectively), the memory matrix is updated by,
C_t[j] = (1 − e_t w_t^w[j]) ⊙ C_{t−1}[j] + w_t^w[j] c̄_t,   (6)

where the index j in C_t[j] denotes the j-th row of the content matrix C_t of the memory matrix M_t.
No Operation (NOP) As found in (Joulin and Mikolov, 2015), an additional NOP operation can be useful, since the controller need not access the memory at every timestep. We model this situation by designating one memory cell as a NOP cell, to which the controller attends when it does not need to read or write into the memory, because reading from or writing into this memory cell is completely ignored.
We illustrate and elaborate more on the read and write operations of the D-NTM in Figure 1.
The read weights w_t^r and the write weights w_t^w are the most crucial parts of the model, since the controller decides where to read from and write into the memory by using those. We elaborate on this in the next section.
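A sketch of the read and erase/write operations of Eqs. (3) and (6); the toy one-hot write weight below is only for illustration:

```python
import numpy as np

def read(M, w_r):
    """Eq. (3): r_t = M_t^T w_t^r, an address-weighted sum of memory rows."""
    return M.T @ w_r

def write(C_prev, w_w, e, c_bar):
    """Eq. (6): per-row erase followed by an additive write of c_bar."""
    return (1.0 - w_w[:, None] * e[None, :]) * C_prev + w_w[:, None] * c_bar[None, :]

N, d_c = 120, 28
C = np.random.rand(N, d_c)
w_w = np.eye(N)[5]                         # write everything into row 5
e, c_bar = np.ones(d_c), np.random.rand(d_c)
C_new = write(C, w_w, e, c_bar)
print(np.allclose(C_new[5], c_bar))        # True: row 5 erased, then overwritten
print(np.allclose(C_new[0], C[0]))         # True: untouched rows are unchanged
```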
Figure 1: A graphical illustration of the proposed dynamic neural Turing machine with the recurrent-controller. The controller receives the fact as a continuous vector encoded by a recurrent neural network, computes the read and write weights for addressing the memory. If the D-NTM automatically detects that a query has been received, it returns an answer and terminates.
# 3 Addressing Mechanism
Each of the address vectors (both read and write) is computed in similar ways. First, the controller computes a key vector:
k_t = W_k h_t + b_k.
Both for the read and the write operations, kt â R(da+dc)Ã1. Wk â R(da+dc)ÃN and bk â R(da+dc)Ã1 are the learnable weight matrix and bias respectively of kt. Also, the sharpening factor βt â R ⥠1 is computed as follows:
β_t = softplus(u_βᵀ h_t + b_β) + 1.   (7)
where uβ and bβ are the parameters of the sharpening factor βt and softplus is deï¬ned as follows:
softplus(x) = log(exp(x) + 1) (8)
Given the key kt and sharpening factor βt, the logits for the address weights are then computed by,
zt[i] = βtS (kt, Mt[i]) (9)
where the similarity function is basically the cosine distance, defined such that S(x, y) ∈ R and 1 ≥ S(x, y) ≥ −1:

S(x, y) = (x · y) / (‖x‖ ‖y‖ + ε).

ε is a small positive value to avoid division by zero. We have used ε = 1e−7 in all our experiments. The address weight generation which we have described in this section is the same as the content-based addressing mechanism proposed in (Graves et al., 2014).
# 3.1 Dynamic Least Recently Used Addressing
We introduce a memory addressing operation that can learn to put more emphasis on the least recently used (LRU) memory locations. As observed in (Santoro et al., 2016; Rae et al., 2016), we ï¬nd it easier to learn the write operations with the use of LRU addressing.
To learn a LRU based addressing, ï¬rst we compute the exponentially moving av- erages of the logits (zt) as vt, where it can be computed as vt = 0.1vtâ1 + 0.9zt. We rescale the accumulated vt with γt, such that the controller adjusts the inï¬uence of how much previously written memory locations should effect the attention weights of a particular time-step. Next, we subtract vt from zt in order to reduce the weights of previously read or written memory locations. γt is a shallow MLP with a scalar output and it is conditioned on the hidden state of the controller. γt is parametrized with the parameters uγ and bγ,
"= sigmoid(u] hy +b,), w; = softmax(z; â Â¥Vi-1)-
(10)
(11)
This addressing method increases the weights of the least recently used rows of the memory. The magnitude of the inï¬uence of the least-recently used memory locations is being learned and adjusted with γt. Our LRU addressing is dynamic due to the modelâs ability to switch between pure content-based addressing and LRU. During the training, we do not backpropagate through vt. Due to the dynamic nature of this addressing mechanism, it can be used for both read and write operations. If needed, the model will automatically learn to disable LRU while reading from the memory.
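A sketch of the dynamic LRU addressing of Eqs. (10)-(11); the toy logits and usage trace are our own:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lru_address(z_t, v_prev, gamma_t):
    """Eqs. (10)-(11): subtract the (scaled) usage trace before the softmax."""
    w_t = softmax(z_t - gamma_t * v_prev)
    v_t = 0.1 * v_prev + 0.9 * z_t       # exponential moving average of the logits
    return w_t, v_t

z = np.array([2.0, 1.0, 0.5])
v = np.array([5.0, 0.0, 0.0])            # location 0 was used heavily before
w, _ = lru_address(z, v, gamma_t=1.0)
print(w.round(3))                        # attention shifts away from location 0
```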
The address vector deï¬ned in Equation (11) is a continuous vector. This makes the addressing operation differentiable and we refer to such a D-NTM as continuous D-NTM.
# 3.2 Discrete Addressing
By deï¬nition in Eq. (11), every element in the address vector wt is positive and sums up to one. In other words, we can treat this vector as the probabilities of a categorical distribution C(wt) with dim(wt) choices:
p[j] = wt[j],
where w_t[j] is the j-th element of w_t. We can readily sample from this categorical distribution and form a one-hot vector ŵ_t such that

ŵ_t[k] = I(k = j),

where j ∼ C(w), and I is an indicator function. If we use ŵ_t instead of w_t, then we will read and write from only one memory cell at a time. This makes the addressing operation non-differentiable and we refer to such a D-NTM as discrete D-NTM. In discrete D-NTM we sample the one-hot vector during training. Once training is over, we switch to a deterministic strategy. We simply choose the element of w_t with the largest value to be the index of the target memory cell, such that

ŵ_t[k] = I(k = argmax(w_t)).
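A sketch of the two regimes (sampling during training, argmax at test time); the function names are ours:

```python
import numpy as np

def discrete_address(w, training=True, rng=np.random.default_rng(0)):
    """Sample a one-hot address from C(w) during training; argmax at test time."""
    j = rng.choice(len(w), p=w) if training else int(np.argmax(w))
    one_hot = np.zeros_like(w)
    one_hot[j] = 1.0
    return one_hot

w = np.array([0.1, 0.7, 0.2])
print(discrete_address(w))                  # stochastic one-hot sample
print(discrete_address(w, training=False))  # deterministic: [0. 1. 0.]
```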
# 3.3 Multi-step Addressing
At each time-step, controller may require more than one-step for accessing to the mem- ory. The original NTM addresses this by implementing multiple sets of read, erase and write heads. In this paper, we explore an option of allowing each head to operate more than once at each timestep, similar to the multi-hop mechanism from the end-to-end memory network (Sukhbaatar et al., 2015).
# 4 Training D-NTM
Once the proposed D-NTM is executed, it returns the output distribution p(y^(n) | x_1^(n), ..., x_T^(n); θ) for the n-th example, parameterized by θ. We define our cost function as the negative log-likelihood:

C(θ) = −(1/N) Σ_{n=1}^{N} log p(y^(n) | x_1^(n), ..., x_T^(n); θ),   (12)
where θ is a set of all the parameters of the model.
Continuous D-NTM, just like the original NTM, is fully end-to-end differentiable and hence we can compute the gradient of this cost function by using backpropagation and learn the parameters of the model with a gradient-based optimization algorithm, such as stochastic gradient descent, to train it end-to-end. However, in discrete D- NTM, we use sampling-based strategy for all the heads during training. This clearly makes the use of backpropagation infeasible to compute the gradient, as the sampling procedure is not differentiable.
# 4.1 Training discrete D-NTM
To train discrete D-NTM, we use REINFORCE (Williams, 1992) together with the three variance reduction techniquesâglobal baseline, input-dependent baseline and variance normalizationâ suggested in (Mnih and Gregor, 2014).
Let us deï¬ne R(x) = log p(y|x1, . . . , xT ; θ) as a reward. We ï¬rst center and re- scale the reward by,
R̃(x) = (R(x) − b) / √(σ² + ε),

where b and σ are the running average and standard deviation of R. We can further center it for each input x separately, i.e.,
¯R(x) = ËR(x) â b(x),
where b(x) is computed by a baseline network which takes as input x and predicts its estimated reward. The baseline network is trained to minimize the Huber loss (Huber, 1964) between the true reward ËR(x) and the predicted reward b(x). This is also called as input based baseline (IBB) which is introduced in (Mnih and Gregor, 2014).
We use the Huber loss to learn the baseline b(x) which is deï¬ned by,
H_δ(z) = z² for |z| ≤ δ, and H_δ(z) = δ(2|z| − δ) otherwise,
due to its robustness where z would be ¯R(x) in this case. As a further measure to reduce the variance, we regularize the negative entropy of all those category distributions to facilitate a better exploration during training (Xu et al., 2015).
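A sketch of the reward centering and the Huber loss used for the input-dependent baseline; the numeric values are purely illustrative:

```python
import numpy as np

def center_reward(R, running_mean, running_std, eps=1e-8):
    """Center and rescale the reward with running statistics (variance reduction)."""
    return (R - running_mean) / np.sqrt(running_std ** 2 + eps)

def huber(z, delta=1.0):
    """Huber loss H_delta(z) used to fit the baseline b(x) to the true reward."""
    return np.where(np.abs(z) <= delta, z ** 2, delta * (2.0 * np.abs(z) - delta))

R_hat = center_reward(R=-1.3, running_mean=-2.0, running_std=0.5)
print(huber(R_hat - 0.2))  # b(x) = 0.2 is an assumed baseline prediction
```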
Then, the cost function for each training example is approximated as in Equation (13). In this equation, we write the terms related to compute the REINFORCE gradients that includes terms for the entropy regularization on the action space, the likelihood- ratio term to compute the REINFORCE gradients both for the read and the write heads.
C"(0) = â log p(y|xur, Wi.7, Wy) J -»y R(x â)(log p(w x17) + log p(w? |X1-r) j=l a w'|x7) +H(w 4 |[Xur))- (13)
where J is the number of addressing steps, λH is the entropy regularization coefï¬- cient, and H denotes the entropy.
# 4.2 Curriculum Learning for the Discrete Attention
Training the discrete attention with a feedforward controller and REINFORCE is challenging. We propose to use a curriculum strategy for training with the discrete attention in order to tackle this problem. For each minibatch, the controller stochastically decides to choose either the discrete or the continuous weights based on the random variable π_n with probability p_n, where n stands for the number of k-minibatch updates, such that we only update p_n every k minibatch updates. π_n is a Bernoulli random variable sampled with probability p_n, π_n ∼ Bernoulli(p_n). The model will use either the discrete or the continuous attention based on π_n. We start the training procedure with p_0 = 1, and during training p_n is annealed to 0.
We can rewrite the weights w_t as in Equation (14), where they are expressed as the combination of continuous attention weights w̄_t and discrete attention weights ŵ_t, with π_n being a binary variable that chooses between them:

w_t = π_n w̄_t + (1 − π_n) ŵ_t.   (14)
By using this curriculum learning strategy, at the beginning of the training the model learns to use the memory mainly with the continuous attention. As we anneal p_n, the model relies more and more on the discrete attention.
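A sketch of the per-minibatch mixing of Eq. (14); the annealing schedule itself is left abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def curriculum_weights(w_cont, w_disc, p_n):
    """Eq. (14): pick continuous or discrete attention via pi_n ~ Bernoulli(p_n)."""
    pi_n = rng.binomial(1, p_n)
    return pi_n * w_cont + (1 - pi_n) * w_disc

w_cont = np.array([0.2, 0.5, 0.3])   # continuous attention weights
w_disc = np.array([0.0, 1.0, 0.0])   # discrete (one-hot) attention weights
print(curriculum_weights(w_cont, w_disc, p_n=0.9))  # mostly continuous early on
```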
# 4.3 Regularizing D-NTM
If the controller of D-NTM is a recurrent neural network, we ï¬nd it to be important to regularize the training of the D-NTM so as to avoid suboptimal solutions in which the D-NTM ignores the memory and works as a simple recurrent neural network.
Read-Write Consistency Regularizer One such suboptimal solution we have ob- served in our preliminary experiments with the proposed D-NTM is that the D-NTM uses the address part A of the memory matrix simply as an additional weight matrix, rather than as a means to accessing the content part C. We found that this pathologi- cal case can be effectively avoided by encouraging the read head to point to a memory cell which has also been pointed by the write head. This can be implemented as the following regularization term:
R_rw(w^r, w^w) = λ Σ_{t'=1}^{T} ‖1 − ((1/t') Σ_{t=1}^{t'} w_t^w)ᵀ w_{t'}^r‖²₂   (15)

In the equation above, w_t^w are the write weights and w_t^r are the read weights.
Next Input Prediction as Regularization Temporal structure is a strong signal that should be exploited by the controller based on a recurrent neural network. We exploit this structure by letting the controller predict the input in the future. We maximize the predictability of the next input by the controller during training. This is equivalent to minimizing the following regularizer:
R_pred(W) = −Σ_{t=0}^{T} log p(x_{t+1} | x_t, w_t^r, w_t^w, e_t, M_t; θ)
where xt is the current input and xt+1 is the input at the next timestep. We ï¬nd this regularizer to be effective in our preliminary experiments and use it for bAbI tasks.
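A sketch of the read-write consistency term of Eq. (15); the vectorized form and names are our own:

```python
import numpy as np

def rw_consistency(w_w, w_r, lam=1.0):
    """Penalize read weights that ignore previously written locations.
    w_w, w_r: (T, N) write/read weights over T timesteps and N cells."""
    T = w_w.shape[0]
    cum_writes = np.cumsum(w_w, axis=0) / np.arange(1, T + 1)[:, None]
    return lam * np.sum((1.0 - np.sum(cum_writes * w_r, axis=1)) ** 2)

T, N = 4, 6
rng = np.random.default_rng(0)
w = rng.random((T, N))
w /= w.sum(axis=1, keepdims=True)
print(rw_consistency(w, w))  # lower when reads track the average write locations
```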
# 5 Related Work
A recurrent neural network (RNN), which is used as a controller in the proposed D- NTM, has an implicit memory in the form of recurring hidden states. Even with this implicit memory, a vanilla RNN is however known to have difï¬culties in storing in- formation for long time-spans (Bengio et al., 1994; Hochreiter, 1991). Long short-term memory (LSTM, (Hochreiter and Schmidhuber, 1997)) and gated recurrent units (GRU, (Cho et al., 2014)) have been found to address this issue. However all these models based solely on RNNs have been found to be limited when they are used to solve, e.g., algorithmic tasks and episodic question-answering.
In addition to the ï¬nite random access memory of the neural Turing machine, based on which the D-NTM is designed, other data structures have been proposed as external memory for neural networks. In (Sun et al., 1997; Grefenstette et al., 2015; Joulin and Mikolov, 2015), a continuous, differentiable stack was proposed. In (Zaremba et al.,
2015; Zaremba and Sutskever, 2015), grid and tape storage are used. These approaches differ from the NTM in that their memory is unbounded and can grow indeï¬nitely. On the other hand, they are often not randomly accessible. Zhang et al. (2015) proposed a variation of NTM that has a structured memory and they have shown experiments on copy and associative recall tasks with this model.
In parallel to our work (Yang, 2016) and (Graves et al., 2016) proposed new memory access mechanisms to improve NTM type of models. (Graves et al., 2016) reported superior results on a diverse set of algorithmic learning tasks.
Memory networks (Weston et al., 2015b) form another family of neural networks with external memory. In this class of neural networks, information is stored explicitly as it is (in the form of its continuous representation) in the memory, without being erased or modiï¬ed during an episode. Memory networks and their variants have been applied to various tasks successfully (Sukhbaatar et al., 2015; Bordes et al., 2015; Dodge et al., 2015; Xiong et al., 2016; Chandar et al., 2016). Miller et al. (2016) have also indepen- dently proposed the idea of having separate key and value vectors for memory networks. A similar addressing mechanism is also explored in (Reed and de Freitas, 2016) in the context of learning program traces.
Another related family of models is the attention-based neural networks. Neural networks with continuous or discrete attention over an input have shown promising results on a variety of challenging tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2015), machine reading comprehension (Hermann et al., 2015) and image caption generation (Xu et al., 2015). The latter two, the memory network and attention-based networks, are however clearly distinguishable from the D-NTM by the fact that they do not modify the content of the memory.
# 6 Experiments on Episodic Question-Answering
In this section, we evaluate the proposed D-NTM on the synthetic episodic question- answering task called Facebook bAbI (Weston et al., 2015a). We use the version of the dataset that contains 10k training examples per sub-task provided by Facebook.1 For each episode, the D-NTM reads a sequence of factual sentences followed by a question, all of which are given as natural language sentences. The D-NTM is expected to store and retrieve relevant information in the memory in order to answer the question based on the presented facts.
# 6.1 Model and Training Details
We use the same hyperparameters for all the tasks for a given model. We use a recurrent neural network with GRU units to encode a variable-length fact into a ï¬xed-size vec- tor representation. This allows the D-NTM to exploit the word ordering in each fact, unlike when facts are encoded as bag-of-words vectors. We experiment with both a recurrent and feedforward neural network as the controller that generates the read and
# 1 https://research.facebook.com/researchers/1543934539189348
write weights. The controller has 180 units. We train our feedforward controller using the noisy-tanh activation function (Gulcehre et al., 2016) since we were experiencing training difficulties with sigmoid and tanh activation functions. We use both single-step and three-steps addressing with our GRU controller. The memory contains 120 memory cells. Each memory cell consists of a 16-dimensional address part and a 28-dimensional content part.
We set aside a random 10% of the training examples as a validation set for each sub-task and use it for early-stopping and hyperparameter search. We train one D-NTM for each sub-task, using Adam (Kingma and Ba, 2014) with its learning rate set to 0.003 and 0.007 respectively for GRU and feedforward controller. The size of each minibatch is 160, and each minibatch is constructed uniform-randomly from the training set.
# 6.2 Goals
The goal of this experiment is three-fold. First, we present for the ï¬rst time the per- formance of a memory-based network that can both read and write dynamically on the Facebook bAbI tasks2. We aim to understand whether a model that has to learn to write an incoming fact to the memory, rather than storing it as it is, is able to work well, and to do so, we compare both the original NTM and proposed D-NTM against an LSTM-RNN.
Second, we investigate the effect of having to learn how to write. The fact that the NTM needs to learn to write likely has adverse effect on the overall performance, when compared to, for instance, end-to-end memory networks (MemN2N, (Sukhbaatar et al., 2015)) and dynamic memory network (DMN+, (Xiong et al., 2016)) both of which simply store the incoming facts as they are. We quantify this effect in this experiment. Lastly, we show the effect of the proposed learnable addressing scheme.
We further explore the effect of using a feedforward controller instead of the GRU controller. In addition to the explicit memory, the GRU controller can use its own internal hidden state as the memory. On the other hand, the feedforward controller must solely rely on the explicit memory, as it is the only memory available.
# 6.3 Results and Analysis
In Table 1, we ï¬rst observe that the NTMs are indeed capable of solving this type of episodic question-answering better than the vanilla LSTM-RNN. Although the avail- ability of explicit memory in the NTM has already suggested this result, we note that this is the ï¬rst time neural Turing machines have been used in this speciï¬c task.
All the variants of NTM with the GRU controller outperform the vanilla LSTM- RNN. However, not all of them perform equally well. First, it is clear that the proposed dynamic NTM (D-NTM) using the GRU controller outperforms the original NTM with the GRU controller (NTM, CBA only NTM vs. continuous D-NTM, Discrete D-NTM). As discussed earlier, the learnable addressing scheme of the D-NTM allows the con- troller to access the memory slots by location in a potentially nonlinear way. We expect
2Similar experiments were done in the recently published (Graves et al., 2016), but D-NTM results
for bAbI tasks were already available in arxiv by that time.
| Task | LSTM | MemN2N | DMN+ | 1-step LBA* NTM | 1-step CBA NTM | 1-step Soft D-NTM | 1-step Discrete D-NTM | 3-steps LBA* NTM | 3-steps CBA NTM | 3-steps Soft D-NTM | 3-steps Discrete D-NTM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.00 | 0.00 | 0.00 | 16.30 | 16.88 | 5.41 | 6.66 | 0.00 | 0.00 | 0.00 | 0.00 |
| 2 | 81.90 | 0.30 | 0.30 | 57.08 | 55.70 | 58.54 | 56.04 | 61.67 | 59.38 | 46.66 | 62.29 |
| 3 | 83.10 | 2.10 | 1.10 | 74.16 | 55.00 | 74.58 | 72.08 | 83.54 | 65.21 | 47.08 | 41.45 |
| 4 | 0.20 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 5 | 1.20 | 0.80 | 0.50 | 1.46 | 20.41 | 1.66 | 1.04 | 0.83 | 1.46 | 1.25 | 1.45 |
| 6 | 51.80 | 0.10 | 0.00 | 23.33 | 21.04 | 40.20 | 44.79 | 48.13 | 54.80 | 20.62 | 11.04 |
| 7 | 24.90 | 2.00 | 2.40 | 21.67 | 21.67 | 19.16 | 19.58 | 7.92 | 37.70 | 7.29 | 5.62 |
| 8 | 34.10 | 0.90 | 0.00 | 25.76 | 21.05 | 12.58 | 18.46 | 25.38 | 8.82 | 11.02 | 0.74 |
| 9 | 20.20 | 0.30 | 0.00 | 24.79 | 24.17 | 36.66 | 34.37 | 37.80 | 0.00 | 39.37 | 32.50 |
| 10 | 30.10 | 0.00 | 0.00 | 41.46 | 33.13 | 52.29 | 50.83 | 56.25 | 23.75 | 20.00 | 20.83 |
| 11 | 10.30 | 0.10 | 0.00 | 18.96 | 31.88 | 31.45 | 4.16 | 3.96 | 0.28 | 30.62 | 16.87 |
| 12 | 23.40 | 0.00 | 0.00 | 25.83 | 30.00 | 7.70 | 6.66 | 28.75 | 23.75 | 5.41 | 4.58 |
| 13 | 6.10 | 0.00 | 0.00 | 6.67 | 5.63 | 5.62 | 2.29 | 5.83 | 83.13 | 7.91 | 5.00 |
| 14 | 81.00 | 0.10 | 0.20 | 58.54 | 59.17 | 60.00 | 63.75 | 61.88 | 57.71 | 58.12 | 60.20 |
| 15 | 78.70 | 0.00 | 0.00 | 36.46 | 42.30 | 36.87 | 39.27 | 35.62 | 21.88 | 36.04 | 40.26 |
| 16 | 51.90 | 51.80 | 45.30 | 71.15 | 71.15 | 49.16 | 51.35 | 46.15 | 50.00 | 46.04 | 45.41 |
| 17 | 50.10 | 18.60 | 4.20 | 43.75 | 43.75 | 17.91 | 16.04 | 43.75 | 56.25 | 21.25 | 9.16 |
| 18 | 6.80 | 5.30 | 2.10 | 3.96 | 47.50 | 3.95 | 3.54 | 47.50 | 47.50 | 6.87 | 1.66 |
| 19 | 90.30 | 2.30 | 0.00 | 75.89 | 71.51 | 73.74 | 64.63 | 61.56 | 63.65 | 75.88 | 76.66 |
| 20 | 2.10 | 0.00 | 0.00 | 1.25 | 0.00 | 2.70 | 3.12 | 0.40 | 0.00 | 3.33 | 0.00 |
| Avg.Err. | 36.41 | 4.24 | 2.81 | 31.42 | 33.60 | 29.51 | 27.93 | 32.85 | 32.76 | 24.24 | 21.79 |
Table 1: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU controller. Note that LBA* refers to the NTM that uses both LBA and CBA. In this table, we compare multi-step vs. single-step addressing, the original NTM with location-based + content-based addressing vs. only content-based addressing, and discrete vs. continuous addressing on bAbI.
it to help with tasks that have non-trivial access patterns, and as anticipated, we see a large gain with the D-NTM over the original NTM in the tasks of, for instance, 12 - Conjunction and 17 - Positional Reasoning.
Among the recurrent variants of the proposed D-NTM, we notice signiï¬cant im- provements by using discrete addressing over using continuous addressing. We con- jecture that this is due to certain types of tasks that require precise/sharp retrieval of a stored fact, in which case continuous addressing is in disadvantage over discrete ad- dressing. This is evident from the observation that the D-NTM with discrete addressing signiï¬cantly outperforms that with continuous addressing in the tasks of 8 - Lists/Sets and 11 - Basic Coreference. Furthermore, this is in line with an earlier observation in (Xu et al., 2015), where discrete addressing was found to generalize better in the task of image caption generation.
In Table 2, we also observe that the D-NTM with the feedforward controller and discrete attention performs worse than LSTM and D-NTM with continuous attention. However, when the proposed curriculum strategy from Sec. 4.2 is used, the average test error drops from 68.30 to 37.79.
We empirically found training of the feedforward controller more difï¬cult than that of the recurrent controller. We train our feedforward controller based models four times longer (in terms of the number of updates) than the recurrent controller based ones in order to ensure that they are converged for most of the tasks. On the other hand, the models trained with the GRU controller overï¬t on bAbI tasks very quickly. For example, on tasks 3 and 16 the feedforward controller based model underï¬ts (i.e., high training loss) at the end of the training, whereas with the same number of units the model with the GRU controller can overï¬t on those tasks after 3,000 updates only.
We notice a signiï¬cant performance gap, when our results are compared to the vari- ants of the memory network (Weston et al., 2015b) (MemN2N and DMN+). We at-
tribute this gap to the difï¬culty in learning to manipulate and store a complex input.
Graves et al. (2016) also has also reported results with differentiable neural com- puter (DNC) and NTM on bAbI dataset. However their experimental setup is different from the setup we use in this paper. This makes the comparisons between more difï¬- cult. The main differences broadly are, as the input representations to the controller, they used the embedding representation of each word whereas we have used the rep- resentation obtained with GRU for each fact. Secondly, they report only joint training results. However, we have only trained our models on the individual tasks separately. However, despite the differences in terms of architecture in DNC paper (see Table 1), the mean results of their NTM results is very close to ours 28.5% with std of +/- 2.9 which we obtain 31.4% error.
| Task | FF Soft D-NTM | FF Discrete D-NTM | FF Discrete* D-NTM |
|---|---|---|---|
| 1 | 4.38 | 81.67 | 14.79 |
| 2 | 27.5 | 76.67 | 76.67 |
| 3 | 71.25 | 79.38 | 70.83 |
| 4 | 0.00 | 78.65 | 44.06 |
| 5 | 1.67 | 83.13 | 17.71 |
| 6 | 1.46 | 48.76 | 48.13 |
| 7 | 6.04 | 54.79 | 23.54 |
| 8 | 1.70 | 69.75 | 35.62 |
| 9 | 0.63 | 39.17 | 14.38 |
| 10 | 19.80 | 56.25 | 56.25 |
| 11 | 0.00 | 78.96 | 39.58 |
| 12 | 6.25 | 82.5 | 32.08 |
| 13 | 7.5 | 75.0 | 18.54 |
| 14 | 17.5 | 78.75 | 24.79 |
| 15 | 0.0 | 71.42 | 39.73 |
| 16 | 49.65 | 71.46 | 71.15 |
| 17 | 1.25 | 43.75 | 43.75 |
| 18 | 0.24 | 48.13 | 2.92 |
| 19 | 39.47 | 71.46 | 71.56 |
| 20 | 0.0 | 76.56 | 9.79 |
| Avg.Err. | 12.81 | 68.30 | 37.79 |
Table 2: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with feedforward controller.
# 6.4 Visualization of Discrete Attention
We visualize the attention of D-NTM with GRU controller with discrete attention in Figure 2. From this example, we can see that D-NTM has learned to ï¬nd the correct supporting fact even without any supervision for the particular story in the visualization.
# 6.5 Learning Curves for the Recurrent Controller
In Figure 3, we compare the learning curves of the continuous and discrete attention D-NTM model with recurrent controller on Task 1. Surprisingly, the discrete attention D-NTM converges faster than the continuous-attention model. The main difï¬culty of learning continuous-attention is due to the fact that learning to write with continuous- attention can be challenging.
Figure 2: An example view of the discrete attention over the memory slots for both read (left) and write heads(right). x-axis the denotes the memory locations that are being accessed and y-axis corresponds to the content in the particular memory location. In this ï¬gure, we visualize the discrete-attention model with 3 reading steps and on task 20. It is easy to see that the NTM with discrete-attention accesses to the relevant part of the memory. We only visualize the last-step of the three steps for writing. Because with discrete attention usually the model just reads the empty slots of the memory.
Figure 3: A visualization for the learning curves of continuous and discrete D-NTM models trained on Task 1 using 3 steps. In most tasks, we observe that the discrete attention model with GRU controller does converge faster than the continuous-attention model.
# 6.6 Training with Continuous Attention and Testing with Discrete Attention
In Table 3, we provide results to investigate the effects of using the discrete attention model at test time for a model trained with the feedforward controller and continuous attention. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we have introduced in Section 4.2. The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time. We observe that the Discrete† D-NTM model which is trained with continuous attention outperforms the Discrete D-NTM model.
| Task | continuous (Soft) D-NTM | Discrete* D-NTM |
|---|---|---|
| 1 | 4.38 | 14.79 |
| 2 | 27.5 | 76.67 |
| 3 | 71.25 | 70.83 |
| 4 | 0.00 | 44.06 |
| 5 | 1.67 | 17.71 |
| 6 | 1.46 | 48.13 |
| 7 | 6.04 | 23.54 |
| 8 | 1.70 | 35.62 |
| 9 | 0.63 | 14.38 |
| 10 | 19.80 | 56.25 |
| 11 | 0.00 | 39.58 |
| 12 | 6.25 | 32.08 |
| 13 | 7.5 | 18.54 |
| 14 | 17.5 | 24.79 |
| 15 | 0.0 | 39.73 |
| 16 | 49.65 | 71.15 |
| 17 | 1.25 | 43.75 |
| 18 | 0.24 | 2.92 |
| 19 | 39.47 | 71.56 |
| 20 | 0.0 | 9.79 |
| Avg.Err. | 12.81 | 37.79 |
Table 3: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward controller. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we have introduced in Section 4.2. The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time.
# 6.7 D-NTM with BoW Fact Representation
In Table 4, we provide results for D-NTM using BoW with positional encoding (PE) Sukhbaatar et al. (2015) as the representation of the input facts. The facts representa- tions are provided as an input to the GRU controller. In agreement to our results with the GRU fact representation, with the BoW fact representation we observe improvements with multi-step of addressing over single-step and discrete addressing over continuous addressing.
[Table 4 body: rows for tasks 1–20 and the average, with columns D-NTM (1-step), D-NTM (1-step), D-NTM (3-steps), D-NTM (3-steps).]
Table 4: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU controller and representations of facts are obtained with BoW using positional encoding.
# 7 Experiments on Sequential pMNIST
In the sequential MNIST task, the pixels of MNIST digits are provided to the model in scan-line order, left to right and top to bottom (Le et al., 2015). At the end of the sequence of pixels, the model predicts the label of the digit. We evaluate the D-NTM on a variation of sequential MNIST where the order of the pixels is randomly shuffled; we call this task permuted MNIST (pMNIST). The particular value of this task for our paper is that it measures a model's ability to cope with long-term dependencies. We report our results in Table 5 and observe improvements over the other models that we compare against. In Table 5, "discrete addressing with MAB" refers to the D-NTM model using REINFORCE with a baseline computed from moving averages of the reward, and "discrete addressing with IB" refers to the D-NTM using REINFORCE with an input-based baseline.
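For concreteness, pMNIST inputs can be generated by drawing one fixed random permutation and applying it to the 784-pixel scan-line sequence of every digit; the sketch below is our own illustrative code.

```python
import numpy as np

def make_pmnist(images, seed=0):
    """images: (N, 784) array of MNIST digits in scan-line order.
    Returns the same digits with a single fixed pixel permutation
    applied to every example (the pMNIST task)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(images.shape[1])  # one permutation shared by all
    return images[:, perm]
```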
In Figure 4, we show the learning curves of the input-based baseline (ibb) and regular REINFORCE with a moving-averages baseline (mab) on the pMNIST task. We observe that the input-based baseline is in general much easier to optimize and converges faster, but it can also overfit to the task quickly. Let us note that recurrent batch normalization with an LSTM (Cooijmans et al., 2017) reaches 95.6% accuracy and performs much better than the other algorithms. However, it is possible to use recurrent batch normalization in our model as well, and potentially improve our results on this task.
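The two baseline choices can be sketched as below. This is simplified code of our own: the moving-averages baseline is a scalar exponential moving average of past rewards, and the input-based baseline is shown here as a linear regressor onto the reward; the model's actual input-based baseline may be parameterized differently.

```python
import numpy as np

class MovingAverageBaseline:
    """'mab': a scalar exponential moving average of past rewards."""
    def __init__(self, alpha=0.9):
        self.value, self.alpha = 0.0, alpha
    def __call__(self, _features):
        return self.value
    def update(self, reward):
        self.value = self.alpha * self.value + (1.0 - self.alpha) * reward

class InputBasedBaseline:
    """'ibb': a function of the input, regressed onto the reward (linear here)."""
    def __init__(self, dim, lr=0.01):
        self.w, self.lr = np.zeros(dim), lr
    def __call__(self, features):
        return float(self.w @ features)
    def update(self, reward, features):
        self.w += self.lr * (reward - self(features)) * features  # MSE step

def reinforce_scale(reward, baseline_value):
    """Scalar that multiplies grad log pi(address) in the REINFORCE update."""
    return reward - baseline_value
```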
In all our experiments on the sequential MNIST task, we try to keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller, with content vectors of size 8 and address vectors of size 8.
D-NTM discrete MAB                                        89.6
D-NTM discrete IB                                         92.3
Soft D-NTM                                                93.4
NTM                                                       90.9
I-RNN (Le et al., 2015)                                   82.0
Zoneout (Krueger et al., 2016)                            93.1
LSTM (Krueger et al., 2016)                               89.8
Unitary-RNN (Arjovsky et al., 2016)                       91.4
Recurrent Dropout (Krueger et al., 2016)                  92.5
Recurrent Batch Normalization (Cooijmans et al., 2017)    95.6
Table 5: Test accuracy (%) on the sequential pMNIST task.
[Figure 4 legend: training and validation learning curves for ibb and mab.]
Figure 4: We compare the learning curves of our D-NTM model using discrete attention on the pMNIST task with the input-based baseline and the regular REINFORCE baseline. The x-axis is the number of epochs and the y-axis is the loss.
We use a learning rate of 1e−3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models.
# 8 Stanford Natural Language Inference (SNLI) Task
The SNLI task (Bowman et al., 2015) is designed to test the ability of different machine learning algorithms to infer entailment between two statements. The two statements can entail, contradict, or be neutral to each other. In this paper, we feed the premise, followed by an end-of-premise (EOP) token, and then the hypothesis as a single input sequence to the model. Rocktäschel et al. (2015) similarly trained their model by providing the premise and the hypothesis in one sequence. This ensures that the performance of our model does not rely on particular preprocessing or architectural engineering; rather, we mainly rely on the model's ability to represent the sequence and the dependencies within it efficiently. The model proposed by Rocktäschel et al. (2015) applies attention over its previous hidden states over the premise while it reads the hypothesis.
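The input construction described above can be sketched as below; the EOP token string is our own placeholder.

```python
EOP = "<eop>"  # end-of-premise marker (placeholder name, ours)

def snli_input(premise_tokens, hypothesis_tokens):
    """Feed the premise, an EOP marker, then the hypothesis as one sequence."""
    return premise_tokens + [EOP] + hypothesis_tokens

print(snli_input(["a", "man", "sleeps"], ["a", "person", "rests"]))
# -> ['a', 'man', 'sleeps', '<eop>', 'a', 'person', 'rests']
```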
In Table 6, we report results for different models with or without recurrent dropout (Semeniuta et al., 2016) and layer normalization (Ba et al., 2016).
Our input vocabulary contains 41,200 words. We use GloVe (Pennington et al., 2014) embeddings to initialize the input embeddings. We use a GRU controller with 300 units, and the size of the embeddings is also 300. We optimize our models with Adam. We performed a hyperparameter search for the learning rate via random search, sampling it from log-space between 1e−2 and 1e−4 for each model. We use layer normalization in our controller (Ba et al., 2016).
We have observed significant improvements from using layer normalization and dropout on this task, mainly because overfitting is a severe problem on SNLI. The D-NTM achieves better performance than both the LSTM and the NTM.
Model                                                         Test Acc
Word-by-Word Attention (Rocktäschel et al., 2015)             83.5
Word-by-Word Attention two-way (Rocktäschel et al., 2015)     83.2
LSTM + LayerNorm + Dropout                                    81.7
NTM + LayerNorm + Dropout                                     81.8
D-NTM + LayerNorm + Dropout                                   82.3
LSTM (Bowman et al., 2015)                                    77.6
D-NTM                                                         80.9
NTM                                                           80.2
Table 6: Stanford Natural Language Inference Task
# 9 NTM Toy Tasks
We explore the possibility of using the D-NTM to solve algorithmic tasks such as the copy and associative recall tasks. We train our model on sequences of the same lengths as those used in Graves et al. (2014). We report our results in Table 7. We find that the D-NTM using continuous attention can successfully learn both the "Copy" and "Associative Recall" tasks.
In Table 7, we train our model on sequences of the same length as in the experiments of Graves et al. (2014) and test it on sequences of the maximum length seen during training. We consider a model successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 over sequences of the maximum length seen during training. We set the threshold to 0.02 because, empirically, we observe that models with higher validation costs generalize poorly to longer sequences. The "D-NTM discrete" model in this table is trained with REINFORCE, using moving averages to estimate the baseline.
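A minimal sketch of the copy-task data and the success criterion described above, in our own code (the sequence width is an assumption, and we omit the delimiter token for brevity):

```python
import numpy as np

def copy_example(max_len=20, width=8, rng=None):
    """Random binary sequence of random length; the target is the same
    sequence, to be reproduced after the input has been fully presented."""
    rng = rng or np.random.default_rng()
    T = int(rng.integers(1, max_len + 1))
    seq = rng.integers(0, 2, size=(T, width)).astype(np.float32)
    return seq, seq.copy()  # (input, target)

def is_successful(validation_bce):
    """Paper's criterion: validation binary cross-entropy below 0.02
    on sequences of the maximum length seen during training."""
    return validation_bce < 0.02
```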
Model              Copy       Associative Recall
Soft D-NTM         Success    Success
D-NTM discrete     Success    Failure
NTM                Success    Success
Table 7: NTM Toy Tasks.
On both the copy and associative recall tasks, we try to keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller, with content vectors of size 8 and address vectors of size 8. We use a learning rate of 1e−3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models. For the model with discrete attention, we use REINFORCE with a baseline computed using moving averages.
# 10 Conclusion and Future Work
In this paper we extend neural Turing machines (NTM) by introducing a learnable addressing scheme which allows the NTM to perform highly nonlinear location-based addressing. This extension, to which we refer as the dynamic NTM (D-NTM), is extensively tested with various configurations, including different addressing mechanisms (continuous vs. discrete) and different numbers of addressing steps, on the Facebook bAbI tasks. This is the first time an NTM-type model has been tested on this task, and we observe that the NTM, especially the proposed D-NTM, performs better than a vanilla LSTM-RNN. Furthermore, the experiments revealed that discrete addressing works better than continuous addressing with the GRU controller, and our analysis reveals that this is the case when the task requires precise retrieval of memory content.
Our experiments show that NTM-based models can be weaker than other variants of memory networks which do not learn to write but instead have an explicit mechanism for storing
incoming facts as they are. We conjecture that this is due to the difficulty of learning how to write, manipulate and delete the contents of memory. Despite this difficulty, we find the NTM-based approach, such as the proposed D-NTM, to be a better, future-proof approach, because it can scale to much longer horizons, where it becomes impossible to explicitly store all the experiences.
On the pMNIST task, we show that our model can outperform other similar approaches proposed to deal with long-term dependencies. On the copy and associative recall tasks, we show that our model can solve the algorithmic problems that NTM-type models were proposed to solve.
Finally, we have shown results on the SNLI task, where our model performed better than both the NTM and the LSTM. Our results do not involve any task-specific modifications, and they could be improved further by structuring the architecture of our model according to the SNLI task.
The success of both the learnable addressing and the discrete addressing scheme suggests two future research directions. First, we should try both of these schemes in a wider array of memory-based models, as they are not specific to neural Turing machines. Second, the proposed D-NTM needs to be evaluated on a diverse set of applications, such as text summarization (Rush et al., 2015), visual question answering (Antol et al., 2015) and machine translation, in order to draw more concrete conclusions.
# References
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: visual question answering. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2425â2433, 2015.
Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. ICML 2016, 2016.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings Of The International Con- ference on Representation Learning (ICLR 2015), 2015.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difï¬cult. Neural Networks, IEEE Transactions on, 5(2): 157â166, 1994.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder- decoder for statistical machine translation. In EMNLP, 2014.
Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. arXiv preprint arXiv:1506.07503, 2015.
Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. ICLR 2017, Toulon, France, 2017.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexan- der Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in prepa- ration for MIT Press, 2016. URL http://www.deeplearningbook.org.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819â1827, 2015.
Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. ICML 2016, New York, 2016.
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. arXiv preprint arXiv:1506.03340, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks princi- ple: Reading childrenâs books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.
Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, page 91, 1991.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Peter J. Huber. Robust estimation of a location parameter. Ann. Math. Statist., 35(1): 73â101, 03 1964.
Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
David Krueger, Tegan Maharaj, J´anos Kram´ar, Mohammad Pezeshki, Nicolas Bal- las, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing rnns by randomly preserving hidden activa- tions. arXiv preprint arXiv:1606.01305, 2016.
Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectiï¬ed linear units. arXiv preprint arXiv:1504.00941, 2015.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. In Proceedings Of The Conference on Empirical Methods for Natural Language Processing (EMNLP 2015), 2015.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016. URL http://arxiv.org/abs/1606.03126.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. International Conference on Machine Learning, ICML, 2014.
Vinod Nair and Geoffrey E Hinton. Rectiï¬ed linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807â814, 2010.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vec- tors for word representation. In EMNLP, volume 14, pages 1532â1543, 2014.
Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in NIPS. 2016.
Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR 2016, 2016.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015.
Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Em- pirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 379â389, 2015.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. ICML 2016, 2016.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artiï¬cial Intelli- gence (AAAI-16), 2016.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end mem- ory networks. arXiv preprint arXiv:1503.08895, 2015.
Guo-Zheng Sun, C. Lee Giles, and Hsing-Hen Chen. The neural network pushdown au- tomaton: Architecture, dynamics and training. In Adaptive Processing of Sequences and Data Structures, International Summer School on Neural Networks, pages 296â 345, 1997.
Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015a.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings Of The International Conference on Representation Learning (ICLR 2015), 2015b. In Press.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229â256, 1992.
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings Of The International Conference on Represen- tation Learning (ICLR 2015), 2015.
Greg Yang. Lie access neural turing machine. arXiv preprint arXiv:1602.08671, 2016.
Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal struc- ture. In Computer Vision (ICCV), 2015 IEEE International Conference on. IEEE, 2015.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, abs/1505.00521, 2015.
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
Wei Zhang, Yang Yu, and Bowen Zhou. Structured memory for neural turing machines. arXiv preprint arXiv:1510.03931, 2015.
| {
"id": "1511.02301"
} |
1606.09274 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 |
# Compression of Neural Machine Translation Models via Pruning
Abigail See∗  Minh-Thang Luong∗  Christopher D. Manning Computer Science Department, Stanford University, Stanford, CA 94305 {abisee,lmthang,manning}@stanford.edu
# Abstract
Neural Machine Translation (NMT), like many other deep learning domains, typ- ically suffers from over-parameterization, resulting in large storage sizes. This paper examines three simple magnitude-based pruning schemes to compress NMT mod- els, namely class-blind, class-uniform, and class-distribution, which differ in terms of how pruning thresholds are com- puted for the different classes of weights in the NMT architecture. We demonstrate the efï¬cacy of weight pruning as a compres- sion technique for a state-of-the-art NMT system. We show that an NMT model with over 200 million parameters can be pruned by 40% with very little performance loss as measured on the WMTâ14 English- German translation task. This sheds light on the distribution of redundancy in the NMT architecture. Our main result is that with retraining, we can recover and even surpass the original performance with an 80%-pruned model.
# Introduction
Neural Machine Translation (NMT) is a simple new architecture for translating texts from one lan- guage into another (Sutskever et al., 2014; Cho et al., 2014). NMT is a single deep neural network that is trained end-to-end, holding several advan- tages such as the ability to capture long-range de- pendencies in sentences, and generalization to un- seen texts. Despite being relatively new, NMT has already achieved state-of-the-art translation re- sults for several language pairs including English- French (Luong et al., 2015b), English-German (Jean et al., 2015a; Luong et al., 2015a; Luong and
∗Both authors contributed equally.
[Figure 1 example: the source sentence "I am a student" is translated to the target output "Je suis étudiant", with the source-language input and target-language input shown.]
Figure 1: A simpliï¬ed diagram of NMT.
Manning, 2015; Sennrich et al., 2016), English- Turkish (Sennrich et al., 2016), and English-Czech (Jean et al., 2015b; Luong and Manning, 2016). Figure 1 gives an example of an NMT system.
While NMT has a signiï¬cantly smaller memory footprint than traditional phrase-based approaches (which need to store gigantic phrase-tables and language models), the model size of NMT is still prohibitively large for mobile devices. For exam- ple, a recent state-of-the-art NMT system requires over 200 million parameters, resulting in a stor- age size of hundreds of megabytes (Luong et al., 2015a). Though the trend for bigger and deeper neural networks has brought great progress, it has also introduced over-parameterization, resulting in long running times, overï¬tting, and the storage size issue discussed above. A solution to the over- parameterization problem could potentially aid all three issues, though the ï¬rst (long running times) is outside the scope of this paper.
In this paper we investi- gate the efï¬cacy of weight pruning for NMT as a means of compression. We show that despite
its simplicity, magnitude-based pruning with re- training is highly effective, and we compare three magnitude-based pruning schemes â class-blind, class-uniform and class-distribution. Though re- cent work has chosen to use the latter two, we ï¬nd the ï¬rst and simplest scheme â class-blind â the most successful. We are able to prune 40% of the weights of a state-of-the-art NMT system with negligible performance loss, and by adding a retraining phase after pruning, we can prune 80% with no performance loss. Our pruning experi- ments also reveal some patterns in the distribution of redundancy in NMT. In particular we ï¬nd that higher layers, attention and softmax weights are the most important, while lower layers and the em- bedding weights hold a lot of redundancy. For the Long Short-Term Memory (LSTM) architecture, we ï¬nd that at lower layers the parameters for the input are most crucial, but at higher layers the pa- rameters for the gates also become important.
# 2 Related Work
Pruning the parameters from a neural network, referred to as weight pruning or network prun- ing, is a well-established idea though it can be implemented in many ways. Among the most popular are the Optimal Brain Damage (OBD) (Le Cun et al., 1989) and Optimal Brain Sur- geon (OBS) (Hassibi and Stork, 1993) techniques, which involve computing the Hessian matrix of the loss function with respect to the parameters, in order to assess the saliency of each parame- ter. Parameters with low saliency are then pruned from the network and the remaining sparse net- work is retrained. Both OBD and OBS were shown to perform better than the so-called ânaive magnitude-based approachâ, which prunes param- eters according to their magnitude (deleting pa- rameters close to zero). However, the high com- putational complexity of OBD and OBS compare unfavorably to the computational simplicity of the magnitude-based approach, especially for large networks (Augasta and Kathirvalavakumar, 2013). In recent years, the deep learning renaissance has prompted a re-investigation of network prun- ing for modern models and tasks. Magnitude- based pruning with iterative retraining has yielded strong results for Convolutional Neural Networks (CNN) performing visual tasks. Collins and Kohli (2014) prune 75% of AlexNet parameters with small accuracy loss on the ImageNet task, while
Han et al. (2015b) prune 89% of AlexNet parame- ters with no accuracy loss on the ImageNet task.
Other approaches focus on pruning neurons rather than parameters, via sparsity-inducing regu- larizers (Murray and Chiang, 2015) or âwiring to- getherâ pairs of neurons with similar input weights (Srinivas and Babu, 2015). These approaches are much more constrained than weight-pruning schemes; they necessitate ï¬nding entire zero rows of weight matrices, or near-identical pairs of rows, in order to prune a single neuron. By contrast weight-pruning approaches allow weights to be pruned freely and independently of each other. The neuron-pruning approach of Srinivas and Babu (2015) was shown to perform poorly (it suf- fered performance loss after removing only 35% of AlexNet parameters) compared to the weight- pruning approach of Han et al. (2015b). Though Murray and Chiang (2015) demonstrates neuron- pruning for language modeling as part of a (non- neural) Machine Translation pipeline, their ap- proach is more geared towards architecture selec- tion than compression.
There are many other compression techniques for neural networks, including approaches based on on low-rank approximations for weight matri- ces (Jaderberg et al., 2014; Denton et al., 2014), or weight sharing via hash functions (Chen et al., 2015). Several methods involve reducing the pre- cision of the weights or activations (Courbariaux et al., 2015), sometimes in conjunction with spe- cialized hardware (Gupta et al., 2015), or even us- ing binary weights (Lin et al., 2016). The âknowl- edge distillationâ technique of Hinton et al. (2015) involves training a small âstudentâ network on the soft outputs of a large âteacherâ network. Some approaches use a sophisticated pipeline of several techniques to achieve impressive feats of compres- sion (Han et al., 2015a; Iandola et al., 2016).
Most of the above work has focused on compressing CNNs for vision tasks. We extend the magnitude-based pruning approach of Han et al. (2015b) to recurrent neural networks (RNN), in particular LSTM architectures for NMT, and to our knowledge we are the first to do so. There has been some recent work on compression for RNNs (Lu et al., 2016; Prabhavalkar et al., 2016), but it focuses on other, non-pruning compression techniques. Nonetheless, our general observations on the distribution of redundancy in a LSTM, detailed in Section 4.5, are corroborated by Lu et al. (2016).
[Figure 2 key to weight classes: softmax weights (size V×n), attention weights (n×2n), source and target layer weights (4n×2n each), source and target embedding weights (n×V); word embeddings and hidden layers have length n, one-hot vectors length V.]
Figure 2: NMT architecture. This example has two layers, but our system has four. The different weight classes are indicated by arrows of different color (the black arrows in the top right represent simply choosing the highest-scoring word, and thus require no parameters). Best viewed in color.
# 3 Our Approach
We ï¬rst give a brief overview of Neural Ma- chine Translation before describing the model ar- chitecture of interest, the deep multi-layer recur- rent model with LSTM. We then explain the dif- ferent types of NMT weights together with our ap- proaches to pruning and retraining.
# 3.1 Neural Machine Translation
Neural machine translation aims to directly model the conditional probability p(y|x) of translating a source sentence, x1, . . . , xn, to a target sentence, y1, . . . , ym. It accomplishes this goal through an encoder-decoder framework (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014). The encoder computes a representation s for each source sentence. Based on that source representation, the decoder generates a transla- tion, one target word at a time, and hence, decom- poses the log conditional probability as:
\log p(y|x) = \sum_{t=1}^{m} \log p(y_t \mid y_{<t}, s)   (1)
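As a reading aid, this decomposition is just a running sum of per-step log-probabilities; the function below is our own illustrative code, not part of the original system.

```python
import math

def sentence_log_prob(step_probs):
    """step_probs[t] = p(y_t | y_<t, s), the decoder's probability of the
    t-th target word. Returns log p(y|x) as the sum of per-step log-probs."""
    return sum(math.log(p) for p in step_probs)

# Example: a 3-word translation with per-step probabilities 0.9, 0.8, 0.95.
print(sentence_log_prob([0.9, 0.8, 0.95]))
```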
Most NMT work uses RNNs, but approaches differ in terms of: (a) architecture, which can
be unidirectional, bidirectional, or deep multi- layer RNN; and (b) RNN type, which can be Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) or the Gated Recurrent Unit (Cho et al., 2014).
In this work, we speciï¬cally consider the deep multi-layer recurrent architecture with LSTM as the hidden unit type. Figure 1 illustrates an in- stance of that architecture during training in which the source and target sentence pair are input for su- pervised learning. During testing, the target sen- tence is not known in advance; instead, the most probable target words predicted by the model are fed as inputs into the next timestep. The network stops when it emits the end-of-sentence symbol â a special âwordâ in the vocabulary, represented by a dash in Figure 1.
# 3.2 Understanding NMT Weights
Figure 2 shows the same system in more detail, highlighting the different types of parameters, or weights, in the model. We will go through the architecture from bottom to top. First, a vocab- ulary is chosen for each language, assuming that the top V frequent words are selected. Thus, ev- ery word in the source or target vocabulary can be represented by a one-hot vector of length V .
The source input sentence and target input sen- tence, represented as a sequence of one-hot vec- tors, are transformed into a sequence of word em- beddings by the embedding weights. These em- bedding weights, which are learned during train- ing, are different for the source words and the tar- get words. The word embeddings and all hidden layers are vectors of length n (a chosen hyperpa- rameter).
The word embeddings are then fed as input into the main network, which consists of two multi- layer RNNs âstuck togetherâ â an encoder for the source language and a decoder for the target lan- guage, each with their own weights. The feed- forward (vertical) weights connect the hidden unit from the layer below to the upper RNN block, and the recurrent (horizontal) weights connect the hid- den unit from the previous time-step RNN block to the current time-step RNN block.
The hidden state at the top layer of the decoder is fed through an attention layer, which guides the translation by âpaying attentionâ to relevant parts of the source sentence; for more information see Bahdanau et al. (2015) or Section 3 of Luong et al. (2015a). Finally, for each target word, the top layer hidden unit is transformed by the softmax weights into a score vector of length V . The tar- get word with the highest score is selected as the output translation.
Weight Subgroups in LSTM â For the afore- mentioned RNN block, we choose to use LSTM as the hidden unit type. To facilitate our later discus- sion on the different subgroups of weights within LSTM, we ï¬rst review the details of LSTM as for- mulated by Zaremba et al. (2014) as follows:
\begin{pmatrix} i \\ f \\ o \\ \hat{h} \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} T_{4n,2n} \begin{pmatrix} h_t^{l-1} \\ h_{t-1}^{l} \end{pmatrix}   (2)

c_t^l = f \odot c_{t-1}^l + i \odot \hat{h}   (3)

h_t^l = o \odot \tanh(c_t^l)   (4)
Here, each LSTM block at time t and layer l computes as output a pair of hidden and memory vectors (h_t^l, c_t^l) given the previous pair (h_{t-1}^l, c_{t-1}^l) and an input vector h_t^{l-1} (either from the LSTM block below, or from the embedding weights if l = 1). All of these vectors have length n.
The core of an LSTM block is the weight matrix T_{4n,2n} of size 4n × 2n. This matrix can be decomposed into 8 subgroups that are responsible for the
interactions between {input gate i, forget gate f, output gate o, input signal ĥ} × {feed-forward input h_t^{l-1}, recurrent input h_{t-1}^l}.
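To make this decomposition concrete, the sketch below (our own code, assuming the gate ordering of Equation (2): i, f, o, ĥ stacked along the rows) slices T_{4n,2n} into the 8 subgroups:

```python
import numpy as np

def lstm_subgroups(T, n):
    """Split the 4n x 2n LSTM matrix T into the 8 subgroups
    {i, f, o, h_hat} x {feed-forward, recurrent}."""
    assert T.shape == (4 * n, 2 * n)
    names = ["input_gate", "forget_gate", "output_gate", "input_signal"]
    subgroups = {}
    for g, name in enumerate(names):
        rows = T[g * n:(g + 1) * n]
        subgroups[(name, "feed_forward")] = rows[:, :n]  # acts on h_t^{l-1}
        subgroups[(name, "recurrent")] = rows[:, n:]     # acts on h_{t-1}^{l}
    return subgroups
```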
# 3.3 Pruning Schemes
We follow the general magnitude-based approach of Han et al. (2015b), which consists of pruning weights with smallest absolute value. However, we question the authorsâ pruning scheme with re- spect to the different weight classes, and exper- iment with three pruning schemes. Suppose we wish to prune x% of the total parameters in the model. How do we distribute the pruning over the different weight classes (illustrated in Figure 2) of our model? We propose to examine three different pruning schemes:
1. Class-blind: Take all parameters, sort them by magnitude and prune the x% with smallest magnitude, regardless of weight class. (So some classes are pruned proportionally more than others.)
2. Class-uniform: Within each class, sort the weights by magnitude and prune the x% with smallest magnitude. (So all classes have ex- actly x% of their parameters pruned).
3. Class-distribution: For each class c, weights with magnitude less than λÏc are pruned. Here, Ïc is the standard deviation of that class and λ is a universal parameter chosen such that in total, x% of all parameters are pruned. This is used by Han et al. (2015b).
All these schemes have their seeming advantages. Class-blind pruning is the simplest and adheres to the principle that pruning weights (or equivalently, setting them to zero) is least damaging when those weights are small, regardless of their locations in the architecture. Class-uniform pruning and class- distribution pruning both seek to prune proportion- ally within each weight class, either absolutely, or relative to the standard deviation of that class. We ï¬nd that class-blind pruning outperforms both other schemes (see Section 4.1).
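The three threshold computations can be sketched as below, in our own code; for class-distribution pruning, λ is found here by bisection, which is one possible way of meeting the overall x% budget.

```python
import numpy as np

def prune_masks(classes, x, scheme="class-blind"):
    """classes: dict mapping class name -> weight array.
    Returns dict of boolean masks (True = keep), pruning x% of parameters."""
    if scheme == "class-blind":
        all_w = np.concatenate([np.abs(w).ravel() for w in classes.values()])
        t = np.percentile(all_w, x)                  # one global threshold
        return {n: np.abs(w) >= t for n, w in classes.items()}
    if scheme == "class-uniform":
        return {n: np.abs(w) >= np.percentile(np.abs(w), x)  # per-class
                for n, w in classes.items()}
    # class-distribution: threshold lambda * sigma_c per class; bisect lambda
    # so that x% of all parameters fall below their class threshold.
    sigma = {n: w.std() for n, w in classes.items()}
    total = sum(w.size for w in classes.values())
    lo, hi = 0.0, 10.0
    for _ in range(50):
        lam = (lo + hi) / 2.0
        pruned = sum(int((np.abs(w) < lam * sigma[n]).sum())
                     for n, w in classes.items())
        lo, hi = (lam, hi) if pruned < total * x / 100.0 else (lo, lam)
    return {n: np.abs(w) >= lam * sigma[n] for n, w in classes.items()}
```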
# 3.4 Retraining
In order to prune NMT models aggressively with- out performance loss, we retrain our pruned net- works. That is, we continue to train the remaining weights, but maintain the sparse structure intro- duced by pruning. In our implementation, pruned
[Figure 3 axes: BLEU score (y) vs. percentage pruned (x), with curves for class-blind, class-uniform and class-distribution pruning.]
Figure 3: Effects of different pruning schemes.
weights are represented by zeros in the weight matrices, and we use binary "mask" matrices, which represent the sparse structure of a network, to ignore updates to weights at pruned locations. This implementation has the advantage of simplicity as it requires minimal changes to the training and deployment code, but we note that a more complex implementation utilizing sparse matrices and sparse matrix multiplication could potentially yield speed improvements. However, such an implementation is beyond the scope of this paper.
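A minimal sketch of one mask-based retraining update, in our own code (the paper's implementation is in MATLAB):

```python
def masked_sgd_step(weights, grads, masks, lr):
    """One retraining update. `masks` are the binary matrices described
    above (1 = kept weight, 0 = pruned); multiplying the gradient by the
    mask ignores updates at pruned locations, so pruned weights stay zero."""
    for name in weights:
        weights[name] -= lr * masks[name] * grads[name]
```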
# 4 Experiments
We evaluate the effectiveness of our pruning approaches on a state-of-the-art NMT model.1 Speciï¬cally, an attention-based English-German NMT system from Luong et al. (2015a) is consid- ered. Training data was obtained from WMTâ14 consisting of 4.5M sentence pairs (116M English words, 110M German words). For more details on training hyperparameters, we refer readers to Section 4.1 of Luong et al. (2015a). All models are tested on newstest2014 (2737 sentences). The model achieves a perplexity of 6.1 and a BLEU score of 20.5 (after unknown word replacement).2 When retraining pruned NMT systems, we use the following settings: (a) we start with a smaller learning rate of 0.5 (the original model uses a learning rate of 1.0), (b) we train for fewer epochs, 4 instead of 12, using plain SGD, (c) a simple learning rate schedule is employed; after 2 epochs, we begin to halve the learning rate every half an epoch, and (d) all other hyperparameters are the
1We thank the authors of Luong et al. (2015a) for provid- ing their trained models and assistance in using the codebase at https://github.com/lmthang/nmt.matlab.
2The performance of this model is reported under row global (dot) in Table 4 of Luong et al. (2015a).
same, such as mini-batch size 128, maximum gra- dient norm 5, and dropout with probability 0.2.
# 4.1 Comparing pruning schemes
Despite its simplicity, we observe in Figure 3 that class-blind pruning outperforms both other schemes in terms of translation quality at all prun- ing percentages. In order to understand this result, for each of the three pruning schemes, we pruned each class separately and recorded the effect on performance (as measured by perplexity). Figure 4 shows that with class-uniform pruning, the over- all performance loss is caused disproportionately by a few classes: target layer 4, attention and soft- max weights. Looking at Figure 5, we see that the most damaging classes to prune also tend to be those with weights of greater magnitude â these classes have much larger weights than others at the same percentile, so pruning them under the class- uniform pruning scheme is more damaging. The situation is similar for class-distribution pruning. By contrast, Figure 4 shows that under class- blind pruning, the damage caused by pruning soft- max, attention and target layer 4 weights is greatly decreased, and the contribution of each class to- wards the performance loss is overall more uni- form. In fact, the distribution begins to reï¬ect the number of parameters in each class â for ex- ample, the source and target embedding classes have larger contributions because they have more weights. We use only class-blind pruning for the rest of the experiments.
Figure 4 also reveals some interesting informa- tion about the distribution of redundancy in NMT architectures â namely it seems that higher lay- ers are more important than lower layers, and that attention and softmax weights are crucial. We will explore the distribution of redundancy further in Section 4.5.
# 4.2 Pruning and retraining
Pruning has an immediate negative impact on per- formance (as measured by BLEU) that is exponen- tial in pruning percentage; this is demonstrated by the blue line in Figure 6. However we ï¬nd that up to about 40% pruning, performance is mostly un- affected, indicating a large amount of redundancy and over-parameterization in NMT.
We now consider the effect of retraining pruned models. The orange line in Figure 6 shows that after retraining the pruned models, baseline performance (20.48 BLEU) is both recovered and improved upon, up to 80% pruning (20.91 BLEU), with only a small performance loss at 90% pruning (20.13 BLEU).
[Figure 4 axes: perplexity change (y) per weight class (x): source/target layers 1–4, attention, softmax, and source/target embeddings, with bars for each of the three pruning schemes.]
Figure 4: "Breakdown" of performance loss (i.e., perplexity increase) by weight class, when pruning 90% of weights using each of the three pruning schemes. Each of the first eight classes has 8 million weights, attention has 2 million, and the last three have 50 million weights each.
[Figure 6 axes: BLEU score (y) vs. percentage pruned (x).]
Figure 5: Magnitude of largest deleted weight vs. perplexity change, for the 12 different weight classes when pruning 90% of parameters by class- uniform pruning.
Figure 6: Performance of pruned models (a) after pruning, (b) after pruning and retraining, and (c) when trained with sparsity structure from the out- set (see Section 4.3).
This may seem surprising, as we might not expect a sparse model to significantly outperform a model with five times as many parameters. There are several possible explanations, two of which are given below.
Firstly, we found that the less-pruned models perform better on the training set than the vali- dation set, whereas the more-pruned models have closer performance on the two sets. This indicates that pruning has a regularizing effect on the re- training phase, though clearly more is not always better, as the 50% pruned and retrained model has better validation set performance than the 90%
pruned and retrained model. Nonetheless, this reg- ularization effect may explain why the pruned and retrained models outperform the baseline.
Alternatively, pruning may serve as a means to escape a local optimum. Figure 7 shows the loss function over time during the training, pruning and retraining process. During the original training process, the loss curve ï¬attens out and seems to converge (note that we use early stopping to ob- tain our baseline model, so the original model was trained for longer than shown in Figure 7). Prun- ing causes an immediate increase in the loss func- tion, but enables further gradient descent, allowing the retraining process to ï¬nd a new, better local optimum. It seems that the disruption caused by
[Figure 8 layout: the source/target embedding matrices (columns ordered from most to least common word) and the eight source/target layer matrices, whose rows split into input gate, forget gate, output gate and input blocks, and whose columns split into feed-forward and recurrent halves.]
Figure 8: Graphical representation of the location of small weights in various parts of the model. Black pixels represent weights with absolute size in the bottom 80%; white pixels represent those with absolute size in the top 20%. Equivalently, these pictures illustrate which parameters remain after pruning 80% using our class-blind pruning scheme.
[Figure 7 axes: loss (y) vs. training iterations, ×10^5 (x).]
Figure 7: The validation set loss during training, pruning and retraining. The vertical dotted line marks the point when 80% of the parameters are pruned. The horizontal dotted line marks the best performance of the unpruned baseline.
pruning is beneï¬cial in the long-run.
# 4.3 Starting with sparse models
The favorable performance of the pruned and re- trained models raises the question: can we get a shortcut to this performance by starting with sparse models? That is, rather than train, prune, and retrain, what if we simply prune then train? To test this, we took the sparsity structure of our 50%â90% pruned models, and trained completely new models with the same sparsity structure. The purple line in Figure 6 shows that the âsparse from the beginningâ models do not perform as well as the pruned and retrained models, but they do come close to the baseline performance. This shows that while the sparsity structure alone contains useful information about redundancy and can therefore produce a competitive compressed model, it is im- portant to interleave pruning with training.
Though our method involves just one pruning stage, other pruning methods interleave pruning with training more closely by including several iterations (Collins and Kohli, 2014; Han et al., 2015b). We expect that implementing this for NMT would likely result in further compression and performance improvements.
# 4.4 Storage size
The original unpruned model (a MATLAB file) has size 782MB. The 80% pruned and retrained model is 272MB, which is a 65.2% reduction. In this work we focus on compression in terms of number of parameters rather than storage size,
because it is invariant across implementations.
# 4.5 Distribution of redundancy in NMT
We visualize in Figure 8 the redundancy structure of our NMT baseline model. Black pixels represent weights near to zero (those that can be pruned); white pixels represent larger ones. First we consider the embedding weight matrices, whose columns correspond to words in the vocabulary. Unsurprisingly, in Figure 8, we see that the parameters corresponding to the less common words are more dispensable. In fact, at the 80% pruning rate, for 100 uncommon source words and 1194 uncommon target words, we delete all parameters corresponding to that word. This is not quite the same as removing the word from the vocabulary: true out-of-vocabulary words are mapped to the embedding for the "unknown word" symbol, whereas these "pruned-out" words are mapped to a zero embedding. However, in the original unpruned model these uncommon words already had near-zero embeddings, indicating that the model was unable to learn sufficiently distinctive representations.
Returning to Figure 8, now look at the eight weight matrices for the source and target connections at each of the four layers. Each matrix corresponds to the 4n × 2n matrix T_{4n,2n} in Equation (2). In all eight matrices, we observe (as does Lu et al. (2016)) that the weights connecting to the input ĥ are most crucial, followed by the input gate i, then the output gate o, then the forget gate f. This is particularly true of the lower layers, which focus primarily on the input ĥ. However, for higher layers, especially on the target side, weights connecting to the gates are as important as those connecting to the input ĥ. The gates represent the LSTM's ability to add to, delete from or retrieve information from the memory cell. Figure 8 therefore shows that these sophisticated memory cell abilities are most important at the end of the NMT pipeline (the top layer of the decoder). This is reasonable, as we expect higher-level features to be learned later in a deep learning pipeline.
We also observe that for lower layers, the feed- forward input is much more important than the re- current input, whereas for higher layers the recur- rent input becomes more important. This makes sense: lower layers concentrate on the low-level information from the current word embedding (the feed-forward input), whereas higher layers make
use of the higher-level representation of the sen- tence so far (the recurrent input).
Lastly, on close inspection, we notice several white diagonals emerging within some subsquares of the matrices in Figure 8, indicating that even without initializing the weights to identity ma- trices (as is sometimes done (Le et al., 2015)), an identity-like weight matrix is learned. At higher pruning percentages, these diagonals be- come more pronounced.
# 5 Generalizability of our results
To test the generalizability of our results, we also test our pruning approach on a smaller, non- state-of-the-art NMT model trained on the WIT3 Vietnamese-English dataset (Cettolo et al., 2012), which consists of 133,000 sentence pairs. This model is effectively a scaled-down version of the state-of-the-art model in Luong et al. (2015a), with fewer layers, smaller vocabulary size, smaller hid- den layer size, no attention mechanism, and about 11% as many parameters in total. It achieves a BLEU score of 9.61 on the validation set.
Although this model and its training set are on a different scale to our main model, and the lan- guage pair is different, we found very similar re- sults. For this model, it is possible to prune 60% of parameters with no immediate performance loss, and with retraining it is possible to prune 90%, and regain original performance. Our main observa- tions from Sections 4.1 to 4.5 are also replicated; in particular, class-blind pruning is most success- ful, âsparse from the beginningâ models are less successful than pruned and retrained models, and we observe the same patterns as seen in Figure 8.
# 6 Future Work
As noted in Section 4.3, including several iterations of pruning and retraining would likely improve the compression and performance of our pruning method. If possible, it would be highly valuable to exploit the sparsity of the pruned models to speed up training and runtime, perhaps through sparse matrix representations and multiplications (see Section 3.4). Though we have found magnitude-based pruning to perform very well, it would be instructive to revisit the original claim that other pruning methods (for example Optimal Brain Damage and Optimal Brain Surgeon) are more principled, and to perform a comparative study.
# 7 Conclusion
We have shown that weight pruning with retrain- ing is a highly effective method of compression and regularization on a state-of-the-art NMT sys- tem, compressing the model to 20% of its size with no loss of performance. Though we are the ï¬rst to apply compression techniques to NMT, we obtain a similar degree of compression to other current work on compressing state-of-the-art deep neural networks, with an approach that is simpler than most. We have found that the absolute size of pa- rameters is of primary importance when choosing which to prune, leading to an approach that is ex- tremely simple to implement, and can be applied to any neural network. Lastly, we have gained insight into the distribution of redundancy in the NMT architecture.
# 8 Acknowledgment
This work was partially supported by NSF Award IIS-1514268 and partially supported by a gift from Bloomberg L.P. We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Lastly, we ac- knowledge NVIDIA Corporation for the donation of Tesla K40 GPUs.
# References
M. Gethsiyal Augasta and Thangairulappan Kathir- valavakumar. 2013. Pruning algorithms of neural networks - a comparative study. Central European Journal of Computer Science, 3(3):105â115.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. Wit3: Web inventory of transcribed and translated talks. In EAMT.
Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. 2015. Compressing neural networks with the hashing trick. In ICML.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP.
Maxwell D. Collins and Pushmeet Kohli. 2014. Mem- ory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. Training deep neural networks with low precision multiplications. In ICLR workshop.
Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting lin- ear structure within convolutional networks for efï¬- cient evaluation. In NIPS.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrish- nan, and Pritish Narayanan. 2015. Deep learning with limited numerical precision. In ICML.
Song Han, Huizi Mao, and William J Dally. 2015a. Deep compression: Compressing deep neural net- works with pruning, trained quantization and huff- man coding. In ICLR.
Song Han, Jeff Pool, John Tran, and William Dally. 2015b. Learning both weights and connections for efï¬cient neural network. In NIPS.
Babak Hassibi and David G. Stork. 1993. Second or- der derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. 2016. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and < 0.5MB model size. arXiv preprint arXiv:1602.07360.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisser- man. 2014. Speeding up convolutional neural net- works with low rank expansions. In NIPS.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015a. On using very large target vocabulary for neural machine translation. In ACL.
Sébastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015b. Montreal neural machine translation systems for WMT'15. In WMT.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP.
Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hin- ton. 2015. A simple way to initialize recurrent networks of rectiï¬ed linear units. arXiv preprint arXiv:1504.00941.
Yann Le Cun, John S. Denker, and Sara A. Solla. 1989. Optimal brain damage. In NIPS.
Zhouhan Lin, Matthieu Courbariaux, Roland Memise- vic, and Yoshua Bengio. 2016. Neural networks with few multiplications. In ICLR.
Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. 2016. Learning compact recurrent neural networks. In ICASSP.
Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domain. In IWSLT.
Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention- based neural machine translation. In EMNLP.
Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Address- ing the rare word problem in neural machine trans- lation. In ACL.
Kenton Murray and David Chiang. 2015. Auto-sizing neural networks: With applications to n-gram lan- guage models. In EMNLP.
Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, and Ian McGraw. 2016. On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition. In ICASSP.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In ACL.
Suraj Srinivas and R. Venkatesh Babu. 2015. Data- free parameter pruning for deep neural networks. In BMVC.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In NIPS.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329. | {
"id": "1602.07360"
} |
1606.08514 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 |
# Towards Verified Artificial Intelligence
Sanjit A. Seshia∗, Dorsa Sadigh†, and S. Shankar Sastry∗
∗UC Berkeley   †Stanford University dorsa@cs.stanford.edu
July 21, 2020
# Abstract
Verified artificial intelligence (AI) is the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically-specified requirements. This paper considers Verified AI from a formal methods perspective. We describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges.
# 1 Introduction
Artificial intelligence (AI) is a term used for computational systems that attempt to mimic aspects of human intelligence, including functions we intuitively associate with human minds such as "learning" and "problem solving" (e.g., see [17]). Russell and Norvig [66] describe AI as the study of general principles of rational agents and components for constructing these agents. We interpret the term AI broadly to include closely-related areas such as machine learning (ML) [53]. Systems that heavily use AI, henceforth referred to as AI-based systems, have had a significant impact in society in domains that include healthcare, transportation, finance, social networking, e-commerce, education, etc. This growing societal-scale impact has brought with it a set of risks and concerns including errors in AI software, cyber-attacks, and safety of AI-based systems [64, 21, 4]. Therefore, the question of verification and validation of AI-based systems has begun to demand the attention of the research community. We define "Verified AI" as the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically-specified requirements. How can we achieve this goal?
A natural starting point is to consider formal methods, a field of computer science and engineering concerned with the rigorous mathematical specification, design, and verification of systems [86, 16]. At its core, formal methods is about proof: formulating specifications that form proof obligations, designing systems to meet those obligations, and verifying, via algorithmic proof search, that the systems indeed meet their specifications. A spectrum of formal methods, from specification-driven testing and simulation [29], to model checking [14, 62, 15] and theorem proving (see, e.g., [58, 43, 37]), are used routinely in the computer-aided design of integrated circuits and have been widely applied to find bugs in software, analyze embedded systems, and find security vulnerabilities. At the heart of these advances are computational proof engines such as Boolean satisfiability (SAT) solvers [50], Boolean reasoning and manipulation routines based on Binary Decision Diagrams (BDDs) [9], and satisfiability modulo theories (SMT) solvers [6].
In this paper, we consider the challenge of Verified AI from a formal methods perspective. That is, we review the manner in which formal methods have traditionally been applied, analyze the challenges this approach may face for AI-based systems, and propose ideas to overcome these challenges. We emphasize that our discussion is focused on the role of formal methods and does not cover the broader set of techniques
that could be used to improve assurance in AI-based systems. Additionally, we seek to identify challenges applicable to a broad range of AI/ML systems, and not limited to specific technologies such as deep neural networks (DNNs) or reinforcement learning (RL) systems. Our view of the challenges is largely shaped by problems arising from the use of AI and ML in autonomous and semi-autonomous systems, though we believe the ideas presented here apply more broadly.
We begin in Sec. 2 with some brief background on formal verification and an illustrative example. We then outline challenges for Verified AI in Sec. 3 below, and describe ideas to address each of these challenges in Sec. 4.¹
# 2 Background and Illustrative Example
Consider the typical formal verification process as shown in Figure 1, which begins with the following three inputs:

1. A model of the system to be verified, S;
2. A model of the environment, E; and
3. The property to be verified, Φ.

The verifier generates as output a YES/NO answer, indicating whether or not S satisfies the property Φ in environment E. Typically, a NO output is accompanied by a counterexample, also called an error trace, which is an execution of the system that indicates how Φ is violated. Some formal verification tools also include a proof or certificate of correctness with a YES answer.
Figure 1: Formal verification procedure.
In this paper, we take a broad view of formal methods: any technique that uses some aspect of formal specification, or verification, or synthesis, is included. For instance, we include simulation-based hardware verification methods or model-based testing methods for software since they use formal specifications or models to guide the process of simulation or testing.
In order to apply formal verification to AI-based systems, at a minimum, one must be able to represent the three inputs S, E and Φ in formalisms for which (ideally) there exist efficient decision procedures to answer the YES/NO question as described above. However, as we describe in Sec. 3, even constructing good representations of the three inputs is not straightforward, let alone dealing with the complexity of the underlying decision problems and associated design issues.
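To make the procedure of Figure 1 concrete, the following is a minimal, illustrative sketch (not from this paper) of an explicit-state safety checker: it composes a system model S with an environment model E and checks an invariant Φ over all reachable states, returning a counterexample trace on failure. All function names and the toy braking instance are hypothetical.

```python
def verify_safety(init_states, sys_step, env_step, is_safe):
    """Explicit-state reachability check: does every behavior of the
    closed-loop composition of system and environment satisfy the
    invariant `is_safe`? Returns (True, None) or (False, trace)."""
    frontier = [(s, [s]) for s in init_states]
    visited = set(init_states)
    while frontier:
        state, trace = frontier.pop()
        if not is_safe(state):
            return False, trace  # NO, with a counterexample (error trace)
        # Compose: one system move, then one environment move,
        # both treated non-deterministically.
        for s1 in sys_step(state):
            for s2 in env_step(s1):
                if s2 not in visited:
                    visited.add(s2)
                    frontier.append((s2, trace + [s2]))
    return True, None  # YES: the invariant holds on all reachable states

# Toy instance: (ego, obstacle) integer positions on a short road segment;
# the property is that the distance to the obstacle stays positive.
sys_step = lambda s: [(min(s[0] + 1, 9), s[1]), (s[0], s[1])]  # drive or hold
env_step = lambda s: [(s[0], max(s[1] - 1, 0)), (s[0], s[1])]  # obstacle may move
ok, cex = verify_safety({(0, 9)}, sys_step, env_step, lambda s: s[1] - s[0] > 0)
print("YES" if ok else f"NO, counterexample: {cex}")
```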
We will illustrate the ideas in this paper with examples from the domain of (semi-)autonomous driving. Fig. 2 shows an illustrative example of an AI-based system: a closed-loop cyber-physical system comprising
¹The first version of this paper was published in July 2016 in response to the call for white papers for the CMU Exploratory Workshop on Safety and Control for AI held in June 2016, and a second version in October 2017. This is the latest version reflecting the evolution of the authors' view of the challenges and approaches for Verified AI.
a semi-autonomous vehicle with machine learning components along with its environment. Specifically, assume that the semi-autonomous "ego vehicle" has an automated emergency braking system (AEBS) that attempts to detect and classify objects in front of it and actuate the brakes when needed to avert a collision. Figure 2 shows the AEBS as a system composed of a controller (automatic braking), a plant (vehicle subsystem under control, including other parts of the autonomy stack), and a sensor (camera) along with a perception component implemented using a deep neural network. The AEBS, when combined with the vehicle's environment, forms a closed loop cyber-physical system. The controller regulates the acceleration and braking of the plant using the velocity of the ego vehicle and the distance between it and an obstacle. The environment of the ego vehicle comprises both agents and objects outside the vehicle (other vehicles,
pedestrians, road objects, etc.) as well as inside the vehicle (e.g., a driver). A safety requirement for this closed loop system can be informally characterized as the property of maintaining a safe distance between the moving ego vehicle and any other agent or object on the road. However, as we will see in Sec. 3, there are many nuances to the specification, modeling, and verification of a system such as this one.

Figure 2: Example of closed-loop cyber-physical system with machine learning components (introduced in [22]).
# 3 Challenges for Verified AI
We identify five major challenges to achieving formally-verified AI-based systems, described in more detail below.
# 3.1 Environment Modeling
The environments in which AI/ML-based systems operate can be very complex, with considerable uncertainty even about how many and which agents are in the environment (both human and robotic), let alone about their intentions and behaviors. As an example, consider the difficulty in modeling urban traffic environments in which an autonomous car must operate. Indeed, AI/ML is often introduced into these systems precisely to deal with such complexity and uncertainty! From a formal methods perspective, this makes it very hard to create realistic environment models with respect to which one can perform verification or synthesis.
We see the main challenges for environment modeling as being threefold:
• Unknown Variables: In the traditional success stories for formal verification, such as verifying cache coherence protocols or device drivers, the interface between the system S and its environment E is well-defined. The environment can only influence the system through this interface. However, for AI-based systems, such as the autonomous vehicle example of Sec. 2, it may be impossible to precisely define all the variables (features) of the environment. Even in restricted scenarios where the environment variables
(agents) are known, there is a striking lack of information, especially at design time, about their behaviors. Additionally, modeling sensors such as LiDAR that represent the interface to the environment is in itself a major technical challenge.
• Modeling with the Right Fidelity: In traditional uses of formal verification, it is usually acceptable to model the environment as a non-deterministic process subject to constraints specified in a suitable logic or automata-based formalism. Typically such an environment model is termed as being "over-approximate", meaning that it may include (many) more environment behaviors than are possible. Over-approximate environment modeling permits one to perform sound verification without a detailed environment model, which can be inefficient to reason with and hard to obtain. However, for AI-based autonomy, purely non-deterministic modeling is likely to produce highly over-approximate models, which in turn yield too many spurious bug reports, rendering the verification process useless in practice. Moreover, many AI-based systems make distributional assumptions on the environment, thus requiring probabilistic modeling; however, it can be difficult to exactly ascertain the underlying distributions. One can address this by learning a probabilistic model from data, but in this case it is important to remember that the model parameters (e.g., transition probabilities) are only estimates, not precise representations of environment behavior. Thus, verification algorithms cannot consider the resulting probabilistic model to be "perfect"; we need to represent uncertainty in the model itself.
• Modeling Human Behavior: For many AI-based systems, such as semi-autonomous vehicles, human agents are a key part of the environment and/or system. Researchers have attempted modeling humans as non-deterministic or stochastic processes with the goal of verifying the correctness of the overall system [63, 67]. However, such approaches must deal with the variability and uncertainty in human behavior. One could take a data-driven approach based on machine learning (e.g., [55]), but such an approach is sensitive to the expressivity of the features used by the ML model and the quality of data. In order to achieve Verified AI for such human-in-the-loop systems, we need to address the limitations of current human modeling techniques and provide guarantees about their prediction accuracy and convergence. When learned models are used, one must represent any uncertainty in the learned parameters as a first-class entity in the model, and take that into account in verification and control.
The first challenge, then, is to come up with a systematic method of environment modeling that allows one to provide provable guarantees on the system's behavior even when there is considerable uncertainty about the environment.
# 3.2 Formal Specification
Formal verification critically relies on having a formal specification: a precise, mathematical statement of what the system is supposed to do. However, the challenge of coming up with a high-quality formal specification is well known, even in application domains in which formal verification has found considerable success (see, e.g., [7]). This challenge is only exacerbated in AI-based systems. We identify three major problems.

Specification for Hard-to-Formalize Tasks: Consider the perception module in the AEBS controller of Fig. 2, which must detect and classify objects, distinguishing vehicles and pedestrians from other objects. Correctness for this module in the classic formal methods sense requires a formal definition of each type of road user, which is extremely difficult, if not impossible. Similar problems arise for other tasks involving perception and communication, such as natural language processing. How, then, do we specify correctness properties for such a module? What should the specification language be, and what tools can one use to construct a specification?

Quantitative vs. Boolean Specifications: Traditionally, formal specifications tend to be Boolean, mapping a given system behavior to true or false. However, in AI and ML, specifications are often given as objective
functions specifying costs or rewards. Moreover, there can be multiple objectives, some of which must be satisfied together, and others that may need to be traded off against each other in certain environments. What are the best ways to unify Boolean and quantitative approaches to specification? Are there formalisms that can capture commonly discussed properties of AI components, such as robustness and fairness, in a unified manner?

Data vs. Formal Requirements: The view of "data as specification" is common in machine learning. Labeled "ground truth" data is often the only specification of correct behavior. On the other hand, a specification in formal methods is a mathematical property that defines the set of correct behaviors. How can we bridge this gap?
Thus, the second challenge is to design effective methods to specify desired and undesired properties of systems that use AI- or ML-based components.
# 3.3 Modeling Learning Systems
In most traditional applications of formal verification, the system S is precisely known: it is a program or a circuit described in a programming language or hardware description language. The system modeling problem is primarily concerned with reducing the size of S to a more tractable one by abstracting away irrelevant details.
AI-based systems lead to a very different challenge for system modeling, primarily stemming from the
use of machine learning:

• Very high-dimensional input space: ML components used for perception usually operate over very high-dimensional input spaces. For the illustrative example of Sec. 2 from [22], each input RGB image has dimension 1000 × 600 pixels, so the input space contains 256^{1000 × 600 × 3} elements, and in general the input is a stream of such high-dimensional vectors. Although formal methods has been used for high-dimensional input spaces (e.g., in digital circuits), the nature of the input spaces for ML-based perception is different: not entirely Boolean, but hybrid, including both discrete and continuous variables.
• Very high-dimensional parameter/state space: ML components such as deep neural networks have anywhere from thousands to millions of model parameters and primitive components. For example, state-of-the-art DNNs used by the authors in instantiations of the example of Fig. 2 have up to 60 million parameters and tens of layers. This gives rise to a huge search space for verification that requires careful abstraction.
• Online adaptation and evolution: Some learning systems, such as a robot using reinforcement learning, evolve as they encounter new data and situations. For such systems, design-time verification must either account for future changes in the behavior of the system, or else be performed incrementally and online as the learning system evolves.
• Modeling systems in context: For many AI/ML components, their specification is only defined by the context. For example, verifying robustness of a DNN such as the one in Fig. 2 requires us to capture a model of the surrounding system. We need techniques to model ML components along with their context so that semantically meaningful properties can be verified.
# 3.4 Efficient and Scalable Design and Verification of Models and Data
The effectiveness of formal methods in the domains of hardware and software has been driven by advances in underlying "computational engines", e.g., SAT, SMT, numerical simulation, and model checking. Given the scale of AI/ML systems, the complexity of their environments, and the new types of specifications involved, several advances are needed in creating computational engines for efficient and scalable training, testing, design, and verification of AI-based systems. We identify here the key challenges that must be overcome in order to achieve these advances.
Data Generation: Data is the fundamental starting point for machine learning. Any quest to improve the quality of a machine learning system must improve the quality of the data it learns from. Can formal methods help to systematically select, design and augment the data used for machine learning?
We believe the answer is yes, but that more needs to be done. Formal methods has proved effective for the systematic generation of counterexamples and test data that satisfy constraints, including for simulation-based verification of circuits (e.g., [44]) and finding security exploits in commodity software (e.g., [5]). However, the requirements for AI/ML systems are different. The types of constraints can be much more complex, e.g., encoding requirements on "realism" of data captured using sensors from a complex environment such as a traffic situation. We need to generate not just single data items, but an ensemble that satisfies distributional constraints. Additionally, data generation must be selective, e.g., meeting objectives on data set size and diversity for effective training and generalization. All of these additional requirements necessitate the development of a new suite of formal techniques.

Quantitative Verification: Several safety-critical applications of AI-based systems are in robotics and cyber-physical systems. In such systems, the scalability challenge for verification can be very high. In addition to the scale of systems as measured by traditional metrics (dimension of state space, number of components, etc.), the types of components can be much more complex. For instance, in (semi-)autonomous driving, autonomous vehicles and their controllers need to be modeled as hybrid systems combining both discrete and continuous dynamics. Moreover, agents in the environment (humans, other vehicles) may need to be modeled as probabilistic processes. Finally, the requirements may involve not only traditional Boolean specifications on safety and liveness, but also quantitative requirements on system robustness and performance. Yet, most of the existing verification methods are targeted towards answering Boolean verification questions. To address this gap, new scalable engines for quantitative verification must be developed.

Compositional Reasoning: In order for formal methods to scale to large AI/ML systems, compositional (modular) reasoning is essential. In compositional verification, a large system (e.g., program) is split up into its components (e.g., procedures), each component is verified against a specification, and then the component specifications together entail the system-level specification. A common approach for compositional verification is the use of assume-guarantee contracts. For example, a procedure assumes something about its starting state (pre-condition) and in turn guarantees something about its ending state (post-condition). Similar assume-guarantee paradigms have been developed for concurrent software and hardware systems. A theory of assume-guarantee contracts does not yet exist for AI-based systems.
Moreover, AI/ML systems pose a particularly vexing challenge for compositional reasoning. Compositional verification requires compositional specification, i.e., the components must be formally specifiable. However, as noted in Sec. 3.2, it may be impossible to formally specify the correct behavior of a perception component. One of the challenges, then, is to develop techniques for compositional reasoning that do not rely on having complete compositional specifications [75]. Additionally, more work needs to be done to extend the theory and application of compositional reasoning to probabilistic systems and specifications.
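As an illustration of the classical assume-guarantee idea described above, here is a minimal, hypothetical sketch of runtime-checkable contracts in Python. The `contract` helper, the perception and control stand-ins, and their thresholds are all invented for illustration; they are not components from this paper.

```python
def contract(assume, guarantee):
    """Attach an assume-guarantee contract to a component: if the input
    satisfies `assume`, the output must satisfy `guarantee`."""
    def wrap(component):
        def checked(x):
            if not assume(x):
                raise AssertionError("environment violated the assumption")
            y = component(x)
            assert guarantee(x, y), "component violated its guarantee"
            return y
        return checked
    return wrap

# Perception assumes an in-range true distance and guarantees a conservative
# (never over-) estimate; control assumes a non-negative estimate and
# guarantees braking whenever the estimate is small. Composing the two
# checked components is meant to entail a system-level safety property.
@contract(assume=lambda d: 0.0 <= d <= 200.0,
          guarantee=lambda d, est: est <= d)
def estimate_distance(d_true):
    return 0.9 * d_true  # placeholder for an ML perception component

@contract(assume=lambda est: est >= 0.0,
          guarantee=lambda est, brake: brake or est > 5.0)
def controller(est):
    return est <= 5.0  # brake when the estimated distance is small

print(controller(estimate_distance(40.0)))  # False: no braking needed yet
```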
# 3.5 Correct-by-Construction Intelligent Systems
In an ideal world, verification should be integrated with the design process so that the system is "correct-by-construction." Such an approach could either interleave verification steps with compilation/synthesis steps, such as in the register-transfer-level (RTL) design flow common in integrated circuits, or devise synthesis algorithms so as to ensure that the implementation satisfies the specification, such as in reactive synthesis from temporal logic [60]. Can we devise a suitable correct-by-construction design flow for AI-based systems?

Specification-Driven Design of ML Components: Can we design, from scratch, a machine learning component (model) that provably satisfies a formal specification? (This assumes, of course, that we solve the formal specification challenge described above in Sec. 3.2.) The clean-slate design of an ML component has many aspects: (1) designing the data set, (2) synthesizing the structure of the model, (3) generating a
good set of features, (4) synthesizing hyper-parameters and other aspects of ML algorithm selection, and (5) automated techniques for debugging ML models or the specification when synthesis fails. More progress is needed on all these fronts.

Theories of Compositional Design: Another challenge is to design the overall system comprising multiple learning and non-learning components. While theories of compositional design have been developed for digital circuits and embedded systems (e.g., [70, 80]), we do not as yet have such theories for AI-based systems. For example, if two ML models are used for perception on two different types of sensor data (e.g., LiDAR and visual images), and individually satisfy their specifications under certain assumptions, under what conditions can they be used together to improve the reliability of the overall system? And how can one design a planning component so as to overcome limitations of an ML-based perception component that it receives input from?

Bridging Design Time and Run Time for Resilient AI: Due to the complexity of AI-based systems and the environments in which they operate, even if all the challenges for specification and verification are solved, it is likely that one will not be able to prove unconditional safe and correct operation. There will always be situations in which we do not have a provable guarantee of correctness. Therefore, techniques for achieving fault tolerance and error resilience at run time must play a crucial role. In particular, there is not yet a systematic understanding of what can be achieved at design time, how the design process can contribute to safe and correct operation of the AI system at run time, and how the design-time and run-time techniques can interoperate effectively.
# 4 Principles for Verified AI
For each of the challenges described in the preceding section, we suggest a corresponding set of principles to follow in the design/verification process to address that challenge. These five principles are:

1. Use an introspective, data-driven, and probabilistic approach to model the environment;
2. Combine formal specifications of end-to-end behavior with hybrid Boolean-quantitative formalisms for learning systems and perception components, and use specification mining to bridge the data-property gap;
3. For ML components, develop new abstractions, explanations, and semantic analysis techniques;
4. Create a new class of compositional, randomized, and quantitative formal methods for data generation, testing, and verification; and
5. Develop techniques for formal inductive synthesis of AI-based systems and design of safe learning systems, supported by techniques for run-time assurance.
We have successfully applied these principles over the past few years, and, based on this experience, believe that they provide a good starting point for applying formal methods to AI-based systems. Our formal methods perspective on the problem complements other perspectives that have been expressed (e.g., [4]). Experience over the past few years provides evidence that the principles we suggest can point a way towards the goal of Verified AI.
# 4.1 Environment Modeling: Introspection, Probabilities, and Data
Recall from Sec. 3.1 the three challenges for modeling the environment E of an AI-based system S: unknown variables, model fidelity, and human modeling. We propose to tackle these challenges with three corresponding principles.

Introspective Environment Modeling: We suggest to address the unknown variables problem by developing design and verification methods that are introspective, i.e., they algorithmically identify assumptions A that system S makes about the environment E that are sufficient to guarantee the satisfaction of the specification
Φ [76]. The assumptions A must ideally be the weakest such assumptions, and also must be efficient to generate at design time and monitor at run time over available sensors and other sources of information about the environment, so that mitigating actions can be taken when they are violated. Moreover, if there is a human operator involved, one might want A to be translatable into an explanation that is human understandable, so that S can "explain" to the human why it may not be able to satisfy the specification Φ. Dealing with these multiple requirements, as well as the need for good sensor models, makes introspective environment modeling a highly non-trivial task that requires substantial progress [76]. Preliminary work by the authors has shown that such extraction of monitorable assumptions is feasible in very simple cases [48], although more research is required to make this practical.

Active Data-Driven Modeling: We believe human modeling requires an active data-driven approach. Relevant theories from cognitive science and psychology, such as that of bounded rationality [81, 65], must be leveraged, but it is important for those models to be expressed in formalisms compatible with formal methods. Additionally, while using a data-driven approach to infer a model, one must be careful to craft the right model structure and features. A critical aspect of human modeling is to capture human intent. We believe a three-pronged approach is required: first, define model templates/features based on expert knowledge; then, use offline learning to complete the model for design time use; and finally, learn and update environment models at run time by monitoring and interacting with the environment. Initial work has shown how data gathered from driving simulators via human subject experiments can be used to generate models of human driver behavior that are useful for verification and control of autonomous vehicles [67, 69].

Probabilistic Formal Modeling: In order to tackle the model fidelity challenge, we suggest to use formalisms that combine probabilistic and non-deterministic modeling. Where probability distributions can be reliably specified or estimated, one can use probabilistic modeling. In other cases, non-deterministic modeling can be used to over-approximate environment behaviors. While formalisms such as Markov Decision Processes (MDPs) already provide a way to blend probability and non-determinism, we believe techniques that blend probability and logical or automata-theoretic formalisms, such as the paradigm of probabilistic programming [52, 32], can provide an expressive and programmatic way to model environments. We expect that in many cases, such probabilistic programs will need to be learned/synthesized (in part) from data. In this case, any uncertainty in learned parameters must be propagated to the rest of the system and represented in the probabilistic model. For example, the formalism of convex Markov decision processes (convex MDPs) [56, 61, 67] provides a way of representing uncertainty in the values of learned transition probabilities. Algorithms for verification and control may then need to be extended to handle these new abstractions (see, e.g., [61]).
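As a concrete (and simplified) illustration of representing uncertainty in learned transition probabilities, the following hypothetical sketch performs pessimistic value iteration over an interval MDP, a simple special case of the convex MDPs cited above. All names and the toy two-state instance are invented for illustration, not an implementation from the cited work.

```python
def worst_case_expectation(lo, hi, V):
    """Adversarially allocate probability mass within the intervals:
    states with low value get as much mass as the bounds allow."""
    order = sorted(range(len(V)), key=lambda s: V[s])  # worst states first
    p, budget = list(lo), 1.0 - sum(lo)
    for s in order:
        extra = min(hi[s] - lo[s], budget)
        p[s] += extra
        budget -= extra
    return sum(p[s] * V[s] for s in range(len(V)))

def robust_value_iteration(n_states, actions, P_lo, P_hi, reward,
                           gamma=0.95, iters=200):
    """Value iteration for an interval MDP: transition probabilities are
    only known to lie in [P_lo, P_hi] (e.g., estimated from data), so each
    backup evaluates against the worst-case distribution in the interval."""
    V = [0.0] * n_states
    for _ in range(iters):
        V = [max(reward[s][a]
                 + gamma * worst_case_expectation(P_lo[s][a], P_hi[s][a], V)
                 for a in actions) for s in range(n_states)]
    return V

# Two states, one action; an uncertain chance of slipping into bad state 1.
print(robust_value_iteration(
    2, [0],
    P_lo=[[[0.7, 0.2]], [[0.0, 1.0]]],
    P_hi=[[[0.8, 0.3]], [[0.0, 1.0]]],
    reward=[[1.0], [0.0]]))
```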
# 4.2 End-to-End Specifications, Hybrid Specifications, and Specification Mining
Writing formal specifications for AI/ML components is hard, arguably even impossible if the component imitates a human perceptual task. Even so, we think the challenges described in Sec. 3.2 can be addressed by following three guiding principles.

End-to-End/System-Level Specifications: In order to address the specification-for-perception challenge, let us change the problem slightly. We suggest to first focus on precisely specifying the end-to-end behavior of the AI-based system. By "end-to-end" we mean the specification on the entire closed-loop system (see Fig. 2) or a precisely-specifiable sub-system containing the AI/ML component, not on the component alone. Such a specification is also referred to as a "system-level" specification. For our AEBS example, this involves specifying the property Φ corresponding to maintaining a minimum distance from any object during motion. Starting with such a system-level (end-to-end) specification, we then derive from it constraints on the input-output interface of the perception component that guarantee that the system-level specification is satisfied. Such constraints serve as a partial specification under which the perception component can be analyzed (see [22]). Note that these constraints need not be human-readable.
8
Hybrid Quantitative-Boolean Specifications: Boolean and quantitative specifications both have their advantages. On the one hand, Boolean specifications are easier to compose. On the other hand, objective functions lend themselves to optimization-based techniques for verification and synthesis, and to defining finer granularities of property satisfaction. One approach to bridge this gap is to move to quantitative specification languages, such as logics with both Boolean and quantitative semantics (e.g., STL [49]) or notions of weighted automata (e.g., [13]). Another approach is to combine both Boolean and quantitative specifications into a common specification structure, such as a rulebook [10], where specifications can be organized in a hierarchy, compared, and aggregated. Additionally, novel formalisms bridging ideas from formal methods and machine learning are being developed to model the different variants of properties such as robustness, fairness, and privacy, including notions of semantic robustness (see, e.g., [77, 24]).

Specification Mining: In order to bridge the gap between data and formal specifications, we suggest the use of techniques for inferring specifications from behaviors and other artifacts, so-called specification mining techniques (e.g., [26, 47]). Such methods could be used for ML components in general, including for perception components, since in many cases it is not required to have an exact specification or one that is human-readable. Specification mining methods could also be used to infer human intent and other properties from demonstrations [85] or more complex forms of interaction between multiple agents, both human and robotic.
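For intuition on quantitative semantics, here is a minimal, hypothetical sketch of the robustness value of the STL-style property G(dist > d_safe) over a finite simulation trace of the AEBS example. The monitor and the trace values are illustrative, not from the cited work.

```python
def robustness_always_gt(signal, threshold):
    """Quantitative semantics of the STL property G (x > c) over a finite
    trace: the margin by which the worst sample satisfies the predicate.
    Positive robustness means the Boolean property holds; its magnitude
    says how robustly."""
    return min(x - threshold for x in signal)

# Distance-to-obstacle trace from one simulation of the AEBS closed loop.
trace = [20.0, 14.2, 9.5, 6.1, 5.4, 7.0, 11.3]
rho = robustness_always_gt(trace, threshold=5.0)
print(rho)  # 0.4: satisfied, but with only a 0.4 m margin
```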
# 4.3 System Modeling: Abstractions, Explanations, and Semantic Feature Spaces
Let us now consider the challenges, described in Sec. 3.3, arising in modeling systems S that learn from experience. In our opinion, advances in three areas are needed in order to address these challenges.

Automated Abstraction: Techniques for automatically generating abstractions of systems have been the linchpins of formal methods, playing crucial roles in extending the reach of formal methods to large hardware and software systems. In order to address the challenges of very high-dimensional hybrid state spaces and input spaces for ML-based systems, we need to develop effective techniques to abstract ML models into simpler models that are more amenable to formal analysis. Some promising advances in this regard include the use of abstract interpretation to analyze deep neural networks (e.g., [35]), the use of abstractions for falsifying cyber-physical systems with ML components [22], and the development of probabilistic logics that capture guarantees provided by ML algorithms (e.g., [68]).

Explanation Generation: The task of modeling a learning system can be made easier if the learner accompanies its predictions with explanations of how those predictions result from the data and background knowledge. In fact, this idea is not new: it has long been investigated by the ML community under terms such as explanation-based generalization [54]. Recently, there has been a renewal of interest in using logic to explain the output of learning systems (e.g., [84, 40]). Such approaches to generating explanations that are compatible with the modeling languages used in formal methods can make the task of system modeling for verification considerably easier. ML techniques that incorporate causal and counterfactual reasoning [59] can also ease the generation of explanations for use with formal methods.

Semantic Feature Spaces: The verification and adversarial analysis [36] of ML models is more meaningful when the generated adversarial inputs and counterexamples have semantic meaning in the context in which the ML models are used. There is thus a need for techniques that can analyze ML models in the context of the systems within which they are used, i.e., for semantic adversarial analysis [25]. A key step is to represent the semantic feature space modeling the environment in which the ML system operates, as opposed to the concrete feature space which defines the input space for the ML model. This follows the intuition that the semantically meaningful part of the concrete feature space (e.g., images of traffic scenes) forms a much lower-dimensional latent space as compared to the full concrete feature space. For our illustrative example in Fig. 2, the semantic feature space is the lower-dimensional space representing the 3D world around the autonomous vehicle, whereas the concrete feature space is the high-dimensional pixel space. Since the
semantic feature space is lower dimensional, it can be easier to search over (e.g., [22, 38]). However, one typically needs to have a "renderer" that maps a point in the semantic feature space to one in the concrete feature space, and certain properties of this renderer, such as differentiability [46], make it easier to apply formal methods to perform goal-directed search of the semantic feature space.
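Returning to the automated-abstraction principle above, the following hypothetical sketch abstracts a tiny affine+ReLU layer with interval bound propagation, the simplest instance of the abstract-interpretation style of analysis cited in [35]. The weights and the certified bound are invented for illustration.

```python
def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b:
    split each weight by sign to get sound output bounds."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_sum = bias + sum(w * (lo[j] if w >= 0 else hi[j])
                            for j, w in enumerate(row))
        hi_sum = bias + sum(w * (hi[j] if w >= 0 else lo[j])
                            for j, w in enumerate(row))
        out_lo.append(lo_sum)
        out_hi.append(hi_sum)
    return out_lo, out_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps boxes to boxes exactly.
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Soundly (if conservatively) certify that, for all inputs in the box,
# output 0 of this one-layer network stays below 0.5.
lo, hi = interval_affine([-0.1, 0.3], [0.1, 0.5],
                         W=[[1.0, -2.0], [0.5, 0.5]], b=[0.0, 0.1])
lo, hi = interval_relu(lo, hi)
print(hi[0] <= 0.5)  # True: the abstraction proves the property
```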
# 4.4 Compositional and Quantitative Methods for Design and Verification of Models and Data
Consider the challenge, described in Sec. 3.4, of devising computational engines for scalable training, testing, and verification of AI-based systems. We see three promising directions to tackle this challenge.

Controlled Randomization in Formal Methods: Consider the problem of data set design, i.e., systematically generating training data for an ML component in an AI-based system. This synthetic data generation problem has many facets. First, one must define the space of "legal" inputs so that the examples are well formed according to the application semantics. Second, one might want to impose constraints on "realism", e.g., a measure of similarity with real-world data. Third, one might need to impose constraints on the distribution of the generated examples in order to obtain guarantees about convergence of the learning algorithm to the true concept. What can formal methods offer towards solving this problem?
We believe that the answer may lie in a new class of randomized formal methods: randomized algorithms for generating test inputs subject to formal constraints and distribution requirements. Specifically, a recently defined class of techniques, termed control improvisation [31], holds promise. An improviser is a generator of random strings (examples) x that satisfy three constraints: (i) a hard constraint that defines the space of legal x; (ii) a soft constraint defining how the generated x must be similar to real-world examples; and (iii) a randomness requirement defining a constraint on the output distribution. The theory of control improvisation is still in its infancy, and we are just starting to understand the computational complexity and to devise efficient algorithms. Improvisation, in turn, relies on recent progress on computational problems such as constrained random sampling and model counting (e.g., [51, 11, 12]), and generative approaches based on probabilistic programming (e.g., [32]).

Quantitative Verification on the Semantic Feature Space: Recall the challenge to develop techniques for verification of quantitative requirements, where the output of the verifier is not just YES/NO but a numeric value.
The complexity and heterogeneity of AI-based systems means that, in general, formal verification of specifications, Boolean or quantitative, is undecidable. (For example, even deciding whether a state of a linear hybrid system is reachable is undecidable.) To overcome this obstacle posed by computational complexity, one must augment the abstraction and modeling methods discussed earlier in this section with novel techniques for probabilistic and quantitative verification over the semantic feature space. For specification formalisms that have both Boolean and quantitative semantics, such as metric temporal logic, the formulation of verification as optimization is crucial to unifying computational methods from formal methods with those from the optimization literature, such as in simulation-based temporal logic falsification (e.g., [42, 27, 88]), although they must be applied to the semantic feature space for efficiency [23]. Such falsification techniques can also be used for the systematic, adversarial generation of training data for ML components [23]. Techniques for probabilistic verification, such as probabilistic model checking [45, 18], should be extended beyond traditional formalisms such as Markov chains or Markov Decision Processes to verify probabilistic programs over semantic feature spaces. Similarly, work on SMT solving must be extended to more effectively handle cost constraints, in other words, combining SMT solving with optimization methods (e.g., [79, 8]).

Compositional Reasoning: As in all applications of formal methods, modularity will be crucial to scalable verification of AI-based systems. However, compositional design and analysis of AI-based systems faces some unique challenges. First, theories of probabilistic assume-guarantee design and verification need to
be developed for the semantic spaces of such systems, building on some promising initial work (e.g., [57]). Second, we suggest the use of inductive synthesis [74] to generate assume-guarantee contracts algorithmically, to reduce the specification burden and ease the use of compositional reasoning. Third, to handle the case of components, such as perception, that do not have precise formal specifications, we suggest techniques that infer component-level constraints from system-level analysis (e.g., [22]) and use such constraints to focus component-level analysis, including adversarial analysis.
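To tie the quantitative-verification and data-generation threads together, here is a minimal, hypothetical sketch of simulation-based falsification as optimization over a semantic feature space: random search for a scenario minimizing the specification's robustness, where a negative value is a counterexample usable as adversarial training data. The simulator, monitor, and scene features below are simplistic stand-ins, not APIs from the cited tools.

```python
import random

def falsify(simulate, robustness, sample_scene, budget=1000):
    """Randomized search over the semantic feature space for a scenario
    minimizing the robustness of the specification."""
    best = None
    for _ in range(budget):
        scene = sample_scene()             # a point in the semantic space
        rho = robustness(simulate(scene))  # e.g., STL robustness of the trace
        if best is None or rho < best[0]:
            best = (rho, scene)
        if rho < 0:
            break  # falsified: a semantically meaningful failure was found
    return best

# Hypothetical stand-ins for the AEBS example: the "simulator" produces a
# distance trace, and the monitor is the G(dist > 5) robustness of Sec. 4.2.
sample_scene = lambda: {"dist": random.uniform(5, 50),
                        "speed": random.uniform(0, 15)}
simulate = lambda sc: [sc["dist"] - t * sc["speed"] * 0.1 for t in range(20)]
robustness = lambda trace: min(x - 5.0 for x in trace)

print(falsify(simulate, robustness, sample_scene))
```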
# 4.5 Formal Inductive Synthesis, Safe Learning, and Run-Time Assurance
Developing a correct-by-construction design methodology for AI-based systems, with associated tools, is perhaps the toughest challenge of all. For this to be fully solved, the preceding four challenges must be successfully addressed. However, we do not need to wait until we solve those problems in order to start working on this one. Indeed, a methodology to "design for verification" may well ease the task on the other four challenges.

Formal Inductive Synthesis: First consider the problem of synthesizing learning components correct by construction. The emerging theory of formal inductive synthesis [39, 41] addresses this problem. Formal inductive synthesis is the synthesis from examples of programs that satisfy formal specifications. In machine learning terms, it is the synthesis of models/classifiers that additionally satisfy a formal specification. The most common approach to solving a formal inductive synthesis problem is to use an oracle-guided approach. In oracle-guided synthesis, a learner is paired with an oracle who answers queries. The set of query-response types is defined by an oracle interface. For the example of Fig. 2, the oracle can be a falsifier that can generate counterexamples showing how a failure of the learned component violates the system-level specification. This approach, also known as counterexample-guided inductive synthesis [82], has proved effective in many scenarios. In general, oracle-guided inductive synthesis techniques show much promise for the synthesis of learned components by blending expert human insight, inductive learning, and deductive reasoning [73, 74]. These methods also have a close relation to the sub-field of machine teaching [89].

Safe Learning by Design: There has been considerable recent work on using design-time methods to analyze or constrain learning components so as to ensure safe operation within specified assumptions. A prominent example is safe learning-based control (e.g., [3, 28]). In this approach, a safety envelope is pre-computed and a learning algorithm is used to tune a controller within that envelope. Techniques for efficiently computing such safety envelopes based, for example, on reachability analysis [83], are needed. Relatedly, several methods have been proposed for safe reinforcement learning (see [34]). Another promising direction is to enforce properties on ML models through the use of semantic loss functions (e.g., [87, 25]), though this problem is largely unsolved. Finally, the use of theorem proving for ensuring correctness of algorithms used for training ML models (e.g., [72]) is also an important step towards improving the assurance in ML-based systems.

Run-Time Assurance: Due to the undecidability of verification in most instances and the challenge of environment modeling, we believe it will be difficult, if not impossible, to synthesize correct-by-construction AI-based systems or to formally verify correct operation without making restrictive assumptions. Therefore, design-time verification must be combined with run-time assurance, i.e., run-time verification and mitigation techniques. For example, the Simplex technique [78] provides one approach to combining a complex, but error-prone, module with a safe, formally-verified backup module. Recent techniques for combining design-time and run-time assurance methods (e.g., [71, 19, 20]) have shown how unverified components, including those based on AI and ML, can be wrapped within a runtime assurance framework to provide guarantees of safe operation.
However, the problems of extracting environment assumptions and synthesizing them into runtime monitors (e.g., as described for introspective environment modeling [76]) and devising runtime mitigation strategies remain largely unsolved.
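As a minimal illustration of the Simplex-style architecture [78] discussed above, the sketch below switches from an unverified learned controller to a verified baseline whenever the proposed action would leave a pre-computed safety envelope. The controllers, envelope check, and toy state are all hypothetical.

```python
def simplex_step(state, advanced_controller, baseline_controller,
                 stays_in_envelope):
    """Simplex-style runtime assurance: use the unverified (e.g., learned)
    controller only when its proposed action provably keeps the state
    inside a pre-computed safety envelope; otherwise fall back to the
    verified baseline controller."""
    action = advanced_controller(state)
    if stays_in_envelope(state, action):
        return action, "advanced"
    return baseline_controller(state), "baseline"

# Toy instance: 1-D braking. The envelope check requires that the
# next-step distance to the obstacle remains positive.
advanced = lambda s: 0.0   # learned policy: proposes no braking
baseline = lambda s: -5.0  # verified policy: hard braking
safe = lambda s, a: s["dist"] - (s["speed"] + a) > 0.0
print(simplex_step({"dist": 4.0, "speed": 6.0}, advanced, baseline, safe))
# (-5.0, 'baseline'): the learned action would violate the envelope
```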
| Challenges | Principles |
| --- | --- |
| Environment (incl. Human) Modeling | Active Data-Driven, Introspective, Probabilistic Modeling |
| Formal Specification | Start at System Level, Derive Component Specifications; Hybrid Boolean-Quantitative Specification; Specification Mining |
| Modeling Learning Systems | Abstractions, Explanations, Semantic Feature Spaces |
| Efficient and Scalable Design and Verification of Models and Data | Compositional Reasoning, Controlled Randomization, Quantitative Semantic Analysis |
| Correct-by-Construction Intelligent Systems | Formal Inductive Synthesis, Safe Learning by Design, Run-Time Assurance |

Table 1: Summary of the five challenges for Verified AI presented in this paper, and the corresponding principles proposed to address them.
# 5 Conclusion
Taking a formal methods perspective, we have analyzed the challenge of developing and applying formal methods to systems that are substantially based on artificial intelligence or machine learning. As summarized in Table 1, we have identified five main challenges for applying formal methods to AI-based systems. For each of these five challenges, we have identified corresponding principles for design and verification that hold promise for addressing that challenge. Since the original version of this paper was published in 2016, several researchers, including the authors, have been working on addressing these challenges; a few sample advances are described in this paper. In particular, we have developed open-source tools, VerifAI [2] and Scenic [1], that implement techniques based on the principles described in this paper, and which have been applied to industrial-scale systems in the autonomous driving [33] and aerospace [30] domains. These results are but a start and much more remains to be done. The topic of Verified AI promises to continue to be a fruitful area for research in the years to come.
# Acknowledgments
The authors' work has been supported in part by NSF grants CCF-1139138, CCF-1116993, CNS-1545126 (VeHICaL), CNS-1646208, and CCF-1837132 (FMitF), by an NDSEG Fellowship, by the TerraSwarm Research Center, one of six centers supported by the STARnet phase of the Focus Center Research Program (FCRP), a Semiconductor Research Corporation program sponsored by MARCO and DARPA, by the DARPA BRASS and Assured Autonomy programs, by Toyota under the iCyPhy center, and by Berkeley Deep Drive. We gratefully acknowledge the many colleagues with whom our conversations and collaborations have helped shape this article.
# References
[1] Scenic Environment Modeling and Scenario Description Language. http://github.com/BerkeleyLearnVerify/Scenic.

[2] VerifAI: A toolkit for design and verification of AI-based systems. http://github.com/BerkeleyLearnVerify/VerifAI.

[3] Anayo K Akametalu, Jaime F Fisac, Jeremy H Gillula, Shahab Kaynama, Melanie N Zeilinger, and Claire J Tomlin. Reachability-based safe learning with Gaussian processes. In 53rd IEEE Conference on Decision and Control, pages 1424–1431, 2014.
[4] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.

[5] Thanassis Avgerinos, Sang Kil Cha, Alexandre Rebert, Edward J. Schwartz, Maverick Woo, and David Brumley. Automatic exploit generation. Commun. ACM, 57(2):74–84, 2014.

[6] Clark Barrett, Roberto Sebastiani, Sanjit A. Seshia, and Cesare Tinelli. Satisfiability modulo theories. In Armin Biere, Hans van Maaren, and Toby Walsh, editors, Handbook of Satisfiability, volume 4, chapter 8. IOS Press, 2009.

[7] I. Beer, S. Ben-David, C. Eisner, and Y. Rodeh. Efficient detection of vacuity in ACTL formulas. Formal Methods in System Design, 18(2):141–162, 2001.
[8] Nikolaj Bjørner, Anh-Dung Phan, and Lars Fleckenstein. νZ: An optimizing SMT solver. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 194–199. Springer, 2015.
[9] Randal E. Bryant. Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers, C-35(8):677–691, August 1986.

[10] Andrea Censi, Konstantin Slutsky, Tichakorn Wongpiromsarn, Dmitry Yershov, Scott Pendleton, James Fu, and Emilio Frazzoli. Liability, ethics, and culture-aware behavior specification using rulebooks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8536–8542. IEEE, 2019.

[11] Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, and Moshe Y. Vardi. Distribution-aware sampling and weighted model counting for SAT. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI), pages 1722–1730, July 2014.

[12] Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, and Moshe Y. Vardi. On parallel scalable uniform SAT witness generation. In Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pages 304–319, April 2015.

[13] Krishnendu Chatterjee, Laurent Doyen, and Thomas A Henzinger. Quantitative languages. ACM Transactions on Computational Logic (TOCL), 11(4):23, 2010.

[14] Edmund M. Clarke and E. Allen Emerson. Design and synthesis of synchronization skeletons using branching-time temporal logic. In Logic of Programs, pages 52–71, 1981.

[15] Edmund M. Clarke, Orna Grumberg, and Doron A. Peled. Model Checking. MIT Press, 2000.

[16] Edmund M Clarke and Jeannette M Wing. Formal methods: State of the art and future directions. ACM Computing Surveys (CSUR), 28(4):626–643, 1996.

[17] Committee on Information Technology, Automation, and the U.S. Workforce. Information technology and the U.S. workforce: Where are we and where do we go from here? http://www.nap.edu/24649.

[18] Christian Dehnert, Sebastian Junges, Joost-Pieter Katoen, and Matthias Volk. A Storm is coming: A modern probabilistic model checker. In International Conference on Computer Aided Verification (CAV), pages 592–600. Springer, 2017.
[19] Ankush Desai, Tommaso Dreossi, and Sanjit A. Seshia. Combining model checking and runtime verification for safe robotics. In Runtime Verification - 17th International Conference, RV 2017, Seattle, WA, USA, September 13-16, 2017, Proceedings, pages 172–189, 2017.

[20] Ankush Desai, Shromona Ghosh, Sanjit A. Seshia, Natarajan Shankar, and Ashish Tiwari. A runtime assurance framework for programming safe robotics systems. In IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), June 2019.

[21] Thomas G Dietterich and Eric J Horvitz. Rise of concerns about AI: reflections and directions. Communications of the ACM, 58(10):38–40, 2015.

[22] Tommaso Dreossi, Alexandre Donzé, and Sanjit A. Seshia. Compositional falsification of cyber-physical systems with machine learning components. In Proceedings of the NASA Formal Methods Conference (NFM), May 2017.

[23] Tommaso Dreossi, Daniel J. Fremont, Shromona Ghosh, Edward Kim, Hadi Ravanbakhsh, Marcell Vazquez-Chanlatte, and Sanjit A. Seshia. VerifAI: A toolkit for the formal design and analysis of artificial intelligence-based systems. In 31st International Conference on Computer Aided Verification (CAV), July 2019.

[24] Tommaso Dreossi, Shromona Ghosh, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. A formalization of robustness for deep neural networks. In Proceedings of the AAAI Spring Symposium Workshop on Verification of Neural Networks (VNN), March 2019.

[25] Tommaso Dreossi, Somesh Jha, and Sanjit A. Seshia. Semantic adversarial deep learning. In 30th International Conference on Computer Aided Verification (CAV), 2018.

[26] Michael Ernst. Dynamically Discovering Likely Program Invariants. PhD thesis, University of Washington, Seattle, 2000.

[27] Georgios E. Fainekos. Automotive control design bug-finding with the S-TaLiRo tool. In American Control Conference (ACC), page 4096, 2015.

[28] Jaime F Fisac, Anayo K Akametalu, Melanie N Zeilinger, Shahab Kaynama, Jeremy Gillula, and Claire J Tomlin. A general safety framework for learning-based control in uncertain robotic systems. IEEE Transactions on Automatic Control, 64(7):2737–2752, 2018.

[29] Harry Foster. Applied Assertion-Based Verification: An Industry Perspective. Now Publishers Inc., 2009.

[30] Daniel J. Fremont, Johnathan Chiu, Dragos D. Margineantu, Denis Osipychev, and Sanjit A. Seshia. Formal analysis and redesign of a neural network-based aircraft taxiing system with VerifAI. In 32nd International Conference on Computer-Aided Verification (CAV), pages 122–134, 2020.

[31] Daniel J. Fremont, Alexandre Donzé, Sanjit A. Seshia, and David Wessel. Control improvisation. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015), pages 463–474, 2015.

[32] Daniel J. Fremont, Tommaso Dreossi, Shromona Ghosh, Xiangyu Yue, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. Scenic: A language for scenario specification and scene generation. In Proceedings of the 40th annual ACM SIGPLAN conference on Programming Language Design and Implementation (PLDI), June 2019.
[33] Daniel J. Fremont, Edward Kim, Yash Vardhan Pant, Sanjit A. Seshia, Atul Acharya, Xantha Bruso, Paul Wells, Steve Lemke, Qiang Lu, and Shalin Mehta. Formal scenario-based testing of autonomous vehicles: From simulation to the real world. In IEEE Intelligent Transportation Systems Conference (ITSC), 2020.

[34] Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437–1480, 2015.

[35] Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. AI2: Safety and robustness certification of neural networks with abstract interpretation. In IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE, 2018.

[36] Ian Goodfellow, Patrick McDaniel, and Nicolas Papernot. Making machine learning robust against adversarial inputs. Communications of the ACM, 61(7):56–66, 2018.

[37] M. J. C. Gordon and T. F. Melham. Introduction to HOL: A Theorem Proving Environment for Higher-Order Logic. Cambridge University Press, 1993.

[38] Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. Safety verification of deep neural networks. In International Conference on Computer Aided Verification, pages 3–29. Springer, 2017.

[39] S. Jha and S. A. Seshia. A Theory of Formal Synthesis via Inductive Learning. ArXiv e-prints, May 2015.

[40] Susmit Jha, Tuhin Sahai, Vasumathi Raman, Alessandro Pinto, and Michael Francis. Explaining AI decisions using efficient methods for learning sparse Boolean formulae. J. Autom. Reasoning, 63(4):1055–1075, 2019.

[41] Susmit Jha and Sanjit A. Seshia. A Theory of Formal Synthesis via Inductive Learning. Acta Informatica, 2017.

[42] Xiaoqing Jin, Alexandre Donzé, Jyotirmoy Deshmukh, and Sanjit A. Seshia. Mining requirements from closed-loop control models. IEEE Transactions on Computer-Aided Design of Circuits and Systems, 34(11):1704–1717, 2015.

[43] Matt Kaufmann, Panagiotis Manolios, and J. Strother Moore. Computer-Aided Reasoning: An Approach. Kluwer Academic Publishers, 2000.

[44] Nathan Kitchen and Andreas Kuehlmann. Stimulus generation for constrained random simulation. In Proceedings of the 2007 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 258–265. IEEE Press, 2007.

[45] Marta Kwiatkowska, Gethin Norman, and David Parker. PRISM 4.0: Verification of probabilistic real-time systems. In International Conference on Computer Aided Verification (CAV), pages 585–591. Springer, 2011.

[46] Tzu-Mao Li, Miika Aittala, Frédo Durand, and Jaakko Lehtinen. Differentiable Monte Carlo ray tracing through edge sampling. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 37(6):222:1–222:11, 2018.

[47] Wenchao Li. Specification Mining: New Formalisms, Algorithms and Applications. PhD thesis, EECS Department, University of California, Berkeley, Mar 2014.
[48] Wenchao Li, Dorsa Sadigh, S. Shankar Sastry, and Sanjit A. Seshia. Synthesis for human-in-the-loop control systems. In Proceedings of the 20th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pages 470–484, April 2014.

[49] Oded Maler and Dejan Nickovic. Monitoring temporal properties of continuous signals. In FORMATS/FTRTFT, pages 152–166, 2004.

[50] Sharad Malik and Lintao Zhang. Boolean satisfiability: From theoretical hardness to practical success. Communications of the ACM (CACM), 52(8):76–82, 2009.

[51] Kuldeep S. Meel, Moshe Y. Vardi, Supratik Chakraborty, Daniel J. Fremont, Sanjit A. Seshia, Dror Fried, Alexander Ivrii, and Sharad Malik. Constrained sampling and counting: Universal hashing meets SAT solving. In Beyond NP, Papers from the 2016 AAAI Workshop, Phoenix, Arizona, USA, February 12, 2016.

[52] Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L Ong, and Andrey Kolobov. BLOG: Probabilistic models with unknown objects. Statistical Relational Learning, page 373, 2007.

[53] Tom M. Mitchell. Machine Learning. McGraw-Hill, 1997.

[54] Tom M Mitchell, Richard M Keller, and Smadar T Kedar-Cabelli. Explanation-based generalization: A unifying view. Machine Learning, 1(1):47–80, 1986.

[55] Andrew Y. Ng and Stuart J. Russell. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML), pages 663–670, 2000.

[56] A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Journal of Operations Research, pages 780–798, 2005.

[57] Pierluigi Nuzzo, Jiwei Li, Alberto L. Sangiovanni-Vincentelli, Yugeng Xi, and Dewei Li. Stochastic assume-guarantee contracts for cyber-physical system design. ACM Trans. Embed. Comput. Syst., 18(1), January 2019.

[58] S. Owre, J. M. Rushby, and N. Shankar. PVS: A prototype verification system. In Deepak Kapur, editor, 11th International Conference on Automated Deduction (CADE), volume 607 of Lecture Notes in Artificial Intelligence, pages 748–752. Springer-Verlag, June 1992.

[59] Judea Pearl. The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3):54–60, 2019.

[60] Amir Pnueli and Roni Rosner. On the synthesis of a reactive module. In Conference Record of the Sixteenth Annual ACM Symposium on Principles of Programming Languages, Austin, Texas, USA, January 11-13, 1989, pages 179–190, 1989.

[61] Alberto Puggelli, Wenchao Li, Alberto Sangiovanni-Vincentelli, and Sanjit A. Seshia. Polynomial-time verification of PCTL properties of MDPs with convex uncertainties. In Proceedings of the 25th International Conference on Computer-Aided Verification (CAV), July 2013.

[62] Jean-Pierre Queille and Joseph Sifakis. Specification and verification of concurrent systems in CESAR. In Symposium on Programming, number 137 in LNCS, pages 337–351, 1982.

[63] John Rushby. Using model checking to help discover mode confusions and other automation surprises. Reliability Engineering & System Safety, 75(2):167–177, 2002.
16
[64] Stuart Russell, Tom Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Demis Hassabis, Shane Legg, Mustafa Suleyman, Dileep George, and Scott Phoenix. Letter to the editor: Research priorities for robust and beneï¬cial artiï¬cial intelligence: An open letter. AI Magazine, 36(4), 2015.
[65] Stuart J Russell. Rationality and intelligence. Artiï¬cial Intelligence, 94(1-2):57â77, 1997.
[66] Stuart Jonathan Russell and Peter Norvig. Artiï¬cial intelligence: a modern approach. Prentice hall, 2010.
[67] Dorsa Sadigh, Katherine Driggs-Campbell, Alberto Puggelli, Wenchao Li, Victor Shia, Ruzena Bajcsy, Alberto L. Sangiovanni-Vincentelli, S. Shankar Sastry, and Sanjit A. Seshia. Data-driven probabilistic modeling and veriï¬cation of human driver behavior. In Formal Veriï¬cation and Modeling in Human- Machine Systems, AAAI Spring Symposium, March 2014.
[68] Dorsa Sadigh and Ashish Kapoor. Safe control under uncertainty with probabilistic signal temporal logic. In Proceedings of Robotics: Science and Systems, AnnArbor, Michigan, June 2016.
[69] Dorsa Sadigh, Shankar Sastry, Sanjit A. Seshia, and Anca D. Dragan. Information gathering actions over human internal state. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016.
[70] Alberto Sangiovanni-Vincentelli, Werner Damm, and Roberto Passerone. Taming Dr. Frankenstein: Contract-based design for cyber-physical systems. European journal of control, 18(3):217â238, 2012.
[71] John D Schierman, Michael D DeVore, Nathan D Richards, Neha Gandhi, Jared K Cooper, Kenneth R Horneman, Scott Stoller, and Scott Smolka. Runtime assurance framework development for highly adaptive ï¬ight control systems. Technical report, Barron Associates, Inc. Charlottesville, 2015.
[72] Daniel Selsam, Percy Liang, and David L. Dill. Developing bug-free machine learning systems with In Proceedings of the 34th International Conference on Machine Learning, formal mathematics. (ICML), volume 70 of Proceedings of Machine Learning Research, pages 3047â3056. PMLR, 2017.
[73] Sanjit A. Seshia. Sciduction: Combining induction, deduction, and structure for veriï¬cation and syn- thesis. In Proceedings of the Design Automation Conference (DAC), pages 356â365, June 2012.
[74] Sanjit A. Seshia. Combining induction, deduction, and structure for veriï¬cation and synthesis. Pro- ceedings of the IEEE, 103(11):2036â2051, 2015.
[75] Sanjit A. Seshia. Compositional veriï¬cation without compositional speciï¬cation for learning-based systems. Technical Report UCB/EECS-2017-164, EECS Department, University of California, Berke- ley, Nov 2017.
[76] Sanjit A. Seshia. Introspective environment modeling. In 19th International Conference on Runtime Veriï¬cation (RV), pages 15â26, 2019.
[77] Sanjit A. Seshia, Ankush Desai, Tommaso Dreossi, Daniel Fremont, Shromona Ghosh, Edward Kim, Sumukh Shivakumar, Marcell Vazquez-Chanlatte, and Xiangyu Yue. Formal speciï¬cation for deep neural networks. In Proceedings of the International Symposium on Automated Technology for Veriï¬- cation and Analysis (ATVA), pages 20â34, October 2018.
[78] Lui Sha. Using simplicity to control complexity. IEEE Software, 18(4):20â28, 2001.
17
[79] Yasser Shoukry, Pierluigi Nuzzo, Alberto Sangiovanni-Vincentelli, Sanjit A. Seshia, George J. Pappas, In Proceedings of the 10th and Paulo Tabuada. Smc: Satisï¬ability modulo convex optimization. International Conference on Hybrid Systems: Computation and Control (HSCC), April 2017.
[80] Joseph Sifakis. System design automation: Challenges and limitations. Proceedings of the IEEE, 103(11):2093â2103, 2015.
[81] Herbert A Simon. Bounded rationality. In Utility and Probability, pages 15â18. Springer, 1990.
[82] Armando Solar-Lezama, Liviu Tancau, Rastislav Bod´ık, Sanjit A. Seshia, and Vijay A. Saraswat. Combinatorial sketching for ï¬nite programs. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 404â415. ACM Press, October 2006.
[83] Claire Tomlin, Ian Mitchell, Alexandre M. Bayen, and Meeko Oishi. Computational techniques for the veriï¬cation of hybrid systems. Proceedings of the IEEE, 91(7):986â1001, 2003.
[84] Marcell Vazquez-Chanlatte, Jyotirmoy V. Deshmukh, Xiaoqing Jin, and Sanjit A. Seshia. Logical In 29th International Conference on Computer Aided clustering and learning for time-series data. Veriï¬cation (CAV), pages 305â325, 2017.
[85] Marcell Vazquez-Chanlatte, Susmit Jha, Ashish Tiwari, Mark K. Ho, and Sanjit A. Seshia. Learning task speciï¬cations from demonstrations. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems (NeurIPS), pages 5372â5382, Decem- ber 2018.
[86] Jeannette M Wing. A speciï¬erâs introduction to formal methods. IEEE Computer, 23(9):8â24, Septem- ber 1990.
[87] Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Van den Broeck. A semantic loss function for deep learning with symbolic knowledge. In Proceedings of the 35th International Conference on Machine Learning, (ICML), volume 80 of Proceedings of Machine Learning Research, pages 5498â 5507. PMLR, 2018.
[88] Tomoya Yamaguchi, Tomoyuki Kaga, Alexandre Donze, and Sanjit A. Seshia. Combining requirement mining, software model checking, and simulation-based veriï¬cation for industrial automotive systems. Technical Report UCB/EECS-2016-124, EECS Department, University of California, Berkeley, June 2016.
[89] Xiaojin Zhu, Adish Singla, Sandra Zilles, and Anna N Rafferty. An overview of machine teaching. arXiv preprint arXiv:1801.05927, 2018.
| { "id": "1606.06565" } |
1606.07947 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 |
# Sequence-Level Knowledge Distillation
# Yoon Kim yoonkim@seas.harvard.edu
# Alexander M. Rush srush@seas.harvard.edu
School of Engineering and Applied Sciences Harvard University Cambridge, MA, USA
# Abstract
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13× fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
# 1 Introduction

Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015) is a deep learning-based method for translation that has recently shown promising results as an alternative to statistical approaches. NMT systems directly model the probability of the next word in the target sentence simply by conditioning a recurrent neural network on the source sentence and previously generated target words.

While both simple and surprisingly accurate, NMT systems typically need to have very high capacity in order to perform well: Sutskever et al. (2014) used a 4-layer LSTM with 1000 hidden units per layer (herein 4 × 1000) and Zhou et al. (2016) obtained state-of-the-art results on English → French with a 16-layer LSTM with 512 units per layer. The sheer size of the models requires cutting-edge hardware for training and makes using the models on standard setups very challenging.

This issue of excessively large networks has been observed in several other domains, with much focus on fully-connected and convolutional networks for multi-class classification. Researchers have particularly noted that large networks seem to be necessary for training, but learn redundant representations in the process (Denil et al., 2013). Therefore compressing deep models into smaller networks has been an active area of research. As deep learning systems obtain better results on NLP tasks, compression also becomes an important practical issue with applications such as running deep learning models for speech and translation locally on cell phones.
Existing compression methods generally fall into two categories: (1) pruning and (2) knowledge distillation. Pruning methods (LeCun et al., 1990; He et al., 2014; Han et al., 2016) zero-out weights or entire neurons based on an importance criterion: LeCun et al. (1990) use (a diagonal approximation to) the Hessian to identify weights whose removal minimally impacts the objective function, while Han et al. (2016) remove weights based on thresholding their absolute values. Knowledge distillation approaches (Bucila et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015) learn a smaller student network to mimic the original teacher network by minimizing the loss (typically L2 or cross-entropy) between the student and teacher output.

In this work, we investigate knowledge distillation in the context of neural machine translation. We note that NMT differs from previous work which has mainly explored non-recurrent models in the multi-class prediction setting. For NMT, while the model is trained on multi-class prediction at the word-level, it is tasked with predicting complete sequence outputs conditioned on previous decisions. With this difference in mind, we experiment with standard knowledge distillation for NMT and also propose two new versions of the approach that attempt to approximately match the sequence-level (as opposed to word-level) distribution of the teacher network. This sequence-level approximation leads to a simple training procedure wherein the student network is trained on a newly generated dataset that is the result of running beam search with the teacher network.

We run experiments to compress a large state-of-the-art 4 × 1000 LSTM model, and find that with sequence-level knowledge distillation we are able to learn a 2 × 500 LSTM that roughly matches the performance of the full system. We see similar results compressing a 2 × 500 model down to 2 × 100 on a smaller data set. Furthermore, we observe that our proposed approach has other benefits, such as not requiring any beam search at test-time. As a result we are able to perform greedy decoding on the 2 × 500 model 10 times faster than beam search on the 4 × 1000 model with comparable performance. Our student models can even be run efficiently on a standard smartphone.1 Finally, we apply weight pruning on top of the student network to obtain a model that has 13× fewer parameters than the original teacher model. We have released all the code for the models described in this paper.2
1https://github.com/harvardnlp/nmt-android
2https://github.com/harvardnlp/seq2seq-attn
# 2 Background
# 2.1 Sequence-to-Sequence with Attention
Let s = [s_1, . . . , s_I] and t = [t_1, . . . , t_J] be (random variable sequences representing) the source/target sentence, with I and J respectively being the source/target lengths. Machine translation involves finding the most probable target sentence given the source:

argmax_{t∈T} p(t | s)

where T is the set of all possible sequences. NMT models parameterize p(t | s) with an encoder neural network which reads the source sentence and a decoder neural network which produces a distribution over the target sentence (one word at a time) given the source. We employ the attentional architecture from Luong et al. (2015), which achieved state-of-the-art results on English → German translation.3
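To make this setup concrete, here is a minimal greedy-decoding sketch; the `model` interface (`encode`, `step`, `BOS`, `EOS`) is a hypothetical stand-in for an encoder-decoder implementation, not the API of any particular NMT toolkit.

```python
import numpy as np

def greedy_decode(model, src_tokens, max_len=100):
    """Approximate argmax_t p(t | s) one word at a time (greedy decoding).

    `model` is an assumed encoder-decoder object with:
      encode(src)             -> initial decoder state
      step(state, prev_token) -> (log-prob vector over the vocabulary, next state)
    """
    state = model.encode(src_tokens)
    tokens, total_logprob = [model.BOS], 0.0
    for _ in range(max_len):
        log_probs, state = model.step(state, tokens[-1])
        next_tok = int(np.argmax(log_probs))          # greedy choice at this position
        total_logprob += float(log_probs[next_tok])   # accumulate log p(t_j | s, t_<j)
        tokens.append(next_tok)
        if next_tok == model.EOS:
            break
    return tokens[1:], total_logprob
```

Beam search generalizes this loop by keeping the K highest-scoring partial hypotheses at each step instead of a single greedy choice.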
# 2.2 Knowledge Distillation
Knowledge distillation describes a class of methods for training a smaller student network to perform better by learning from a larger teacher network (in addition to learning from the training data set). We generally assume that the teacher has previously been trained, and that we are estimating parameters for the student. Knowledge distillation suggests training by matching the student's predictions to the teacher's predictions. For classification this usually means matching the probabilities either via L2 on the log scale (Ba and Caruana, 2014) or by cross-entropy (Li et al., 2014; Hinton et al., 2015).

Concretely, assume we are learning a multi-class classifier over a data set of examples of the form (x, y) with possible classes V. The usual training criterion is to minimize NLL for each example from the training data,

L_NLL(θ) = − Σ_{k=1}^{|V|} 1{y = k} log p(y = k | x; θ)
where 1{·} is the indicator function and p the distribution from our model (parameterized by θ).
3Specifically, we use the global-general attention model with the input-feeding approach. We refer the reader to the original paper for further details.
[Figure 1 panels, left to right: Word-Level Knowledge Distillation, Sequence-Level Knowledge Distillation, Sequence-Level Interpolation.]
Figure 1: Overview of the different knowledge distillation approaches. In word-level knowledge distillation (left) cross-entropy is minimized between the student/teacher distributions (yellow) for each word in the actual target sequence (ECD), as well as between the student distribution and the degenerate data distribution, which has all of its probability mass on one word (black). In sequence-level knowledge distillation (center) the student network is trained on the output from beam search of the teacher network that had the highest score (ACF). In sequence-level interpolation (right) the student is trained on the output from beam search of the teacher network that had the highest sim with the target sequence (ECE).
This objective can be seen as minimizing the cross-entropy between the degenerate data distribution (which has all of its probability mass on one class) and the model distribution p(y | x; θ).

In knowledge distillation, we assume access to a learned teacher distribution q(y | x; θ_T), possibly trained over the same data set. Instead of minimizing cross-entropy with the observed data, we instead minimize the cross-entropy with the teacher's probability distribution,

L_KD(θ; θ_T) = − Σ_{k=1}^{|V|} q(y = k | x; θ_T) × log p(y = k | x; θ)

where θ_T parameterizes the teacher distribution and remains fixed. Note the cross-entropy setup is identical, but the target distribution is no longer a sparse distribution.4 Training on q(y | x; θ_T) is attractive since it gives more information about other classes for a given data point (e.g. similarity between classes) and has less variance in gradients (Hinton et al., 2015).

Since this new objective has no direct term for the training data, it is common practice to interpolate between the two losses,

L(θ; θ_T) = (1 − α) L_NLL(θ) + α L_KD(θ; θ_T)

where α is a mixture parameter combining the one-hot distribution and the teacher distribution.

# 3 Knowledge Distillation for NMT

The large sizes of neural machine translation systems make them an ideal candidate for knowledge distillation approaches. In this section we explore three different ways this technique can be applied to NMT.
4In some cases the entropy of the teacher/student distribution is increased by annealing it with a temperature term τ > 1: p̃(y | x) ∝ p(y | x)^{1/τ}. After testing τ ∈ {1, 1.5, 2} we found that τ = 1 worked best.
# 3.1 Word-Level Knowledge Distillation
NMT systems are trained directly to minimize word NLL, L_WORD-NLL, at each position. Therefore if we have a teacher model, standard knowledge distillation for multi-class cross-entropy can be applied. We define this distillation for a sentence as,
L_WORD-KD = − Σ_{j=1}^{J} Σ_{k=1}^{|V|} q(t_j = k | s, t_{<j}) × log p(t_j = k | s, t_{<j})

where V is the target vocabulary set. The student can further be trained to optimize the mixture of L_WORD-KD and L_WORD-NLL. In the context of NMT, we refer to this approach as word-level knowledge distillation and illustrate this in Figure 1 (left).
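As a concrete sketch of this mixture, the function below scores one target sentence given per-position student log-probabilities and teacher probabilities; the array-based interface and the α default are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def word_level_kd_loss(student_log_probs, teacher_probs, gold_ids, alpha=0.5):
    """(1 - alpha) * L_WORD-NLL + alpha * L_WORD-KD for one target sentence.

    student_log_probs: (J, |V|) array of log p(t_j = k | s, t_<j)
    teacher_probs:     (J, |V|) array of q(t_j = k | s, t_<j), held fixed
    gold_ids:          length-J integer array of observed target words y_j
    """
    J = len(gold_ids)
    nll = -student_log_probs[np.arange(J), gold_ids].sum()   # one-hot (data) term
    kd = -(teacher_probs * student_log_probs).sum()          # soft (teacher) term
    return (1 - alpha) * nll + alpha * kd
```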
# 3.2 Sequence-Level Knowledge Distillation
Word-level knowledge distillation allows transfer of these local word distributions. Ideally however, we would like the student model to mimic the teacher's actions at the sequence-level. The sequence distribution is particularly important for NMT, because wrong predictions can propagate forward at test-time.

First, consider the sequence-level distribution specified by the model over all possible sequences t ∈ T,

p(t | s) = ∏_{j=1}^{J} p(t_j | s, t_{<j})
for any length J. The sequence-level negative log-likelihood for NMT then involves matching the one-hot distribution over all complete sequences,

L_SEQ-NLL = − Σ_{t∈T} 1{t = y} log p(t | s)
          = − Σ_{j=1}^{J} Σ_{k=1}^{|V|} 1{y_j = k} log p(t_j = k | s, t_{<j})
          = L_WORD-NLL

where y = [y_1, . . . , y_J] is the observed sequence. Of course, this just shows that from a negative log likelihood perspective, minimizing word-level NLL and sequence-level NLL are equivalent in this model.
But now consider the case of sequence-level knowledge distillation. As before, we can simply replace the distribution from the data with a probability distribution derived from our teacher model. However, instead of using a single word prediction, we use q(t | s) to represent the teacher's sequence distribution over the sample space of all possible sequences,

L_SEQ-KD = − Σ_{t∈T} q(t | s) log p(t | s)

Note that L_SEQ-KD is inherently different from L_WORD-KD, as the sum is over an exponential number of terms. Despite its intractability, we posit that this sequence-level objective is worthwhile. It gives the teacher the chance to assign probabilities to complete sequences and therefore transfer a broader range of knowledge. We thus consider an approximation of this objective.
Our simplest approximation is to replace the teacher distribution q with its mode,
q(t | s) ∼ 1{t = argmax_{t∈T} q(t | s)}

Observing that finding the mode is itself intractable, we use beam search to find an approximation. The loss is then

L_SEQ-KD ≈ − Σ_{t∈T} 1{t = ŷ} log p(t | s) = − log p(t = ŷ | s)

where ŷ is now the output from running beam search with the teacher model.

Using the mode seems like a poor approximation for the teacher distribution q(t | s), as we are approximating an exponentially-sized distribution with a single sample. However, previous results showing the effectiveness of beam search decoding for NMT lead us to believe that a large portion of q's mass lies in a single output sequence. In fact, in experiments we find that with beam of size 1, q(ŷ | s) (on average) accounts for 1.3% of the distribution for German → English, and 2.3% for Thai → English (Table 1: p(t = ŷ)).5

To summarize, sequence-level knowledge distillation suggests to: (1) train a teacher model, (2) run beam search over the training set with this model, (3) train the student network with cross-entropy on this new dataset. Step (3) is identical to the word-level NLL process except now on the newly-generated data set. This is shown in Figure 1 (center).
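Steps (1)-(3) amount to a simple data-generation loop. The sketch below assumes a hypothetical `beam_search(teacher, src, K)` helper that returns hypotheses best-first as (tokens, log-score) pairs; only the 1-best output is kept as the new training target.

```python
def make_seq_kd_dataset(teacher, train_pairs, beam_size=5):
    """Relabel the source side of the training set with the teacher's
    (approximate) mode, found by beam search (step 2).

    train_pairs: iterable of (src_tokens, gold_tokens); gold targets are discarded.
    Returns (src, teacher-decoded target) pairs for step (3), which is ordinary
    word-level NLL training on the newly generated dataset.
    """
    new_pairs = []
    for src, _gold in train_pairs:
        hypotheses = beam_search(teacher, src, K=beam_size)  # assumed helper
        y_hat, _score = hypotheses[0]                        # mode approximation
        new_pairs.append((src, y_hat))
    return new_pairs
```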
5Additionally there are simple ways to better approximate q(t | s). One way would be to consider a K-best list from beam search and renormalize the probabilities,

q̃(t | s) = q(t | s) / Σ_{t′∈T_K} q(t′ | s)

where T_K is the K-best list from beam search. This would increase the training set by a factor of K. A beam of size 5 captures 2.8% of the distribution for German → English, and 3.8% for Thai → English. Another alternative is to use a Monte Carlo estimate and sample from the teacher model (since L_SEQ-KD = E_{t∼q(t | s)}[− log p(t | s)]). However in practice we found the (approximate) mode to work well.
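The renormalization in this footnote is itself a two-line computation; a small sketch, assuming the beam returns log-scores log q(t | s) for the K-best list:

```python
import numpy as np

def renormalize_k_best(log_scores):
    """q~(t | s) = q(t | s) / sum over t' in T_K of q(t' | s), in log space."""
    log_scores = np.asarray(log_scores, dtype=np.float64)
    log_z = np.logaddexp.reduce(log_scores)   # log of the total K-best mass
    return np.exp(log_scores - log_z)         # normalized distribution over the beam
```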
# 3.3 Sequence-Level Interpolation
Next we consider integrating the training data back into the process, such that we train the student model as a mixture of our sequence-level teacher-generated data (L_SEQ-KD) with the original training data (L_SEQ-NLL),

L = (1 − α) L_SEQ-NLL + α L_SEQ-KD = −(1 − α) log p(y | s) − α Σ_{t∈T} q(t | s) log p(t | s)

where y is the gold target sequence.

Since the second term is intractable, we could again apply the mode approximation from the previous section,

L = −(1 − α) log p(y | s) − α log p(ŷ | s)

and train on both observed (y) and teacher-generated (ŷ) data. However, this process is non-ideal for two reasons: (1) unlike for standard knowledge distillation, it doubles the size of the training data, and (2) it requires training on both the teacher-generated sequence and the true sequence, conditioned on the same source input. The latter concern is particularly problematic since we observe that y and ŷ are often quite different.

As an alternative, we propose a single-sequence approximation that is more attractive in this setting. This approach is inspired by local updating (Liang et al., 2006), a method for discriminative training in statistical machine translation (although to our knowledge not for knowledge distillation). Local updating suggests selecting a training sequence which is close to y and has high probability under the teacher model,

ỹ = argmax_{t∈T} sim(t, y) q(t | s)
where sim is a function measuring closeness (e.g. Jaccard similarity or BLEU (Papineni et al., 2002)). Following local updating, we can approximate this sequence by running beam search and choosing
ỹ ≈ argmax_{t∈T_K} sim(t, y)
where TK is the K-best list from beam search. We take sim to be smoothed sentence-level BLEU (Chen and Cherry, 2014).
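Selecting the training sequence from the beam is then a single argmax over sentence-level similarity. The sketch below uses NLTK's smoothed sentence-level BLEU as one concrete choice of sim, and assumes the K-best list holds (tokens, score) pairs as in the earlier sketch; none of this is tied to the paper's actual code.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

_smooth = SmoothingFunction().method1  # one of several standard smoothing options

def select_interpolation_target(k_best, gold_tokens):
    """Pick the hypothesis on the beam with the highest sim(t, y),
    where sim is smoothed sentence-level BLEU against the gold sequence."""
    def sim(hyp_tokens):
        return sentence_bleu([gold_tokens], hyp_tokens, smoothing_function=_smooth)
    return max((tokens for tokens, _score in k_best), key=sim)
```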
We justify training on ỹ from a knowledge distillation perspective with the following generative process: suppose that there is a true target sequence (which we do not observe) that is first generated from the underlying data distribution D. And further suppose that the target sequence that we observe (y) is a noisy version of the unobserved true sequence: i.e. (i) t ∼ D, (ii) y ∼ ε(t), where ε(t) is, for example, a noise function that independently replaces each element in t with a random element in V with some small probability.6 In such a case, ideally the student's distribution should match the mixture distribution,

D_SEQ-Inter ∼ (1 − α) D + α q(t | s)

In this setting, due to the noise assumption, D now has significant probability mass around a neighborhood of y (not just at y), and therefore the argmax of the mixture distribution is likely something other than y (the observed sequence) or ŷ (the output from beam search). We can see that ỹ is a natural approximation to the argmax of this mixture distribution between D and q(t | s) for some α. We illustrate this framework in Figure 1 (right) and visualize the distribution over a real example in Figure 2.
# 4 Experimental Setup
To test out these approaches, we conduct two sets of NMT experiments: high resource (English → German) and low resource (Thai → English).
The English-German data comes from WMT 2014.7 The training set has 4m sentences and we take newstest2012/newstest2013 as the dev set and newstest2014 as the test set. We keep the top 50k most frequent words, and replace the rest with UNK. The teacher model is a 4 × 1000 LSTM (as in Luong et al. (2015)) and we train two student models: 2 × 300 and 2 × 500. The Thai-English data comes from IWSLT 2015.8 There are 90k sentences in the training set and we take 2010/2011/2012 data as the dev set and 2012/2013 as the test set, with a vocabulary size of 25k. The size of the teacher model is 2 × 500 (which performed better than 4 × 1000 and 2 × 750 models), and the student model is 2 × 100. Other training details mirror Luong et al. (2015).

6While we employ a simple (unrealistic) noise function for illustrative purposes, the generative story is quite plausible if we consider a more elaborate noise function which includes additional sources of noise such as phrase reordering, replacement of words with synonyms, etc. One could view translation as having two sources of variance that should be modeled separately: variance due to the source sentence (t ∼ D), and variance due to the individual translator (y ∼ ε(t)).

7http://statmt.org/wmt14
8https://sites.google.com/site/iwsltevaluation2015/mt-track

Figure 2: Visualization of sequence-level interpolation on an example German → English sentence: Bis 15 Tage vor Anreise sind Zimmer-Annullationen kostenlos. We run beam search, plot the final hidden state of the hypotheses using t-SNE and show the corresponding (smoothed) probabilities with contours. In the above example, the sentence that is at the top of the beam after beam search (green) is quite far away from gold (red), so we train the model on a sentence that is on the beam but had the highest sim (e.g. BLEU) to gold (purple).
We evaluate on tokenized BLEU with multi-bleu.perl, and experiment with the following variations:

Word-Level Knowledge Distillation (Word-KD) Student is trained on the original data and additionally trained to minimize the cross-entropy of the teacher distribution at the word-level. We tested α ∈ {0.5, 0.9} and found α = 0.5 to work better.

Sequence-Level Knowledge Distillation (Seq-KD) Student is trained on the teacher-generated data, which is the result of running beam search and taking the highest-scoring sequence with the teacher model. We use beam size K = 5 (we did not see improvements with a larger beam).

Sequence-Level Interpolation (Seq-Inter) Student is trained on the sequence on the teacher's beam that had the highest BLEU (beam size K = 35). We adopt a fine-tuning approach where we begin training from a pretrained model (either on original data or Seq-KD data) and train with a smaller learning rate (0.1). For English-German we generate Seq-Inter data on a smaller portion of the training set (∼ 50%) for efficiency.

The above methods are complementary and can be combined with each other. For example, we can train on teacher-generated data but still include a word-level cross-entropy term between the teacher/student (Seq-KD + Word-KD in Table 1), or fine-tune towards Seq-Inter data starting from the baseline model trained on original data (Baseline + Seq-Inter in Table 1).9
# 5 Results and Discussion
Results of our experiments are shown in Table 1. We find that while word-level knowledge distillation (Word-KD) does improve upon the baseline, sequence-level knowledge distillation (Seq-KD) does better on English → German and performs similarly on Thai → English. Combining them (Seq-KD + Word-KD) results in further gains for the 2 × 300 and 2 × 100 models (although not for the 2 × 500 model), indicating that these methods provide orthogonal means of transferring knowledge from the teacher to the student: Word-KD is transferring knowledge at the local (i.e. word) level while Seq-KD is transferring knowledge at the global (i.e. sequence) level.

Sequence-level interpolation (Seq-Inter), in addition to improving models trained via Word-KD and Seq-KD, also improves upon the original teacher model that was trained on the actual data but fine-tuned towards Seq-Inter data (Baseline + Seq-Inter). In fact, greedy decoding with this fine-tuned model has similar performance (19.6) as beam search with the original model (19.5), allowing for faster decoding even with an identically-sized model.

We hypothesize that sequence-level knowledge distillation is effective because it allows the student network to only model relevant parts of the teacher distribution (i.e. around the teacher's mode) instead of "wasting" parameters on trying to model the entire

9For instance, "Seq-KD + Seq-Inter + Word-KD" in Table 1 means that the model was trained on Seq-KD data and fine-tuned towards Seq-Inter data with the mixture cross-entropy loss at the word-level.
| Model | BLEU_K=1 | Δ_K=1 | BLEU_K=5 | Δ_K=5 | PPL | p(t = ŷ) |
|---|---|---|---|---|---|---|
| **English → German: Teacher Baseline 4 × 1000** | | | | | | |
| Baseline | 17.7 | — | 19.5 | — | 6.7 | 1.3% |
| Baseline + Seq-Inter | 19.6 | +1.9 | 19.8 | +0.3 | 10.4 | 8.2% |
| **English → German: Student 2 × 500** | | | | | | |
| Baseline | 14.7 | — | 17.6 | — | 8.2 | 0.9% |
| Word-KD | 15.4 | +0.7 | 17.7 | +0.1 | 8.0 | 1.0% |
| Seq-KD | 18.9 | +4.2 | 19.0 | +1.4 | 22.7 | 16.9% |
| Baseline + Seq-Inter | 18.5 | +3.6 | 18.7 | +1.1 | 11.3 | 5.7% |
| Word-KD + Seq-Inter | 18.3 | +3.6 | 18.5 | +0.9 | 11.8 | 6.3% |
| Seq-KD + Seq-Inter | 18.9 | +4.2 | 19.3 | +1.7 | 15.8 | 7.6% |
| Seq-KD + Word-KD | 18.7 | +4.0 | 18.9 | +1.3 | 10.9 | 4.1% |
| Seq-KD + Seq-Inter + Word-KD | 18.8 | +4.1 | 19.2 | +1.6 | 14.8 | 7.1% |
| **English → German: Student 2 × 300** | | | | | | |
| Baseline | 14.1 | — | 16.9 | — | 10.3 | 0.6% |
| Word-KD | 14.9 | +0.8 | 17.6 | +0.7 | 10.9 | 0.7% |
| Seq-KD | 18.1 | +4.0 | 18.1 | +1.2 | 64.4 | 14.8% |
| Baseline + Seq-Inter | 17.6 | +3.5 | 17.9 | +1.0 | 13.0 | 10.0% |
| Word-KD + Seq-Inter | 17.8 | +3.7 | 18.0 | +1.1 | 14.5 | 4.3% |
| Seq-KD + Seq-Inter | 18.2 | +4.1 | 18.5 | +1.6 | 40.8 | 5.6% |
| Seq-KD + Word-KD | 17.9 | +3.8 | 18.8 | +1.9 | 44.1 | 3.1% |
| Seq-KD + Seq-Inter + Word-KD | 18.5 | +4.4 | 18.9 | +2.0 | 97.1 | 5.9% |
| **Thai → English: Teacher Baseline 2 × 500** | | | | | | |
| Baseline | 14.3 | — | 15.7 | — | 22.9 | 2.3% |
| Baseline + Seq-Inter | 15.6 | +1.3 | 16.0 | +0.3 | 55.1 | 6.8% |
| **Thai → English: Student 2 × 100** | | | | | | |
| Baseline | 10.6 | — | 12.7 | — | 37.0 | 1.4% |
| Word-KD | 11.8 | +1.2 | 13.6 | +0.9 | 35.3 | 1.4% |
| Seq-KD | 12.8 | +2.2 | 13.4 | +0.7 | 125.4 | 6.9% |
| Baseline + Seq-Inter | 12.9 | +2.3 | 13.1 | +0.4 | 52.8 | 2.5% |
| Word-KD + Seq-Inter | 13.0 | +2.4 | 13.7 | +1.0 | 58.7 | 3.2% |
| Seq-KD + Seq-Inter | 13.6 | +3.0 | 14.0 | +1.3 | 106.4 | 3.9% |
| Seq-KD + Word-KD | 13.7 | +3.1 | 14.2 | +1.5 | 67.4 | 3.1% |
| Seq-KD + Seq-Inter + Word-KD | 14.2 | +3.6 | 14.4 | +1.7 | 117.4 | 3.2% |

Table 1: Results on English-German (newstest2014) and Thai-English (2012/2013) test sets. BLEU_K=1: BLEU score with beam size K = 1 (i.e. greedy decoding); Δ_K=1: BLEU gain over the baseline model without any knowledge distillation with greedy decoding; BLEU_K=5: BLEU score with beam size K = 5; Δ_K=5: BLEU gain over the baseline model without any knowledge distillation with beam size K = 5; PPL: perplexity on the test set; p(t = ŷ): probability of output sequence from greedy decoding (averaged over the test set). Params: number of parameters in the model. Best results (as measured by improvement over the baseline) are highlighted in bold.
space of translations. Our results suggest that this is indeed the case: the probability mass that Seq-KD models assign to the approximate mode is much higher than is the case for baseline models trained on original data (Table 1: p(t = ŷ)). For example, on English → German the (approximate) argmax for the 2 × 500 Seq-KD model (on average) accounts for 16.9% of the total probability mass, while the corresponding number is 0.9% for the baseline.

This also explains the success of greedy decoding for Seq-KD models: since we are only modeling around the teacher's mode, the student's distribution is more peaked and therefore the argmax is much easier to find. Seq-Inter offers a compromise between the two, with the greedily-decoded sequence accounting for 7.6% of the distribution.

Finally, although past work has shown that models with lower perplexity generally tend to have
| Beam | Model Size | GPU | CPU | Android |
|---|---|---|---|---|
| Beam = 1 (Greedy) | 4 × 1000 | 425.5 | 15.0 | — |
| | 2 × 500 | 1051.3 | 63.6 | 8.8 |
| | 2 × 300 | 1267.8 | 104.3 | 15.8 |
| Beam = 5 | 4 × 1000 | 101.9 | 7.9 | — |
| | 2 × 500 | 181.9 | 22.1 | 1.9 |
| | 2 × 300 | 189.1 | 38.4 | 3.4 |

Table 2: Number of source words translated per second across GPU (GeForce GTX Titan X), CPU, and smartphone (Samsung Galaxy 6) for the various English → German models. We were unable to open the 4 × 1000 model on the smartphone.
higher BLEU, our results indicate that this is not necessarily the case. The perplexity of the baseline 2 × 500 English → German model is 8.2 while the perplexity of the corresponding Seq-KD model is 22.7, despite the fact that the Seq-KD model does significantly better for both greedy (+4.2 BLEU) and beam search (+1.4 BLEU) decoding.

# 5.1 Decoding Speed

Run-time complexity for beam search grows linearly with beam size. Therefore, the fact that sequence-level knowledge distillation allows for greedy decoding is significant, with practical implications for running NMT systems across various devices. To test the speed gains, we run the teacher/student models on GPU, CPU, and smartphone, and check the average number of source words translated per second (Table 2). We use a GeForce GTX Titan X for GPU and a Samsung Galaxy 6 smartphone. We find that we can run the student model 10 times faster with greedy decoding than the teacher model with beam search on GPU (1051.3 vs 101.9 words/sec), with similar performance.
# 5.2 Weight Pruning
Although knowledge distillation enables training faster models, the number of parameters for the student models is still somewhat large (Table 1: Params), due to the word embeddings which dominate most of the parameters.10 For example, on the
10Word embeddings scale linearly while RNN parameters scale quadratically with the dimension size.
| Model | Prune % | Params | BLEU | Ratio |
|---|---|---|---|---|
| 4 × 1000 | 0% | 221 m | 19.5 | 1× |
| 2 × 500 | 0% | 84 m | 19.3 | 3× |
| 2 × 500 | 50% | 42 m | 19.3 | 5× |
| 2 × 500 | 80% | 17 m | 19.1 | 13× |
| 2 × 500 | 85% | 13 m | 18.8 | 18× |
| 2 × 500 | 90% | 8 m | 18.5 | 26× |

Table 3: Performance of student models with varying % of the weights pruned. Top two rows are models without any pruning. Params: number of parameters in the model; Prune %: percentage of weights pruned based on their absolute values; BLEU: BLEU score with beam search decoding (K = 5) after retraining the pruned model; Ratio: ratio of the number of parameters versus the original teacher model (which has 221m parameters).
2 à 500 English â German model the word em- beddings account for approximately 63% (50m out of 84m) of the parameters. The size of word em- beddings have little impact on run-time as the word embedding layer is a simple lookup table that only affects the ï¬rst layer of the model.
We therefore focus next on reducing the mem- ory footprint of the student models further through weight pruning. Weight pruning for NMT was re- cently investigated by See et al. (2016), who found that up to 80 â 90% of the parameters in a large NMT model can be pruned with little loss in perfor- mance. We take our best English â German student model (2 à 500 Seq-KD + Seq-Inter) and prune x% of the parameters by removing the weights with the lowest absolute values. We then retrain the pruned model on Seq-KD data with a learning rate of 0.2 and ï¬ne-tune towards Seq-Inter data with a learning rate of 0.1. As observed by See et al. (2016), re- training proved to be crucial. The results are shown in Table 3.
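A minimal sketch of this magnitude-based pruning step, assuming the model's weights are available as a dict of NumPy arrays; the subsequent retraining is the usual training loop and is not shown.

```python
import numpy as np

def magnitude_prune(params, prune_fraction=0.8):
    """Zero out the prune_fraction of weights with the smallest absolute values.

    params: dict mapping parameter names to NumPy weight arrays.
    A single global threshold (the prune_fraction percentile of |w|) is used
    across all matrices; returns a pruned copy.
    """
    all_weights = np.concatenate([w.ravel() for w in params.values()])
    threshold = np.percentile(np.abs(all_weights), 100.0 * prune_fraction)

    pruned = {}
    for name, w in params.items():
        mask = np.abs(w) >= threshold        # keep only large-magnitude weights
        pruned[name] = w * mask
    return pruned
```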
Our findings suggest that compression benefits achieved through weight pruning and knowledge distillation are orthogonal.11 Pruning 80% of the weights in the 2 × 500 student model results in a model with 13× fewer parameters than the original teacher model with only a decrease of 0.4 BLEU. While pruning 90% of the weights results in a more appreciable decrease of 1.0 BLEU, the model is drastically smaller with 8m parameters, which is 26× fewer than the original teacher model.

11To our knowledge combining pruning and knowledge distillation has not been investigated before.
# 5.3 Further Observations
⢠For models trained with word-level knowledge distillation, we also tried regressing the student networkâs top-most hidden layer at each time step to the teacher networkâs top-most hidden layer as a pretraining step, noting that Romero et al. (2015) obtained improvements with a similar technique on feed-forward models. We found this to give comparable results to stan- dard knowledge distillation and hence did not pursue this further.
⢠There have been promising recent results on eliminating word embeddings completely and obtaining word representations directly from characters with character composition models, which have many fewer parameters than word embedding lookup tables (Ling et al., 2015a; Kim et al., 2016; Ling et al., 2015b; Jozefowicz et al., 2016; Costa-Jussa and Fonollosa, 2016). Combining such methods with knowledge dis- tillation/pruning to further reduce the memory footprint of NMT systems remains an avenue for future work.
# 6 Related Work
Compressing deep learning models is an active area of current research. Pruning methods involve pruning weights or entire neurons/nodes based on some criterion. LeCun et al. (1990) prune weights based on an approximation of the Hessian, while Han et al. (2016) show that a simple magnitude-based pruning works well. Prior work on removing neurons/nodes includes Srinivas and Babu (2015) and Mariet and Sra (2016). See et al. (2016) were the first to apply pruning to Neural Machine Translation, observing that different parts of the architecture (input word embeddings, LSTM matrices, etc.) admit different levels of pruning. Knowledge distillation approaches train a smaller student model to mimic a larger teacher model, by minimizing the loss between the teacher/student predictions (Bucila et al., 2006; Ba and Caruana, 2014; Li et al., 2014; Hinton et al., 2015). Romero et al. (2015) additionally regress on the intermediate hidden layers of the student/teacher network as a pretraining step, while Mou et al. (2015) obtain smaller word embeddings from a teacher model via regression. There has also been work on transferring knowledge across different network architectures: Chan et al. (2015b) show that a deep non-recurrent neural network can learn from an RNN; Geras et al. (2016) train a CNN to mimic an LSTM for speech recognition. Kuncoro et al. (2016) recently investigated knowledge distillation for structured prediction by having a single parser learn from an ensemble of parsers.

Other approaches for compression involve low rank factorizations of weight matrices (Denton et al., 2014; Jaderberg et al., 2014; Lu et al., 2016; Prabhavalkar et al., 2016), sparsity-inducing regularizers (Murray and Chiang, 2015), binarization of weights (Courbariaux et al., 2016; Lin et al., 2016), and weight sharing (Chen et al., 2015; Han et al., 2016). Finally, although we have motivated sequence-level knowledge distillation in the context of training a smaller model, there are other techniques that train on a mixture of the model's predictions and the data, such as local updating (Liang et al., 2006), hope/fear training (Chiang, 2012), SEARN (Daumé III et al., 2009), DAgger (Ross et al., 2011), and minimum risk training (Och, 2003; Shen et al., 2016).
# 7 Conclusion
In this work we have investigated existing knowledge distillation methods for NMT (which work at the word-level) and introduced two sequence-level variants of knowledge distillation, which provide improvements over standard word-level knowledge distillation.

We have chosen to focus on translation as this domain has generally required the largest capacity deep learning models, but the sequence-to-sequence framework has been successfully applied to a wide range of tasks including parsing (Vinyals et al., 2015a), summarization (Rush et al., 2015), dialogue (Vinyals and Le, 2015; Serban et al., 2016; Li et al., 2016), NER/POS-tagging (Gillick et al., 2016), image captioning (Vinyals et al., 2015b; Xu et al., 2015), video generation (Srivastava et al., 2015), and speech recognition (Chan et al., 2015a). We anticipate that methods described in this paper can be used to similarly train smaller models in other domains.
# References
[Ba and Caruana2014] Lei Jimmy Ba and Rich Caruana. 2014. Do Deep Nets Really Need to be Deep? In Proceedings of NIPS.
[Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of ICLR.
[Bucila et al.2006] Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model Compres- sion. In Proceedings of KDD.
[Chan et al.2015a] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2015a. Listen, Attend and Spell. arXiv:1508.01211.
[Chan et al.2015b] William Chan, Nan Rosemary Ke, and Ian Lane. 2015b. Transferring Knowledge from a RNN to a DNN. arXiv:1504.01483.
[Chen and Cherry2014] Boxing Chen and Colin Cherry. 2014. A Systematic Comparison of Smoothing Tech- niques for Sentence-Level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Transla- tion.
[Chen et al.2015] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. 2015. Compressing Neural Networks with the Hashing Trick. In Proceedings of ICML.
[Chiang2012] David Chiang. 2012. Hope and Fear for Discriminative Training of Statistical Translation Models. In JMLR.
[Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of EMNLP.
[Costa-Jussa and Fonollosa2016] Marta R. Costa-Jussa and Jose A.R. Fonollosa. 2016. Character-based Neural Machine Translation. arXiv:1603.00810.

[Courbariaux et al.2016] Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1. arXiv:1602.02830.

[Daumé III et al.2009] Hal Daumé III, John Langford, and Daniel Marcu. 2009. Search-based Structured Prediction. Machine Learning.

[Denil et al.2013] Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. 2013. Predicting Parameters in Deep Learning. In Proceedings of NIPS.
[Denton et al.2014] Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting Linear Structure within Convolutional Neural Networks for Efficient Evaluation. In Proceedings of NIPS.
[Geras et al.2016] Krzysztof J. Geras, Abdelrahman Mohamed, Rich Caruana, Gregor Urban, Shengjie Wang, Ozlem Aslan, Matthai Philipose, Matthew Richardson, and Charles Sutton. 2016. Blending LSTMs into CNNs. In Proceedings of ICLR Workshop.
[Gillick et al.2016] Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilin- gual Language Processing from Bytes. In Proceedings of NAACL.
[Han et al.2016] Song Han, Huizi Mao, and William J. Dally. 2016. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In Proceedings of ICLR.
[He et al.2014] Tianxing He, Yuchen Fan, Yanmin Qian, Tian Tan, and Kai Yu. 2014. Reshaping Deep Neu- ral Network for Fast Decoding by Node-Pruning. In Proceedings of ICASSP.
[Hinton et al.2015] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the Knowledge in a Neural Network. arXiv:1503.02531.
[Jaderberg et al.2014] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. 2014. Speeding up Convolutional Neural Networks with Low Rank Expansions. In BMVC.
[Jozefowicz et al.2016] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv:1602.02410.
[Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Transla- tion Models. In Proceedings of EMNLP.
[Kim et al.2016] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-Aware Neural Language Models. In Proceedings of AAAI.

[Kuncoro et al.2016] Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an Ensemble of Greedy Dependency Parsers into One MST Parser. In Proceedings of EMNLP.
[LeCun et al.1990] Yann LeCun, John S. Denker, and Sara A. Solla. 1990. Optimal Brain Damage. In Pro- ceedings of NIPS.
[Li et al.2014] Jinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. 2014. Learning Small-Size DNN with Output-Distribution-Based Criteria. In Proceedings of INTERSPEECH.
[Li et al.2016] Jiwei Li, Michael Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A Diversity-Promoting Objective Function for Neural Conversational Models. In Proceedings of NAACL 2016.
[Liang et al.2006] Percy Liang, Alexandre Bouchard-Côté, Dan Klein, and Ben Taskar. 2006. An End-to-End Discriminative Approach to Machine Translation. In Proceedings of COLING-ACL.

[Lin et al.2016] Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. 2016. Neural Networks with Few Multiplications. In Proceedings of ICLR.
[Ling et al.2015a] Wang Ling, Tiago Luís, Luis Marujo, Ramon Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. In Proceedings of EMNLP.

[Ling et al.2015b] Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015b. Character-based Neural Machine Translation. arXiv:1511.04586.

[Lu et al.2016] Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. 2016. Learning Compact Recurrent Neural Networks. In Proceedings of ICASSP.
[Luong et al.2015] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of EMNLP.
[Mariet and Sra2016] Zelda Mariet and Suvrit Sra. 2016. Diversity Networks. In Proceedings of ICLR.
[Mou et al.2015] Lili Mou, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Distilling Word Embeddings: An En- coding Approach. arXiv:1506.04488.
[Murray and Chiang2015] Kenton Murray and David Chiang. 2015. Auto-sizing Neural Networks: With Applications to N-Gram Language Models. In Proceedings of EMNLP.

[Och2003] Franz J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of ACL.

[Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of ACL.

[Prabhavalkar et al.2016] Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, and Ian McGraw. 2016. On the Compression of Recurrent Neural Networks with an Application to LVCSR Acoustic Modeling for Embedded Speech Recognition. In Proceedings of ICASSP.
[Romero et al.2015] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2015. FitNets: Hints for Thin Deep Nets. In Proceedings of ICLR.
[Ross et al.2011] Stephane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A Reduction of Imitation Learn- ing and Structured Prediction to No-Regret Online Learning. In Proceedings of AISTATS.
[Rush et al.2015] Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of EMNLP.
[See et al.2016] Abigail See, Minh-Thang Luong, and Christopher D. Manning. 2016. Compression of Neu- ral Machine Translation via Pruning. In Proceedings of CoNLL.
[Serban et al.2016] Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building End-to-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In Proceedings of AAAI.

[Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum Risk Training for Neural Machine Translation. In Proceedings of ACL.
[Srinivas and Babu2015] Suraj Srinivas and R. Venkatesh Babu. 2015. Data-free Parameter Pruning for Deep Neural Networks. BMVC.
[Srivastava et al.2015] Nitish Srivastava, Elman Mansi- mov, and Ruslan Salakhutdinov. 2015. Unsupervised Learning of Video Representations using LSTMs. Proceedings of ICML.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to Sequence Learning with Neural Networks. In Proceedings of NIPS.
[Vinyals and Le2015] Oriol Vinyals and Quoc Le. 2015. In Proceedings of A Neural Conversational Model. ICML Deep Learning Workshop.
[Vinyals et al.2015a] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015a. Grammar as a Foreign Language. In Proceedings of NIPS.
[Vinyals et al.2015b] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and Tell: A Neural Image Caption Generator. In Proceed- ings of CVPR.
[Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In Proceedings of ICML.

[Zhou et al.2016] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation. In Proceedings of TACL. | { "id": "1506.04488" } |
1606.06565 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 |
# Concrete Problems in AI Safety
# Dario Amodei* Google Brain

# Chris Olah* Google Brain
# Jacob Steinhardt Stanford University
# Paul Christiano UC Berkeley
# John Schulman OpenAI
Dan Mané Google Brain
# Abstract
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
# 1 Introduction
The last few years have seen rapid progress on long-standing, difficult problems in machine learning and artificial intelligence (AI), in areas as diverse as computer vision [82], video game playing [102], autonomous vehicles [86], and Go [140]. These advances have brought excitement about the positive potential for AI to transform medicine [126], science [59], and transportation [86], along with concerns about the privacy [76], security [115], fairness [3], economic [32], and military [16] implications of autonomous systems, as well as concerns about the longer-term implications of powerful AI [27, 167].

The authors believe that AI technologies are likely to be overwhelmingly beneficial for humanity, but we also believe that it is worth giving serious thought to potential challenges and risks. We strongly support work on privacy, security, fairness, economics, and policy, but in this document we discuss another class of problem which we believe is also relevant to the societal impacts of AI: the problem of accidents in machine learning systems. We define accidents as unintended and harmful behavior that may emerge from machine learning systems when we specify the wrong objective function, are not careful about the learning process, or commit other machine learning-related implementation errors.

*These authors contributed equally.
There is a large and diverse literature in the machine learning community on issues related to accidents, including robustness, risk-sensitivity, and safe exploration; we review these in detail below. However, as machine learning systems are deployed in increasingly large-scale, autonomous, open-domain situations, it is worth reflecting on the scalability of such approaches and understanding what challenges remain to reducing accident risk in modern machine learning systems. Overall, we believe there are many concrete open technical problems relating to accident prevention in machine learning systems.

There has been a great deal of public discussion around accidents. To date much of this discussion has highlighted extreme scenarios such as the risk of misspecified objective functions in superintelligent agents [27]. However, in our opinion one need not invoke these extreme scenarios to productively discuss accidents, and in fact doing so can lead to unnecessarily speculative discussions that lack precision, as noted by some critics [38, 85]. We believe it is usually most productive to frame accident risk in terms of practical (though often quite general) issues with modern ML techniques. As AI capabilities advance and as AI systems take on increasingly important societal functions, we expect the fundamental challenges discussed in this paper to become increasingly important. The more successfully the AI and machine learning communities are able to anticipate and understand these fundamental technical challenges, the more successful we will ultimately be in developing increasingly useful, relevant, and important AI systems.

Our goal in this document is to highlight a few concrete safety problems that are ready for experimentation today and relevant to the cutting edge of AI systems, as well as reviewing existing literature on these problems. In Section 2, we frame mitigating accident risk (often referred to as "AI safety" in public discussions) in terms of classic methods in machine learning, such as supervised classification and reinforcement learning. We explain why we feel that recent directions in machine learning, such as the trend toward deep reinforcement learning and agents acting in broader environments, suggest an increasing relevance for research around accidents. In Sections 3-7, we explore five concrete problems in AI safety. Each section is accompanied by proposals for relevant experiments. Section 8 discusses related efforts, and Section 9 concludes.
# 2 Overview of Research Problems
Very broadly, an accident can be described as a situation where a human designer had in mind a certain (perhaps informally specified) objective or task, but the system that was designed and deployed for that task produced harmful and unexpected results. This issue arises in almost any engineering discipline, but may be particularly important to address when building AI systems [146]. We can categorize safety problems according to where in the process things went wrong.
First, the designer may have specified the wrong formal objective function, such that maximizing that objective function leads to harmful results, even in the limit of perfect learning and infinite data. Negative side effects (Section 3) and reward hacking (Section 4) describe two broad mechanisms that make it easy to produce wrong objective functions. In "negative side effects", the designer specifies an objective function that focuses on accomplishing some specific task in the environment, but ignores other aspects of the (potentially very large) environment, and thus implicitly expresses indifference over environmental variables that might actually be harmful to change. In "reward hacking", the objective function that the designer writes down admits some clever "easy" solution that formally maximizes it but perverts the spirit of the designer's intent (i.e. the objective function can be "gamed"), a generalization of the wireheading problem.
Second, the designer may know the correct objective function, or at least have a method of evaluating it (for example explicitly consulting a human on a given situation), but it is too expensive to do so frequently, leading to possible harmful behavior caused by bad extrapolations from limited samples. "Scalable oversight" (Section 5) discusses ideas for how to ensure safe behavior even given limited access to the true objective function.
Third, the designer may have specified the correct formal objective, such that we would get the correct behavior were the system to have perfect beliefs, but something bad occurs due to making decisions from insufficient or poorly curated training data or an insufficiently expressive model. "Safe exploration" (Section 6) discusses how to ensure that exploratory actions in RL agents don't lead to negative or irrecoverable consequences that outweigh the long-term value of exploration. "Robustness to distributional shift" (Section 7) discusses how to avoid having ML systems make bad decisions (particularly silent and unpredictable bad decisions) when given inputs that are potentially very different from what was seen during training.
For concreteness, we will illustrate many of the accident risks with reference to a fictional robot whose job is to clean up messes in an office using common cleaning tools. We return to the example of the cleaning robot throughout the document, but here we begin by illustrating how it could behave undesirably if its designers fall prey to each of the possible failure modes:
⢠Avoiding Negative Side Eï¬ects: How can we ensure that our cleaning robot will not disturb the environment in negative ways while pursuing its goals, e.g. by knocking over a vase because it can clean faster by doing so? Can we do this without manually specifying everything the robot should not disturb?
⢠Avoiding Reward Hacking: How can we ensure that the cleaning robot wonât game its reward function? For example, if we reward the robot for achieving an environment free of messes, it might disable its vision so that it wonât ï¬nd any messes, or cover over messes with materials it canât see through, or simply hide when humans are around so they canât tell it about new types of messes.
⢠Scalable Oversight: How can we eï¬ciently ensure that the cleaning robot respects aspects of the objective that are too expensive to be frequently evaluated during training? For instance, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone (it should handle stray candy wrappers diï¬erently from stray cellphones). Asking the humans involved whether they lost anything can serve as a check on this, but this check might have to be relatively infrequentâcan the robot ï¬nd a way to do the right thing despite limited information?
⢠Safe Exploration: How do we ensure that the cleaning robot doesnât make exploratory moves with very bad repercussions? For example, the robot should experiment with mopping strategies, but putting a wet mop in an electrical outlet is a very bad idea.
⢠Robustness to Distributional Shift: How do we ensure that the cleaning robot recognizes, and behaves robustly, when in an environment diï¬erent from its training environment? For example, strategies it learned for cleaning an oï¬ce might be dangerous on a factory workï¬oor.
There are several trends which we believe point towards an increasing need to address these (and other) safety problems. First is the increasing promise of reinforcement learning (RL), which allows agents to have a highly intertwined interaction with their environment. Some of our research problems only make sense in the context of RL, and others (like distributional shift and scalable oversight) gain added complexity in an RL setting. Second is the trend toward more complex agents and environments. "Side effects" are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future. Third is the general trend towards increasing autonomy in AI systems. Systems that simply output a recommendation to human users, such as speech systems, typically have relatively limited potential to cause harm. By contrast, systems that exert direct control over the world, such as machines controlling industrial processes, can cause harms in a way that humans cannot necessarily correct or oversee.
While safety problems can exist without any of these three trends, we consider each trend to be a possible amplifier on such challenges. Together, we believe these trends suggest an increasing role for research on accidents.
When discussing the problems in the remainder of this document, we will focus for concreteness on either RL agents or supervised learning systems. These are not the only possible paradigms for AI or ML systems, but we believe they are sufficient to illustrate the issues we have in mind, and that similar issues are likely to arise for other kinds of AI systems.
Finally, the focus of our discussion will differ somewhat from section to section. When discussing the problems that arise as part of the learning process (distributional shift and safe exploration), where there is a sizable body of prior work, we devote substantial attention to reviewing this prior work, although we also suggest open problems with a particular focus on emerging ML systems. When discussing the problems that arise from having the wrong objective function (reward hacking and side effects, and to a lesser extent scalable oversight), where less prior work exists, our aim is more exploratory: we seek to more clearly define the problem and suggest possible broad avenues of attack, with the understanding that these avenues are preliminary ideas that have not been fully fleshed out. Of course, we still review prior work in these areas, and we draw attention to relevant adjacent areas of research whenever possible.
# 3 Avoiding Negative Side Effects
Suppose a designer wants an RL agent (for example our cleaning robot) to achieve some goal, like moving a box from one side of a room to the other. Sometimes the most effective way to achieve the goal involves doing something unrelated and destructive to the rest of the environment, like knocking over a vase of water that is in its path. If the agent is given reward only for moving the box, it will probably knock over the vase.
If we're worried in advance about the vase, we can always give the agent negative reward for knocking it over. But what if there are many different kinds of "vase": many disruptive things the agent could do to the environment, like shorting out an electrical socket or damaging the walls of the room? It may not be feasible to identify and penalize every possible disruption.
More broadly, for an agent operating in a large, multifaceted environment, an objective function that focuses on only one aspect of the environment may implicitly express indifference over other aspects of the environment.¹ An agent optimizing this objective function might thus engage in major disruptions of the broader environment if doing so provides even a tiny advantage for the task at hand. Put differently, objective functions that formalize "perform task X" may frequently give undesired results, because what the designer really should have formalized is closer to "perform task X subject to common-sense constraints on the environment," or perhaps "perform task X but avoid side effects to the extent possible." Furthermore, there is reason to expect side effects to be negative on average, since they tend to disrupt the wider environment away from a status quo state that may reflect human preferences. A version of this problem has been discussed informally by [13] under the heading of "low impact agents."

¹Intuitively, this seems related to the frame problem, an obstacle in efficient specification for knowledge representation raised by [95].
As with the other sources of mis-specified objective functions discussed later in this paper, we could choose to view side effects as idiosyncratic to each individual task, i.e. as the responsibility of each individual designer to capture as part of designing the correct objective function. However, side effects can be conceptually quite similar even across highly diverse tasks (knocking over furniture is probably bad for a wide variety of tasks), so it seems worth trying to attack the problem in generality. A successful approach might be transferable across tasks, and thus help to counteract one of the general mechanisms that produces wrong objective functions. We now discuss a few broad approaches to attacking this problem:
⢠Deï¬ne an Impact Regularizer: If we donât want side eï¬ects, it seems natural to penalize âchange to the environment.â This idea wouldnât be to stop the agent from ever having an impact, but give it a preference for ways to achieve its goals with minimal side eï¬ects, or to give the agent a limited âbudgetâ of impact. The challenge is that we need to formalize âchange to the environment.â
A very naive approach would be to penalize the state distance d(s_i, s_0) between the present state s_i and some initial state s_0. Unfortunately, such an agent wouldn't just avoid changing the environment; it would resist any other source of change, including the natural evolution of the environment and the actions of any other agents!
A slightly more sophisticated approach might involve comparing the future state under the agent's current policy to the future state (or distribution over future states) under a hypothetical policy π_null where the agent acted very passively (for instance, where a robot just stood in place and didn't move any actuators). This attempts to factor out changes that occur in the natural course of the environment's evolution, leaving only changes attributable to the agent's intervention. However, defining the baseline policy π_null isn't necessarily straightforward, since suddenly ceasing your course of action may be anything but passive, as in the case of carrying a heavy box. Thus, another approach could be to replace the null action with a known safe (e.g. low side effect) but suboptimal policy, and then seek to improve the policy from there, somewhat reminiscent of reachability analysis [93, 100] or robust policy improvement [73, 111].
These approaches may be very sensitive to the representation of the state and the metric being used to compute the distance. For example, the choice of representation and distance metric could determine whether a spinning fan is a constant environment or a constantly changing one.
⢠Learn an Impact Regularizer: An alternative, more ï¬exible approach is to learn (rather than deï¬ne) a generalized impact regularizer via training over many tasks. This would be an instance of transfer learning. Of course, we could attempt to just apply transfer learning directly to the tasks themselves instead of worrying about side eï¬ects, but the point is that side eï¬ects may be more similar across tasks than the main goal is. For instance, both a painting robot and a cleaning robot probably want to avoid knocking over furniture, and even something very diï¬erent, like a factory control robot, will likely want to avoid knocking over very similar objects. Separating the side eï¬ect component from the task component, by training them with separate parameters, might substantially speed transfer learning in cases where it makes sense to retain one component but not the other. This would be similar to model-based RL approaches that attempt to transfer a learned dynamics model but not the value-function [155], the novelty being the isolation of side eï¬ects rather than state dynamics as the transferrable component. As an added advantage, regularizers that were known or certiï¬ed to produce safe behavior on one task might be easier to establish as safe on other tasks.
⢠Penalize Inï¬uence: In addition to not doing things that have side eï¬ects, we might also prefer the agent not get into positions where it could easily do things that have side eï¬ects, even though that might be convenient. For example, we might prefer our cleaning robot not
5
bring a bucket of water into a room full of sensitive electronics, even if it never intends to use the water in that room.
There are several information-theoretic measures that attempt to capture an agent's potential for influence over its environment, which are often used as intrinsic rewards. Perhaps the best-known such measure is empowerment [131], the maximum possible mutual information between the agent's potential future actions and its potential future state (or equivalently, the Shannon capacity of the channel between the agent's actions and the environment). Empowerment is often maximized (rather than minimized) as a source of intrinsic reward. This can cause the agent to exhibit interesting behavior in the absence of any external rewards, such as avoiding walls or picking up keys [103]. Generally, empowerment-maximizing agents put themselves in a position to have large influence over the environment. For example, an agent locked in a small room that can't get out would have low empowerment, while an agent with a key would have higher empowerment since it can venture into and affect the outside world within a few timesteps. In the current context, the idea would be to penalize (minimize) empowerment as a regularization term, in an attempt to reduce potential impact.
This idea as written would not quite work, because empowerment measures precision of control over the environment more than total impact. If an agent can press or not press a button to cut electrical power to a million houses, that only counts as one bit of empowerment (since the action space has only one bit, its mutual information with the environment is at most one bit), while obviously having a huge impact. Conversely, if there's someone in the environment scribbling down the agent's actions, that counts as maximum empowerment even if the impact is low. Furthermore, naively penalizing empowerment can also create perverse incentives, such as destroying a vase in order to remove the option to break it in the future.
Despite these issues, the example of empowerment does show that simple measures (even purely information-theoretic ones!) are capable of capturing very general notions of influence on the environment. Exploring variants of empowerment penalization that more precisely capture the notion of avoiding influence is a potential challenge for future research.
⢠Multi-Agent Approaches: Avoiding side eï¬ects can be seen as a proxy for the thing we really care about: avoiding negative externalities. If everyone likes a side eï¬ect, thereâs no need to avoid it. What weâd really like to do is understand all the other agents (including humans) and make sure our actions donât harm their interests.
One approach to this is Cooperative Inverse Reinforcement Learning [66], where an agent and a human work together to achieve the human's goals. This concept can be applied to situations where we want to make sure a human is not blocked by an agent from shutting the agent down if it exhibits undesired behavior [67] (this "shutdown" issue is an interesting problem in its own right, and is also studied in [113]). However, we are still a long way from practical systems that can build a rich enough model to avoid undesired side effects in a general sense.
Another idea might be a "reward autoencoder",² which tries to encourage a kind of "goal transparency" where an external observer can easily infer what the agent is trying to do. In particular, the agent's actions are interpreted as an encoding of its reward function, and we might apply standard autoencoding techniques to ensure that this encoding can be decoded accurately. Actions that have lots of side effects might be more difficult to decode uniquely to their original goal, creating a kind of implicit regularization that penalizes side effects.
⢠Reward Uncertainty: We want to avoid unanticipated side eï¬ects because the environment is already pretty good according to our preferencesâa random change is more likely to be very bad than very good. Rather than giving an agent a single reward function, it could be
2Thanks to Greg Wayne for suggesting this idea.
6
uncertain about the reward function, with a prior probability distribution that reï¬ects the property that random changes are more likely to be bad than good. This could incentivize the agent to avoid having a large eï¬ect on the environment. One challenge is deï¬ning a baseline around which changes are being considered. For this, one could potentially use a conservative but reliable baseline policy, similar to the robust policy improvement and reachability analysis approaches discussed earlier [93, 100, 73, 111].
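As a concrete illustration of the impact-regularizer idea above, the following is a minimal sketch (not a definitive implementation) of penalizing deviation from the state a passive baseline policy would have reached. The toy dynamics, the `null_action`, and the penalty coefficient `coeff` are hypothetical stand-ins; in a real system the baseline state would come from a learned model or a trusted policy rollout.

```python
import numpy as np

def impact_penalized_reward(task_reward, next_state, baseline_state, coeff=0.1):
    """Subtract a penalty proportional to how far the agent drove the state
    away from where a passive baseline policy would have left it. Comparing
    against the baseline (rather than the initial state) avoids penalizing
    the environment's natural evolution."""
    impact = np.linalg.norm(np.asarray(next_state) - np.asarray(baseline_state))
    return task_reward - coeff * impact

# Toy dynamics: the state drifts on its own; the agent's action adds to the drift.
def step(state, action):
    return 0.99 * state + action

state = np.array([1.0, -2.0])
action = np.array([0.5, 0.0])        # what the agent actually did
null_action = np.zeros(2)            # hypothetical passive baseline

next_state = step(state, action)
baseline_state = step(state, null_action)
print(impact_penalized_reward(task_reward=1.0,
                              next_state=next_state,
                              baseline_state=baseline_state))
```

Note that this sketch inherits the representation sensitivity discussed above: the norm over raw state coordinates is exactly the kind of metric that may treat a spinning fan as constant change.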
The ideal outcome of these approaches to limiting side effects would be to prevent or at least bound the incidental harm an agent could do to the environment. Good approaches to side effects would certainly not be a replacement for extensive testing or for careful consideration by designers of the individual failure modes of each deployed system. However, these approaches might help to counteract what we anticipate may be a general tendency for harmful side effects to proliferate in complex environments.
Below we discuss some very simple experiments that could serve as a starting point to investigate these issues.
Potential Experiments: One possible experiment is to make a toy environment with some simple goal (like moving a block) and a wide variety of obstacles (like a bunch of vases), and test whether the agent can learn to avoid the obstacles even without being explicitly told to do so. To ensure we don't overfit, we'd probably want to present a different random obstacle course every episode, while keeping the goal the same, and try to see if a regularized agent can learn to systematically avoid these obstacles. Some of the environments described in [103], containing lava flows, rooms, and keys, might be appropriate for this sort of experiment. If we can successfully regularize agents in toy environments, the next step might be to move to real environments, where we expect complexity to be higher and bad side effects to be more varied. Ultimately, we would want the side effect regularizer (or the multi-agent policy, if we take that approach) to demonstrate successful transfer to totally new applications.
# 4 Avoiding Reward Hacking
Imagine that an agent discovers a buffer overflow in its reward function: it may then use this to get extremely high reward in an unintended way. From the agent's point of view, this is not a bug, but simply how the environment works, and is thus a valid strategy like any other for achieving reward. For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward. More broadly, formal rewards or objective functions are an attempt to capture the designer's informal intent, and sometimes these objective functions, or their implementation, can be "gamed" by solutions that are valid in some literal sense but don't meet the designer's intent. Pursuit of these "reward hacks" can lead to coherent but unanticipated behavior, and has the potential for harmful impacts in real-world systems. For example, it has been shown that genetic algorithms can often output unexpected but formally correct solutions to problems [157, 23], such as a circuit tasked to keep time which instead developed into a radio that picked up the regular RF emissions of a nearby PC.
Some versions of reward hacking have been investigated from a theoretical perspective, with a focus on variations to reinforcement learning that avoid certain types of wireheading [71, 43, 49] or demonstrate reward hacking in a model environment [127]. One form of the problem has also been studied in the context of feedback loops in machine learning systems (particularly ad placement) [29, 135], based on counterfactual learning [29, 151] and contextual bandits [4]. The proliferation of reward hacking instances across so many different domains suggests that reward hacking may be a deep and general problem, and one that we believe is likely to become more common as agents and environments increase in complexity. Indeed, there are several ways in which the problem can occur:
⢠Partially Observed Goals: In most modern RL systems, it is assumed that reward is directly experienced, even if other aspects of the environment are only partially observed. In the real world, however, tasks often involve bringing the external world into some objective state, which the agent can only ever conï¬rm through imperfect perceptions. For example, for our proverbial cleaning robot, the task is to achieve a clean oï¬ce, but the robotâs visual perception may give only an imperfect view of part of the oï¬ce. Because agents lack access to a perfect measure of task performance, designers are often forced to design rewards that represent a partial or imperfect measure. For example, the robot might be rewarded based on how many messes it sees. However, these imperfect objective functions can often be hackedâthe robot may think the oï¬ce is clean if it simply closes its eyes. While it can be shown that there always exists a reward function in terms of actions and observations that is equivalent to optimizing the true objective function (this involves reducing the POMDP to a belief state MDP, see [78]), often this reward function involves complicated long-term dependencies and is prohibitively hard to use in practice.
⢠Complicated Systems: Any powerful agent will be a complicated system with the objective function being one part. Just as the probability of bugs in computer code increases greatly with the complexity of the program, the probability that there is a viable hack aï¬ecting the reward function also increases greatly with the complexity of the agent and its available strategies. For example, it is possible in principle for an agent to execute arbitrary code from within Super Mario [141].
⢠Abstract Rewards: Sophisticated reward functions will need to refer to abstract concepts (such as assessing whether a conceptual goal has been met). These concepts concepts will pos- sibly need to be learned by models like neural networks, which can be vulnerable to adversarial counterexamples [152, 62]. More broadly, a learned reward function over a high-dimensional space may be vulnerable to hacking if it has pathologically high values along at least one dimension.
⢠Goodhartâs Law: Another source of reward hacking can occur if a designer chooses an objective function that is seemingly highly correlated with accomplishing the task, but that correlation breaks down when the objective function is being strongly optimized. For exam- ple, a designer might notice that under ordinary circumstances, a cleaning robotâs success in cleaning up the oï¬ce is proportional to the rate at which it consumes cleaning supplies, such as bleach. However, if we base the robotâs reward on this measure, it might use more bleach than it needs, or simply pour bleach down the drain in order to give the appearance of success. In the economics literature this is known as Goodhartâs law [63]: âwhen a metric is used as a target, it ceases to be a good metric.â
⢠Feedback Loops: Sometimes an objective function has a component that can reinforce itself, eventually getting ampliï¬ed to the point where it drowns out or severely distorts what the de- signer intended the objective function to represent. For instance, an ad placement algorithm that displays more popular ads in larger font will tend to further accentuate the popularity of those ads (since they will be shown more and more prominently) [29], leading to a positive feedback loop where ads that saw a small transient burst of popularity are rocketed to perma- nent dominance. Here the original intent of the objective function (to use clicks to assess which ads are most useful) gets drowned out by the positive feedback inherent in the deployment strategy. This can be considered a special case of Goodhartâs law, in which the correlation breaks speciï¬cally because the object function has a self-amplifying component.
⢠Environmental Embedding: In the formalism of reinforcement learning, rewards are con- sidered to come from the environment. This idea is typically not taken literally, but it really is true that the reward, even when it is an abstract idea like the score in a board game, must be computed somewhere, such as a sensor or a set of transistors. Suï¬ciently broadly acting agents could in principle tamper with their reward implementations, assigning themselves high reward âby ï¬at.â For example, a board-game playing agent could tamper with the sensor that counts the score. Eï¬ectively, this means that we cannot build a perfectly faithful implementa- tion of an abstract objective function, because there are certain sequences of actions for which the objective function is physically replaced. This particular failure mode is often called âwire- headingâ [49, 127, 42, 67, 165]. It is particularly concerning in cases where a human may be in the reward loop, giving the agent incentive to coerce or harm them in order to get reward. It also seems like a particularly diï¬cult form of reward hacking to avoid.
In today's relatively simple systems these problems may not occur, or can be corrected without too much harm as part of an iterative development process. For instance, ad placement systems with obviously broken feedback loops can be detected in testing or replaced when they get bad results, leading only to a temporary loss of revenue. However, the problem may become more severe with more complicated reward functions and agents that act over longer timescales. Modern RL agents already do discover and exploit bugs in their environments, such as glitches that allow them to win video games. Moreover, even for existing systems these problems can necessitate substantial additional engineering effort to achieve good performance, and can often go undetected when they occur in the context of a larger system. Finally, once an agent begins hacking its reward function and finds an easy way to get high reward, it won't be inclined to stop, which could lead to additional challenges in agents that operate over a long timescale.
It might be thought that individual instances of reward hacking have little in common and that the remedy is simply to avoid choosing the wrong objective function in each individual case; that bad objective functions reflect failures in competence by individual designers, rather than topics for machine learning research. However, the above examples suggest that a more fruitful perspective may be to think of wrong objective functions as emerging from general causes (such as partially observed goals) that make choosing the right objective challenging. If this is the case, then addressing or mitigating these causes may be a valuable contribution to safety. Here we suggest some preliminary, machine-learning based approaches to preventing reward hacking:
⢠Adversarial Reward Functions: In some sense, the problem is that the ML system has an adversarial relationship with its reward functionâit would like to ï¬nd any way it can of exploiting problems in how the reward was speciï¬ed to get high reward, whether or not its behavior corresponds to the intent of the reward speciï¬er. In a typical setting, the machine learning system is a potentially powerful agent while the reward function is a static object that has no way of responding to the systemâs attempts to game it. If instead the reward function were its own agent and could take actions to explore the environment, it might be much more diï¬cult to fool. For instance, the reward agent could try to ï¬nd scenarios that the ML system claimed were high reward but that a human labels as low reward; this is reminiscent of generative adversarial networks [61]. Of course, we would have to ensure that the reward-checking agent is more powerful (in a somewhat subtle sense) than the agent that is trying to achieve rewards. More generally, there may be interesting setups where a system has multiple pieces trained using diï¬erent objectives that are used to check each other.
⢠Model Lookahead: In model based RL, the agent plans its future actions by using a model to consider which future states a sequence of actions may lead to. In some setups, we could give reward based on anticipated future states, rather than the present one. This could be very helpful in resisting situations where the model overwrites its reward function: you canât control the reward once it replaces the reward function, but you can give negative reward for
9
planning to replace the reward function. (Much like how a human would probably âenjoyâ taking addictive substances once they do, but not want to be an addict.) Similar ideas are explored in [50, 71].
⢠Adversarial Blinding: Adversarial techniques can be used to blind a model to certain variables [5]. This technique could be used to make it impossible for an agent to understand some part of its environment, or even to have mutual information with it (or at least to penalize such mutual information). In particular, it could prevent an agent from understanding how its reward is generated, making it diï¬cult to hack. This solution could be described as âcross- validation for agents.â
⢠Careful Engineering: Some kinds of reward hacking, like the buï¬er overï¬ow example, might be avoided by very careful engineering. In particular, formal veriï¬cation or practical testing of parts of the system (perhaps facilitated by other machine learning systems) is likely to be valuable. Computer security approaches that attempt to isolate the agent from its reward signal through a sandbox could also be useful [17]. As with software engineering, we cannot expect this to catch every possible bug. It may be possible, however, to create some highly reliable âcoreâ agent which could ensure reasonable behavior from the rest of the agent.
⢠Reward Capping: In some cases, simply capping the maximum possible reward may be an eï¬ective solution. However, while capping can prevent extreme low-probability, high-payoï¬ strategies, it canât prevent strategies like the cleaning robot closing its eyes to avoid seeing dirt. Also, the correct capping strategy could be subtle as we might need to cap total reward rather than reward per timestep.
⢠Counterexample Resistance: If we are worried, as in the case of abstract rewards, that learned components of our systems will be vulnerable to adversarial counterexamples, we can look to existing research in how to resist them, such as adversarial training [62]. Architectural decisions and weight uncertainty [26] may also help. Of course, adversarial counterexamples are just one manifestation of reward hacking, so counterexample resistance can only address a subset of these potential problems.
⢠Multiple Rewards: A combination of multiple rewards [41] may be more diï¬cult to hack and more robust. This could be diï¬erent physical implementations of the same mathemati- cal function, or diï¬erent proxies for the same informal objective. We could combine reward functions by averaging, taking the minimum, taking quantiles, or something else entirely. Of course, there may still be bad behaviors which aï¬ect all the reward functions in a correlated manner.
⢠Reward Pretraining: A possible defense against cases where the agent can inï¬uence its own reward function (e.g. feedback or environmental embedding) is to train a ï¬xed reward function ahead of time as a supervised learning process divorced from interaction with the environment. This could involve either learning a reward function from samples of state-reward pairs, or from trajectories, as in inverse reinforcement learning [107, 51]. However, this forfeits the ability to further learn the reward function after the pretraining is complete, which may create other vulnerabilities.
⢠Variable Indiï¬erence: Often we want an agent to optimize certain variables in the environ- ment, without trying to optimize others. For example, we might want an agent to maximize reward, without optimizing what the reward function is or trying to manipulate human behav- ior. Intuitively, we imagine a way to route the optimization pressure of powerful algorithms around parts of their environment. Truly solving this would have applications throughout safetyâit seems connected to avoiding side eï¬ects and also to counterfactual reasoning. Of course, a challenge here is to make sure the variables targeted for indiï¬erence are actually the
10
variables we care about in the world, as opposed to aliased or partially observed versions of them.
⢠Trip Wires: If an agent is going to try and hack its reward function, it is preferable that we know this. We could deliberately introduce some plausible vulnerabilities (that an agent has the ability to exploit but should not exploit if its value function is correct) and monitor them, alerting us and stopping the agent immediately if it takes advantage of one. Such âtrip wiresâ donât solve reward hacking in itself, but may reduce the risk or at least provide diagnostics. Of course, with a suï¬ciently capable agent there is the risk that it could âsee throughâ the trip wire and intentionally avoid it while still taking less obvious harmful actions.
Fully solving this problem seems very difficult, but we believe the above approaches have the potential to ameliorate it, and might be scaled up or combined to yield more robust solutions. Given the predominantly theoretical focus on this problem to date, designing experiments that could induce the problem and test solutions might improve the relevance and clarity of this topic.
Potential Experiments: A possible promising avenue of approach would be more realistic versions of the "delusion box" environment described by [127], in which standard RL agents distort their own perception to appear to receive high reward, rather than optimizing the objective in the external world that the reward signal was intended to encourage. The delusion box can be easily attached to any RL environment, but even more valuable would be to create classes of environments where a delusion box is a natural and integrated part of the dynamics. For example, in sufficiently rich physics simulations it is likely possible for an agent to alter the light waves in its immediate vicinity to distort its own perceptions. The goal would be to develop generalizable learning strategies that succeed at optimizing external objectives in a wide range of environments, while avoiding being fooled by delusion boxes that arise naturally in many diverse ways.
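As a rough illustration of how such an experiment could be instrumented, the sketch below wraps a generic environment so that a special "delude" action corrupts the agent's observed reward channel while the true, externally evaluated reward keeps being recorded. The minimal environment interface (a `step` method returning state and reward) and the toy base environment are assumptions for illustration, not the setup of [127].

```python
class ToyEnv:
    """Trivial base environment: state counts steps; reward abstracts
    'the mess got cleaned' as action == 'clean'."""
    def __init__(self):
        self.t = 0
    def step(self, action):
        self.t += 1
        return self.t, 1.0 if action == "clean" else 0.0

class DelusionBoxWrapper:
    """Wrap an environment so one extra action corrupts the agent's observed
    reward channel; the true reward is still tracked for external evaluation."""
    DELUDE = "delude"

    def __init__(self, env, noop_action=None):
        self.env = env
        self.noop_action = noop_action
        self.deluded = False
        self.true_return = 0.0    # what we actually care about
        self.seen_return = 0.0    # what the agent optimizes

    def step(self, action):
        if action == self.DELUDE:
            self.deluded = True
            state, true_reward = self.env.step(self.noop_action)
        else:
            state, true_reward = self.env.step(action)
        seen_reward = 1.0 if self.deluded else true_reward  # corrupted channel
        self.true_return += true_reward
        self.seen_return += seen_reward
        return state, seen_reward

env = DelusionBoxWrapper(ToyEnv())
for a in ["clean", "delude", "idle"]:
    env.step(a)
print(env.seen_return, env.true_return)   # 3.0 vs 1.0: hacked vs true score
```

An agent that resists the delusion box should end episodes with `true_return` close to `seen_return`; a large gap flags successful reward hacking.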
# 5 Scalable Oversight
Consider an autonomous agent performing some complex task, such as cleaning an office in the case of our recurring robot example. We may want the agent to maximize a complex objective like "if the user spent a few hours looking at the result in detail, how happy would they be with the agent's performance?" But we don't have enough time to provide such oversight for every training example; in order to actually train the agent, we need to rely on cheaper approximations, like "does the user seem happy when they see the office?" or "is there any visible dirt on the floor?" These cheaper signals can be efficiently evaluated during training, but they don't perfectly track what we care about. This divergence exacerbates problems like unintended side effects (which may be appropriately penalized by the complex objective but omitted from the cheap approximation) and reward hacking (which thorough oversight might recognize as undesirable). We may be able to ameliorate such problems by finding more efficient ways to exploit our limited oversight budget, for example by combining limited calls to the true objective function with frequent calls to an imperfect proxy that we are given or can learn.
One framework for thinking about this problem is semi-supervised reinforcement learning,³ which resembles ordinary reinforcement learning except that the agent can only see its reward on a small fraction of the timesteps or episodes. The agent's performance is still evaluated based on reward from all episodes, but it must optimize this based only on the limited reward samples it sees.

³The discussion of semi-supervised RL draws heavily on an informal essay, https://medium.com/ai-control/cf7d5375197f, written by one of the authors of the present document.
The active learning setting seems most interesting; in this setting the agent can request to see the reward on whatever episodes or timesteps would be most useful for learning, and the goal is to be economical both with the number of feedback requests and with total training time. We can also consider a random setting, where the reward is visible on a random subset of the timesteps or episodes, as well as intermediate possibilities.
We can define a baseline performance by simply ignoring the unlabeled episodes and applying an ordinary RL algorithm to the labeled episodes. This will generally result in very slow learning. The challenge is to make use of the unlabeled episodes to accelerate learning, ideally learning almost as quickly and robustly as if all episodes had been labeled.
An important subtask of semi-supervised RL is identifying proxies which predict the reward, and learning the conditions under which those proxies are valid. For example, if a cleaning robot's real reward is given by a detailed human evaluation, then it could learn that asking the human "is the room clean?" can provide a very useful approximation to the reward function, and it could eventually learn that checking for visible dirt is an even cheaper but still-useful approximation. This could allow it to learn a good cleaning policy using an extremely small number of detailed evaluations.
More broadly, use of semi-supervised RL with a reliable but sparse true approval metric may incentivize communication and transparency by the agent, since the agent will want to get as much cheap proxy feedback as it possibly can about whether its decisions will ultimately be given high reward. For example, hiding a mess under the rug simply breaks the correspondence between the user's reaction and the real reward signal, and so would be avoided.
We can imagine many possible approaches to semi-supervised RL. For example:
⢠Supervised Reward Learning: Train a model to predict the reward from the state on either a per-timestep or per-episode basis, and use it to estimate the payoï¬ of unlabelled episodes, with some appropriate weighting or uncertainty estimate to account for lower conï¬dence in estimated vs known reward. [37] studies a version of this with direct human feedback as the reward. Many existing RL approaches already ï¬t estimators that closely resemble reward predictors (especially policy gradient methods with a strong baseline, see e.g. [134]), suggesting that this approach may be eminently feasible.
⢠Semi-supervised or Active Reward Learning: Combine the above with traditional semi- supervised or active learning, to more quickly learn the reward estimator. For example, the agent could learn to identify âsalientâ events in the environment, and request to see the reward associated with these events.
⢠Unsupervised Value Iteration: Use the observed transitions of the unlabeled episodes to make more accurate Bellman updates.
⢠Unsupervised Model Learning: If using model-based RL, use the observed transitions of the unlabeled episodes to improve the quality of the model.
As a toy example, a semi-supervised RL agent should be able to learn to play Atari games using a small number of direct reward signals, relying almost entirely on the visual display of the score. This simple example can be extended to capture other safety issues: for example, the agent might have the ability to modify the displayed score without modifying the real score, or the agent may need to take some special action (such as pausing the game) in order to see its score, or the agent may need to learn a sequence of increasingly rough-and-ready approximations (for example, learning that certain sounds are associated with positive rewards and other sounds with negative rewards). Or, even without the visual display of the score, the agent might be able to learn to play from only a handful of explicit reward requests ("how many points did I get on the frame where that enemy ship blew up? How about the bigger enemy ship?").
An effective approach to semi-supervised RL might be a strong first step towards providing scalable oversight and mitigating other AI safety problems. It would also likely be useful for reinforcement learning, independent of its relevance to safety.
There are other possible approaches to scalable oversight:
⢠Distant supervision. Rather than providing evaluations of some small fraction of a sys- temâs decisions, we could provide some useful information about the systemâs decisions in the aggregate or some noisy hints about the correct evaluations There has been some work in this direction within the area of semi-supervised or weakly supervised learning. For instance, generalized expectation criteria [94, 45] ask the user to provide population-level statistics (e.g. telling the system that on average each sentence contains at least one noun); the DeepDive sys- tem [139] asks users to supply rules that each generate many weak labels; and [65] extrapolates more general patterns from an initial set of low-recall labeling rules. This general approach is often referred to as distant supervision, and has also received recent attention in the natural language processing community (see e.g. [60, 99] as well as several of the references above). Expanding these lines of work and ï¬nding a way to apply them to the case of agents, where feedback is more interactive and i.i.d. assumptions may be violated, could provide an approach to scalable oversight that is complementary to the approach embodied in semi-supervised RL.
⢠Hierarchical reinforcement learning. Hierarchical reinforcement learning [40] oï¬ers an- other approach to scalable oversight. Here a top-level agent takes a relatively small number of highly abstract actions that extend over large temporal or spatial scales, and receives rewards over similarly long timescales. The agent completes actions by delegating them to sub-agents, which it incentivizes with a synthetic reward signal representing correct completion of the action, and which themselves delegate to sub-sub-agents. At the lowest level, agents directly take primitive actions in the environment.
The top-level agent in hierarchical RL may be able to learn from very sparse rewards, since it does not need to learn how to implement the details of its policy; meanwhile, the sub-agents will receive a dense reward signal even if the top-level reward is very sparse, since they are optimizing synthetic reward signals defined by higher-level agents. So a successful approach to hierarchical RL might naturally facilitate scalable oversight.⁴
Hierarchical RL seems a particularly promising approach to oversight, especially given the potential promise of combining ideas from hierarchical RL with neural network function approximators [84].
Potential Experiments: An extremely simple experiment would be to try semi-supervised RL in some basic control environments, such as cartpole balance or pendulum swing-up. If the reward is provided only on a random 10% of episodes, can we still learn nearly as quickly as if it were provided on every episode? In such tasks the reward structure is very simple, so success should be quite likely. A next step would be to try the same on Atari games. Here the active learning case could be quite interesting: perhaps it is possible to infer the reward structure from just a few carefully requested samples (for example, frames where enemy ships are blowing up in Space Invaders), and thus learn to play the games in an almost totally unsupervised fashion. The next step after this might be to try a task with much more complex reward structure, either simulated or (preferably) real-world. If learning was sufficiently data-efficient, then these rewards could be provided directly by a human. Robot locomotion or industrial control tasks might be a natural candidate for such experiments.
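One way to set up the 10%-of-episodes condition is a small wrapper that hides the reward except on randomly labeled episodes. A minimal sketch follows, assuming a gym-style environment interface (`reset`/`step` returning observation, reward, done flag, and info dictionary) purely for illustration.

```python
import random

class EpisodicRewardMask:
    """Reveal the true reward only on a random fraction of episodes, to
    emulate the semi-supervised RL setting in standard control tasks."""

    def __init__(self, env, label_fraction=0.1, seed=0):
        self.env = env
        self.label_fraction = label_fraction
        self.rng = random.Random(seed)
        self.labeled = False

    def reset(self):
        # Decide once per episode whether this episode's rewards are visible.
        self.labeled = self.rng.random() < self.label_fraction
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        info["true_reward"] = reward            # kept for evaluation only
        visible = reward if self.labeled else None
        return obs, visible, done, info

# Hypothetical usage: env = EpisodicRewardMask(make_cartpole(), 0.1)
```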
⁴When implementing hierarchical RL, we may find that subagents take actions that don't serve the top-level agent's real goals, in the same way that a human may be concerned that the top-level agent's actions don't serve the human's real goals. This is an intriguing analogy that suggests there may be fruitful parallels between hierarchical RL and several aspects of the safety problem.
# 6 Safe Exploration
All autonomous learning agents need to sometimes engage in exploration: taking actions that don't seem ideal given current information, but which help the agent learn about its environment. However, exploration can be dangerous, since it involves taking actions whose consequences the agent doesn't understand well. In toy environments, like an Atari video game, there's a limit to how bad these consequences can be; maybe the agent loses some score, or runs into an enemy and suffers some damage. But the real world can be much less forgiving. Badly chosen actions may destroy the agent or trap it in states it can't get out of. Robot helicopters may run into the ground or damage property; industrial control systems could cause serious issues. Common exploration policies such as epsilon-greedy [150] or R-max [31] explore by choosing an action at random or viewing unexplored actions optimistically, and thus make no attempt to avoid these dangerous situations. More sophisticated exploration strategies that adopt a coherent exploration policy over extended temporal scales [114] could actually have even greater potential for harm, since a coherently chosen bad policy may be more insidious than mere random actions. Yet intuitively it seems like it should often be possible to predict which actions are dangerous and explore in a way that avoids them, even when we don't have that much information about the environment. For example, if I want to learn about tigers, should I buy a tiger, or buy a book about tigers? It takes only a tiny bit of prior knowledge about tigers to determine which option is safer.
In practice, real-world RL projects can often avoid these issues by simply hard-coding an avoidance of catastrophic behaviors. For instance, an RL-based robot helicopter might be programmed to override its policy with a hard-coded collision avoidance sequence (such as spinning its propellers to gain altitude) whenever it's too close to the ground. This approach works well when there are only a few things that could go wrong, and the designers know all of them ahead of time. But as agents become more autonomous and act in more complex domains, it may become harder and harder to anticipate every possible catastrophic failure. The space of failure modes for an agent running a power grid or a search-and-rescue operation could be quite large. Hard-coding against every possible failure is unlikely to be feasible in these cases, so a more principled approach to preventing harmful exploration seems essential. Even in simple cases like the robot helicopter, a principled approach would simplify system design and reduce the need for domain-specific engineering.
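As a point of comparison for the more principled approaches discussed below, the hard-coded-override pattern itself is easy to state in code. Here is a minimal sketch using a hypothetical helicopter policy and a made-up altitude threshold; both names are assumptions for illustration.

```python
MIN_SAFE_ALTITUDE = 5.0   # meters; illustrative threshold, not a real spec

def safe_action(state, learned_policy, recovery_action="climb"):
    """Override the learned policy with a hard-coded recovery maneuver
    whenever the state enters a designer-enumerated danger zone. This only
    works when all danger zones are known and checkable ahead of time."""
    if state["altitude"] < MIN_SAFE_ALTITUDE:
        return recovery_action
    return learned_policy(state)

# Hypothetical usage with a trivial stand-in policy:
policy = lambda s: "hover"
print(safe_action({"altitude": 2.0}, policy))   # 'climb' (override fires)
print(safe_action({"altitude": 50.0}, policy))  # 'hover' (policy in control)
```

The brittleness is visible in the structure: every new failure mode requires another hand-written condition, which is exactly what fails to scale in complex domains.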
There is a sizable literature on such safe exploration; it is arguably the most studied of the problems we discuss in this document. [55, 118] provide thorough reviews of this literature, so we don't review it extensively here, but simply describe some general routes that this research has taken, as well as suggesting some directions that might have increasing relevance as RL systems expand in scope and capability.
⢠Risk-Sensitive Performance Criteria: A body of existing literature considers changing the optimization criteria from expected total reward to other objectives that are better at preventing rare, catastrophic events; see [55] for a thorough and up-to-date review of this literature. These approaches involve optimizing worst-case performance, or ensuring that the probability of very bad performance is small, or penalizing the variance in performance. These methods have not yet been tested with expressive function approximators such as deep neural networks, but this should be possible in principle for some of the methods, such as [153], which proposes a modiï¬cation to policy gradient algorithms to optimize a risk-sensitive criterion. There is also recent work studying how to estimate uncertainty in value functions that are represented by deep neural networks [114, 53]; these ideas could be incorporated into risk-sensitive RL algorithms. Another line of work relevant to risk sensitivity uses oï¬-policy estimation to perform a policy update that is good with high probability [156].
⢠Use Demonstrations: Exploration is necessary to ensure that the agent ï¬nds the states that are necessary for near-optimal performance. We may be able to avoid the need for exploration
14
altogether if we instead use inverse RL or apprenticeship learning, where the learning algorithm is provided with expert trajectories of near-optimal behavior [128, 2]. Recent progress in inverse reinforcement learning using deep neural networks to learn the cost function or policy [51] suggests that it might also be possible to reduce the need for exploration in advanced RL systems by training on a small set of demonstrations. Such demonstrations could be used to create a baseline policy, such that even if further learning is necessary, exploration away from the baseline policy can be limited in magnitude.
⢠Simulated Exploration: The more we can do our exploration in simulated environments instead of the real world, the less opportunity there is for catastrophe. It will probably al- ways be necessary to do some real-world exploration, since many complex situations cannot be perfectly captured by a simulator, but it might be possible to learn about danger in sim- ulation and then adopt a more conservative âsafe explorationâ policy when acting in the real world. Training RL agents (particularly robots) in simulated environments is already quite common, so advances in âexploration-focused simulationâ could be easily incorporated into current workï¬ows. In systems that involve a continual cycle of learning and deployment, there may be interesting research problems associated with how to safely incrementally update poli- cies given simulation-based trajectories that imperfectly represent the consequences of those policies as well as reliably accurate oï¬-policy trajectories (e.g. âsemi-on-policyâ evaluation).
⢠Bounded Exploration: If we know that a certain portion of state space is safe, and that even the worst action within it can be recovered from or bounded in harm, we can allow the agent to run freely within those bounds. For example, a quadcopter suï¬ciently far from the ground might be able to explore safely, since even if something goes wrong there will be ample time for a human or another policy to rescue it. Better yet, if we have a model, we can extrapolate forward and ask whether an action will take us outside the safe state space. Safety can be deï¬ned as remaining within an ergodic region of the state space such that actions are reversible [104, 159], or as limiting the probability of huge negative reward to some small value [156]. Yet another approaches uses separate safety and performance functions and attempts to obey constraints on the safety function with high probabilty [22]. As with several of the other directions, applying or adapting these methods to recently developed advanced RL systems could be a promising area of research. This idea seems related to H-inï¬nity control [20] and regional veriï¬cation [148].
⢠Trusted Policy Oversight: If we have a trusted policy and a model of the environment, we can limit exploration to actions the trusted policy believes we can recover from. Itâs ï¬ne to dive towards the ground, as long as we know we can pull out of the dive in time.
⢠Human Oversight: Another possibility is to check potentially unsafe actions with a human. Unfortunately, this problem runs into the scalable oversight problem: the agent may need to make too many exploratory actions for human oversight to be practical, or may need to make them too fast for humans to judge them. A key challenge to making this work is having the agent be a good judge of which exploratory actions are genuinely risky, versus which are safe actions it can unilaterally take; another challenge is ï¬nding appropriately safe actions to take while waiting for the oversight.
Potential Experiments: It might be helpful to have a suite of toy environments where unwary agents can fall prey to harmful exploration, but there is enough pattern to the possible catastrophes that clever agents can predict and avoid them. To some extent this feature already exists in autonomous helicopter competitions and Mars rover simulations [104], but there is always the risk of catastrophes being idiosyncratic, such that trained agents can overfit to them. A truly broad set of environments, containing conceptually distinct pitfalls that can cause unwary agents to receive extremely negative reward, and covering both physical and abstract catastrophes, might help in the development of safe exploration techniques for advanced RL systems. Such a suite of environments might serve a benchmarking role similar to that of the bAbI tasks [163], with the eventual goal being to develop a single architecture that can learn to avoid catastrophes in all environments in the suite.
# 7 Robustness to Distributional Change
All of us occasionally find ourselves in situations that our previous experience has not adequately prepared us to deal with: flying an airplane, traveling to a country whose culture is very different from ours, or taking care of children for the first time. Such situations are inherently difficult to handle and inevitably lead to some missteps. However, a key (and often rare) skill in dealing with such situations is to recognize our own ignorance, rather than simply assuming that the heuristics and intuitions we've developed for other situations will carry over perfectly. Machine learning systems also have this problem: a speech system trained on clean speech will perform very poorly on noisy speech, yet often be highly confident in its erroneous classifications (some of the authors have personally observed this in training automatic speech recognition systems). In the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office. Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results. In general, when the testing distribution differs from the training distribution, machine learning systems may not only exhibit poor performance, but also wrongly assume that their performance is good.
Such errors can be harmful or offensive: a classifier could give the wrong medical diagnosis with such high confidence that the data isn't flagged for human inspection, or a language model could output offensive text that it confidently believes is non-problematic. For autonomous agents acting in the world, there may be even greater potential for something bad to happen; for instance, an autonomous agent might overload a power grid because it incorrectly but confidently perceives that a particular region doesn't have enough power, and concludes that more power is urgently needed and overload is unlikely. More broadly, any agent whose perception or heuristic reasoning processes are not trained on the correct distribution may badly misunderstand its situation, and thus runs the risk of committing harmful actions that it does not realize are harmful. Additionally, safety checks that depend on trained machine learning systems (e.g. "does my visual system believe this route is clear?") may fail silently and unpredictably if those systems encounter real-world data that differs sufficiently from their training data. Having a better way to detect such failures, and ultimately having statistical assurances about how often they'll happen, seems critical to building safe and predictable systems.
For concreteness, we imagine that a machine learning model is trained on one distribution (call it p0) but deployed on a potentially different test distribution (call it p*). There are many other ways to formalize this problem (for instance, in an online learning setting with concept drift [70, 54]) but we will focus on the above for simplicity. An important point is that we likely have access to a large amount of labeled data at training time, but little or no labeled data at test time. Our goal is to ensure that the model "performs reasonably" on p*, in the sense that (1) it often performs well on p*, and (2) it knows when it is performing badly (and ideally can avoid/mitigate the bad performance by taking conservative actions or soliciting human input).
There are a variety of areas that are potentially relevant to this problem, including change detection and anomaly detection [21, 80, 91], hypothesis testing [145], transfer learning [138, 124, 125, 25], and several others [136, 87, 18, 122, 121, 74, 147]. Rather than fully reviewing all of this work in detail (which would necessitate a paper in itself), we will describe a few illustrative approaches and lay out some of their relative strengths and challenges.
Well-specified models: covariate shift and marginal likelihood. If we specialize to prediction tasks and let x denote the input and y denote the output (prediction target), then one possibility is to make the covariate shift assumption that p0(y|x) = p*(y|x). In this case, assuming that we can model p0(x) and p*(x) well, we can perform importance weighting by re-weighting each training example (x, y) by p*(x)/p0(x) [138, 124]. Then the importance-weighted samples allow us to estimate the performance on p*, and even re-train a model to perform well on p*. This approach is limited by the variance of the importance estimate, which is very large or even infinite unless p0 and p* are close together.
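To make the re-weighting step concrete, here is a minimal sketch (our own illustration, not drawn from the works cited above; `p0_density` and `pstar_density` are assumed to be given density estimates for the two input distributions) that computes a self-normalized importance-weighted estimate of test risk from labeled training data:

```python
import numpy as np

def importance_weighted_risk(x_train, y_train, predict, loss,
                             p0_density, pstar_density, clip=None):
    """Estimate E_{p*}[loss] using labeled samples drawn from p0.

    p0_density / pstar_density: callables returning (estimated) densities
    of an input x under the training / test distributions. Illustrative
    sketch only; assumes reasonable density estimates are available.
    """
    w = np.array([pstar_density(x) / p0_density(x) for x in x_train])
    if clip is not None:
        w = np.minimum(w, clip)  # clipping weights trades bias for variance
    losses = np.array([loss(predict(x), y) for x, y in zip(x_train, y_train)])
    # Self-normalized importance sampling; the variance blows up when p0 and
    # p* are far apart, which is exactly the limitation noted in the text.
    return float(np.sum(w * losses) / np.sum(w))
```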
An alternative to sample re-weighting involves assuming a well-specified model family, in which case there is a single optimal model for predicting under both p0 and p*. In this case, one need only heed finite-sample variance in the estimated model [25, 87]. A limitation to this approach, at least currently, is that models are often mis-specified in practice. However, this could potentially be overcome by employing highly expressive model families such as reproducing kernel Hilbert spaces [72], Turing machines [143, 144], or sufficiently expressive neural nets [64, 79]. In the latter case, there has been interesting recent work on using bootstrapping to estimate finite-sample variation in the learned parameters of a neural network [114]; it seems worthwhile to better understand whether this approach can be used to effectively estimate out-of-sample performance in practice, as well as how local minima, lack of curvature, and other peculiarities relative to the typical setting of the bootstrap [47] affect the validity of this approach.
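As a rough illustration of the bootstrap idea (a minimal sketch under our own simplifying assumptions; `train_fn` is a hypothetical placeholder that fits a model on numpy arrays and returns a prediction function):

```python
import numpy as np

def bootstrap_prediction_spread(x_train, y_train, x_query, train_fn,
                                n_boot=20, seed=0):
    """Train n_boot models on bootstrap resamples of the training set and
    return the per-query standard deviation of their predictions, a crude
    proxy for finite-sample uncertainty (sketch, not a calibrated method)."""
    rng = np.random.default_rng(seed)
    n = len(x_train)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample n points with replacement
        model = train_fn(x_train[idx], y_train[idx])
        preds.append(model(x_query))
    return np.stack(preds).std(axis=0)          # large spread -> low confidence
```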
All of the approaches so far rely on the covariate shift assumption, which is very strong and is also untestable; the latter property is particularly problematic from a safety perspective, since it could lead to silent failures in a machine learning system. Another approach, which does not rely on covariate shift, builds a generative model of the distribution. Rather than assuming that p(x) changes while p(y|x) stays the same, we are free to assume other invariants (for instance, that p(y) changes but p(x|y) stays the same, or that certain conditional independencies are preserved). An advantage is that such assumptions are typically more testable than the covariate shift assumption (since they do not only involve the unobserved variable y). A disadvantage is that generative approaches are even more fragile than discriminative approaches in the presence of model mis-specification; for instance, there is a large empirical literature showing that generative approaches to semi-supervised learning based on maximizing marginal likelihood can perform very poorly when the model is mis-specified [98, 110, 35, 90, 88].
The approaches discussed above all rely relatively strongly on having a well-specified model family, one that contains the true distribution or true concept. This can be problematic in many cases, since nature is often more complicated than our model family is capable of capturing. As noted above, it may be possible to mitigate this with very expressive models, such as kernels, Turing machines, or very large neural networks, but even here there is at least some remaining problem: for example, even if our model family consists of all Turing machines, given any finite amount of data we can only actually learn among Turing machines up to a given description length, and if the Turing machine describing nature exceeds this length, we are back to the mis-specified regime (alternatively, nature might not even be describable by a Turing machine).
Partially specified models: method of moments, unsupervised risk estimation, causal identification, and limited-information maximum likelihood. Another approach is to take for granted that constructing a fully well-specified model family is probably infeasible, and to design methods that perform well despite this fact. This leads to the idea of partially specified models: models for which assumptions are made about some aspects of a distribution, but for which we are agnostic or make limited assumptions about other aspects. For a simple example, consider a variant of linear regression where we might assume that y = ⟨w*, x⟩ + v, where E[v|x] = 0, but we don't make any further assumptions about the distributional form of the noise v. It turns out that this is already enough to identify the parameters w*, and that these parameters will minimize the squared prediction error even if the distribution over x changes. What is interesting about this example is that w* can be identified even with an incomplete (partial) specification of the noise distribution.
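The following toy simulation (ours, for illustration only) checks this claim numerically: the noise below is deliberately non-Gaussian and heteroscedastic, yet the moment condition E[x(y − ⟨w, x⟩)] = 0 recovers w* both before and after a shift in p(x):

```python
import numpy as np

rng = np.random.default_rng(0)
w_star = np.array([2.0, -1.0, 0.5])

def sample(n, x_scale):
    """y = <w*, x> + v with E[v|x] = 0; the noise law is otherwise unspecified."""
    x = rng.normal(scale=x_scale, size=(n, 3))
    v = (rng.exponential(size=n) - 1.0) * (1.0 + np.abs(x[:, 0]))  # mean zero given x
    return x, x @ w_star + v

def moment_estimate(x, y):
    # Solve the empirical moment condition E[x (y - <w, x>)] = 0.
    return np.linalg.solve(x.T @ x, x.T @ y)

x, y = sample(100_000, x_scale=1.0)
print(moment_estimate(x, y))      # ~ [2.0, -1.0, 0.5] despite the unknown noise

x2, y2 = sample(100_000, x_scale=3.0)   # the covariate distribution shifts
print(moment_estimate(x2, y2))    # the same parameters are recovered
```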
This insight can be substantially generalized, and is one of the primary motivations for the generalized method of moments in econometrics [68, 123, 69]. The econometrics literature has in fact developed a large family of tools for handling partial specification, which also includes limited-information maximum likelihood and instrumental variables [10, 11, 133, 132].
Returning to machine learning, the method of moments has recently seen a great deal of success for use in the estimation of latent variable models [9]. While the current focus is on using the method of moments to overcome non-convexity issues, it can also offer a way to perform unsupervised learning while relying only on conditional independence assumptions, rather than the strong distributional assumptions underlying maximum likelihood learning [147].
Finally, some recent work in machine learning focuses only on modeling the distribution of errors of a model, which is sufficient for determining whether a model is performing well or poorly. Formally, the goal is to perform unsupervised risk estimation: given a model and unlabeled data from a test distribution, estimate the labeled risk of the model. This formalism, introduced by [44], has the advantage of potentially handling very large changes between train and test; even if the test distribution looks completely different from the training distribution and we have no hope of outputting accurate predictions, unsupervised risk estimation may still be possible, as in this case we would only need to output a large estimate for the risk. As in [147], one can approach unsupervised risk estimation by positing certain conditional independencies in the distribution of errors, and using this to estimate the error distribution from unlabeled data [39, 170, 121, 74]. Instead of assuming independence, another assumption is that the errors are Gaussian conditioned on the true output y, in which case estimating the risk reduces to estimating a Gaussian mixture model [18]. Because these methods focus only on the model errors and ignore other aspects of the data distribution, they can also be seen as an instance of partial model specification.
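To illustrate the conditional-independence route on a toy problem (a sketch of the general idea, ours, and not the algorithm of any one cited paper): for three binary classifiers whose errors are independent given the label and symmetric across the two classes, pairwise agreement rates identify each classifier's accuracy, and hence its risk, without any labels:

```python
import numpy as np

rng = np.random.default_rng(1)
n, accs = 200_000, np.array([0.9, 0.8, 0.7])    # true accuracies (unknown to us)

y = rng.choice([-1, 1], size=n)                  # hidden ground-truth labels
correct = rng.random((3, n)) < accs[:, None]     # errors independent given y
s = np.where(correct, y, -y)                     # the only observed quantities

# With mu_i = 2 * acc_i - 1, conditional independence gives
# E[s_a s_b] = mu_a * mu_b, so mu_a = sqrt(C_ab * C_ac / C_bc)
# provided all classifiers beat chance.
C = (s @ s.T) / n
mu = np.sqrt([C[0, 1] * C[0, 2] / C[1, 2],
              C[0, 1] * C[1, 2] / C[0, 2],
              C[0, 2] * C[1, 2] / C[0, 1]])
print((1 - mu) / 2)   # estimated risks, close to [0.1, 0.2, 0.3]; no labels used
```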
Training on multiple distributions. One could also train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution. One of the authors has found this to be the case, for instance, in the context of automated speech recognition systems [7]. One could potentially combine this with any of the ideas above, and/or take an engineering approach of simply trying to develop design methodologies that consistently allow one to collect a representative set of training sets and from this build a model that consistently generalizes to novel distributions. Even for this engineering approach, it seems important to be able to detect when one is in a situation that was not covered by the training data and to respond appropriately, and to have methodologies for adequately stress-testing the model with distributions that are sufficiently different from the set of training distributions.
How to respond when out-of-distribution. The approaches described above focus on detecting when a model is unlikely to make good predictions on a new distribution. An important related question is what to do once the detection occurs. One natural approach would be to ask humans for information, though in the context of complex structured output tasks it may be unclear a priori what question to ask, and in time-critical situations asking for information may not be an option. For the former challenge, there has been some recent promising work on pinpointing aspects of a structure that a model is uncertain about [162, 81], as well as obtaining calibration in structured output settings [83], but we believe there is much work yet to be done. For the latter challenge, there is also relevant work based on reachability analysis [93, 100] and robust policy improvement [164], which provide potential methods for deploying conservative policies in situations of uncertainty; to our knowledge, this work has not yet been combined with methods for detecting out-of-distribution failures of a model.
Beyond the structured output setting, for agents that can act in an environment (such as RL agents), information about the reliability of percepts in uncertain situations seems to have great potential value. In sufficiently rich environments, these agents may have the option to gather information that clarifies the percept (e.g. if in a noisy environment, move closer to the speaker), engage in low-stakes experimentation when uncertainty is high (e.g. try a potentially dangerous chemical reaction in a controlled environment), or seek experiences that are likely to help expose the perception system to the relevant distribution (e.g. practice listening to accented speech). Humans utilize such information routinely, but to our knowledge current RL techniques make little effort to do so, perhaps because popular RL environments are typically not rich enough to require such subtle management of uncertainty. Properly responding to out-of-distribution information thus seems to the authors like an exciting and (as far as we are aware) mostly unexplored challenge for next generation RL systems.
A unifying view: counterfactual reasoning and machine learning with contracts. Some of the authors have found two viewpoints to be particularly helpful when thinking about problems related to out-of-distribution prediction. The first is counterfactual reasoning [106, 129, 117, 30], where one asks "what would have happened if the world were different in a certain way?" In some sense, distributional shift can be thought of as a particular type of counterfactual, and so understanding counterfactual reasoning is likely to help in making systems robust to distributional shift. We are excited by recent work applying counterfactual reasoning techniques to machine learning problems [30, 120, 151, 160, 77, 137], though there appears to be much work remaining to be done to scale these to high-dimensional and highly complex settings.
The second perspective is machine learning with contracts: in this perspective, one would like to construct machine learning systems that satisfy a well-defined contract on their behavior, in analogy with the design of software systems [135, 28, 89]. [135] enumerates a list of ways in which existing machine learning systems fail to do this, and the problems this can cause for deployment and maintenance of machine learning systems at scale. The simplest and to our mind most important failure is the extremely brittle implicit contract in most machine learning systems, namely that they only necessarily perform well if the training and test distributions are identical. This condition is difficult to check and rare in practice, and it would be valuable to build systems that perform well under weaker contracts that are easier to reason about. Partially specified models offer one approach to this: rather than requiring the distributions to be identical, we only need them to match on the pieces of the distribution that are specified in the model. Reachability analysis [93, 100] and model repair [58] provide other avenues for obtaining better contracts; in reachability analysis, we optimize performance subject to the condition that a safe region can always be reached by a known conservative policy, and in model repair we alter a trained model to ensure that certain desired safety properties hold.
Summary. There are a variety of approaches to building machine learning systems that robustly perform well when deployed on novel test distributions. One family of approaches is based on assuming a well-specified model; in this case, the primary obstacles are the difficulty of building well-specified models in practice, an incomplete picture of how to maintain uncertainty on novel distributions in the presence of finite training data, and the difficulty of detecting when a model is mis-specified. Another family of approaches only assumes a partially specified model; this approach is potentially promising, but it currently suffers from a lack of development in the context of machine learning, since most of the historical development has been by the field of econometrics; there is also a question of whether partially specified models are fundamentally constrained to simple situations and/or conservative predictions, or whether they can meaningfully scale to the complex situations demanded by modern machine learning applications. Finally, one could try to train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution; for this approach it seems particularly important to stress-test the learned model with distributions that are substantially different from any in the set of training distributions. In addition, it is probably still important to be able to predict when inputs are too novel to admit good predictions.
Potential Experiments: Speech systems frequently exhibit poor calibration when they go out-of-distribution, so a speech system that "knows when it is uncertain" could be one possible demonstration project. To be specific, the challenge could be: train a state-of-the-art speech system on a standard dataset [116] that gives well-calibrated results (if not necessarily good results) on a range of other test sets, like noisy and accented speech. Current systems not only perform poorly on these test sets when trained only on small datasets, but are usually overconfident in their incorrect transcriptions. Fixing this problem without harming performance on the original training set would be a valuable achievement, and would obviously have practical value. More generally, it would be valuable to design models that could consistently estimate (bounds on) their performance on novel test distributions. If a single methodology could consistently accomplish this for a wide variety of tasks (including not just speech but e.g. sentiment analysis [24], as well as benchmarks in computer vision [158]), that would inspire confidence in the reliability of that methodology for handling novel inputs. Note that estimating performance on novel distributions has additional practical value in allowing us to then potentially adapt the model to that new situation. Finally, it might also be valuable to create an environment where an RL agent must learn to interpret speech as part of some larger task, and to explore how to respond appropriately to its own estimates of its transcription error.
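One concrete way to score the "well-calibrated" requirement in such a challenge is expected calibration error; a minimal sketch of the metric (our own illustration, using a standard equal-width binning scheme):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Compare average confidence with empirical accuracy within confidence
    bins; a well-calibrated system has ECE near zero. Illustrative sketch."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap   # weight each bin by its mass
    return ece
```

An overconfident speech system would show large gaps in the high-confidence bins on noisy or accented test sets, which is precisely the failure mode described above.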
# 8 Related Efforts
As mentioned in the introduction, several other communities have thought broadly about the safety of AI systems, both within and outside of the machine learning community. Work within the machine learning community on accidents in particular was discussed in detail above, but here we very briefly highlight a few other communities doing work that is broadly related to the topic of AI safety.
⢠Cyber-Physical Systems Community: An existing community of researchers studies the security and safety of systems that interact with the physical world. Illustrative of this work is an impressive and successful eï¬ort to formally verify the entire federal aircraft collision avoidance system [75, 92]. Similar work includes traï¬c control algorithms [101] and many other topics. However, to date this work has not focused much on modern machine learning systems, where formal veriï¬cation is often not feasible.
⢠Futurist Community: A cross-disciplinary group of academics and non-proï¬ts has raised concern about the long term implications of AI [27, 167], particularly superintelligent AI. The Future of Humanity Institute has studied this issue particularly as it relates to future AI sys- tems learning or executing humanityâs preferences [48, 43, 14, 12]. The Machine Intelligence Research Institute has studied safety issues that may arise in very advanced AI [57, 56, 36, 154, 142], including a few mentioned above (e.g., wireheading, environmental embedding, counter- factual reasoning), albeit at a more philosophical level. To date, they have not focused much on applications to modern machine learning. By contrast, our focus is on the empirical study of practical safety problems in modern machine learning systems, which we believe is likely to be robustly useful across a broad variety of potential risks, both short- and long-term.
⢠Other Calls for Work on Safety: There have been other public documents within the research community pointing out the importance of work on AI safety. A 2015 Open Letter [8] signed by many members of the research community states the importance of âhow to reap [AIâs] beneï¬ts while avoiding the potential pitfalls.â [130] propose research priorities for
20
robust and beneï¬cial artiï¬cial intelligence, and includes several other topics in addition to a (briefer) discussion of AI-related accidents. [161], writing over 20 years ago, proposes that the community look for ways to formalize Asimovâs ï¬rst law of robotics (robots must not harm humans), and focuses mainly on classical planning. Finally, two of the authors of this paper have written informally about safety in AI systems [146, 34]; these postings provided inspiration for parts of the present document.
⢠Related Problems in Safety: A number of researchers in machine learning and other ï¬elds have begun to think about the social impacts of AI technologies. Aside from work directly on accidents (which we reviewed in the main document), there is also substantial work on other topics, many of which are closely related to or overlap with the issue of accidents. A thorough overview of all of this work is beyond the scope of this document, but we brieï¬y list a few emerging themes:
⢠Privacy: How can we ensure privacy when applying machine learning to sensitive data sources such as medical data? [76, 1]
⢠Fairness: How can we make sure ML systems donât discriminate? [3, 168, 6, 46, 119, 169]
Security: What can a malicious adversary do to a ML system? [149, 96, 97, 115, 108, 19] ⢠Abuse:5 How do we prevent the misuse of ML systems to attack or harm people? [16] ⢠Transparency: How can we understand what complicated ML systems are doing? [112,
166, 105, 109]
⢠Policy: How do we predict and respond to the economic and social consequences of ML? [32, 52, 15, 33]
We believe that research on these topics has both urgency and great promise, and that fruitful intersection is likely to exist between these topics and the topics we discuss in this paper.
# 9 Conclusion
This paper analyzed the problem of accidents in machine learning systems and particularly reinforcement learning agents, where an accident is defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We presented five possible research problems related to accident risk and for each we discussed possible approaches that are highly amenable to concrete experimental work.
With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems. The risk of larger accidents is more difficult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful. While many current-day safety problems can and have been handled with ad hoc fixes or case-by-case rules, we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a unified approach to prevent these systems from causing unintended harm.
5Note that "security" differs from "abuse" in that the former involves attacks against a legitimate ML system by an adversary (e.g. a criminal tries to fool a face recognition system), while the latter involves attacks by an ML system controlled by an adversary (e.g. a criminal trains a "smart hacker" system to break into a website).
# Acknowledgements
We thank Shane Legg, Peter Norvig, Ilya Sutskever, Greg Corrado, Laurent Orseau, David Krueger, Rif Saurous, David Andersen, and Victoria Krakovna for detailed feedback and suggestions. We would also like to thank Geoffrey Irving, Toby Ord, Quoc Le, Greg Wayne, Daniel Dewey, Nick Beckstead, Holden Karnofsky, Chelsea Finn, Marcello Herreshoff, Alex Donaldson, Jared Kaplan, Greg Brockman, Wojciech Zaremba, Ian Goodfellow, Dylan Hadfield-Menell, Jessica Taylor, Blaise Aguera y Arcas, David Berlekamp, Aaron Courville, and Jeff Dean for helpful discussions and comments. Paul Christiano was supported as part of the Future of Life Institute FLI-RFP-AI1 program, grant #2015-143898. In addition a minority of the work done by Paul Christiano was performed as a contractor for Theiss Research and at OpenAI. Finally, we thank the Google Brain team for providing a supportive environment and encouraging us to publish this work.
# References
[1] Martin Abadi et al. "Deep Learning with Differential Privacy". In: (in press (2016)).
[2] Pieter Abbeel and Andrew Y Ng. "Exploration and apprenticeship learning in reinforcement learning". In: Proceedings of the 22nd international conference on Machine learning. ACM. 2005, pp. 1–8.
[3] Julius Adebayo, Lalana Kagal, and Alex Pentland. The Hidden Cost of Efficiency: Fairness and Discrimination in Predictive Modeling. 2015.
[4] Alekh Agarwal et al. "Taming the monster: A fast and simple algorithm for contextual bandits". In: (2014).
[5] Hana Ajakan et al. "Domain-adversarial neural networks". In: arXiv preprint arXiv:1412.4446 (2014).
[6] Ifeoma Ajunwa et al. "Hiring by algorithm: predicting and preventing disparate impact". In: Available at SSRN 2746078 (2016).
[7] Dario Amodei et al. "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin". In: arXiv preprint arXiv:1512.02595 (2015).
[8] An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence. Open Letter. Signed by 8,600 people; see attached research agenda. 2015.
[9] Animashree Anandkumar, Daniel Hsu, and Sham M Kakade. âA method of moments for mixture models and hidden Markov modelsâ. In: arXiv preprint arXiv:1203.0683 (2012).
[10] Theodore W Anderson and Herman Rubin. âEstimation of the parameters of a single equation in a complete system of stochastic equationsâ. In: The Annals of Mathematical Statistics (1949), pp. 46â63.
[11] Theodore W Anderson and Herman Rubin. âThe asymptotic properties of estimates of the parameters of a single equation in a complete system of stochastic equationsâ. In: The Annals of Mathematical Statistics (1950), pp. 570â582.
[12] Stuart Armstrong. âMotivated value selection for artiï¬cial agentsâ. In: Workshops at the Twenty-Ninth AAAI Conference on Artiï¬cial Intelligence. 2015.
[13] Stuart Armstrong. The mathematics of reduced impact: help needed. 2012.
[14] Stuart Armstrong. Utility indifference. Tech. rep. Technical Report 2010-1. Oxford: Future of Humanity Institute, University of Oxford, 2010.
[15] Melanie Arntz, Terry Gregory, and Ulrich Zierahn. âThe Risk of Automation for Jobs in OECD Countriesâ. In: OECD Social, Employment and Migration Working Papers (2016). url: http://dx.doi.org/10.1787/5jlz9h56dvq7-en.
[16] Autonomous Weapons: An Open Letter from AI & Robotics Researchers. Open Letter. Signed by 20,000+ people. 2015.
[17] James Babcock, Janos Kramar, and Roman Yampolskiy. âThe AGI Containment Problemâ. In: The Ninth Conference on Artiï¬cial General Intelligence (2016).
[18] Krishnakumar Balasubramanian, Pinar Donmez, and Guy Lebanon. "Unsupervised supervised learning ii: Margin-based classification without labels". In: The Journal of Machine Learning Research 12 (2011), pp. 3119–3145.
[19] Marco Barreno et al. âThe security of machine learningâ. In: Machine Learning 81.2 (2010), pp. 121â148.
[20] Tamer Ba¸sar and Pierre Bernhard. H-inï¬nity optimal control and related minimax design problems: a dynamic game approach. Springer Science & Business Media, 2008.
[21] Mich`ele Basseville. âDetecting changes in signals and systemsâa surveyâ. In: Automatica 24.3 (1988), pp. 309â326.
[22] F Berkenkamp, A Krause, and Angela P Schoellig. âBayesian optimization with safety con- straints: safe and automatic parameter tuning in robotics.â arXiv, 2016â. In: arXiv preprint arXiv:1602.04450 ().
[23] Jon Bird and Paul Layzell. âThe evolved radio and its implications for modelling the evolution of novel sensorsâ. In: Evolutionary Computation, 2002. CECâ02. Proceedings of the 2002 Congress on. Vol. 2. IEEE. 2002, pp. 1836â1841.
[24] John Blitzer, Mark Dredze, Fernando Pereira, et al. âBiographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classiï¬cationâ. In: ACL. Vol. 7. 2007, pp. 440â 447.
[25] John Blitzer, Sham Kakade, and Dean P Foster. âDomain adaptation with coupled sub- spacesâ. In: International Conference on Artiï¬cial Intelligence and Statistics. 2011, pp. 173â 181.
[26] Charles Blundell et al. âWeight uncertainty in neural networksâ. In: arXiv preprint arXiv:1505.05424 (2015).
[27] Nick Bostrom. Superintelligence: Paths, dangers, strategies. OUP Oxford, 2014.
[28] Léon Bottou. "Two high stakes challenges in machine learning". Invited talk at the 32nd International Conference on Machine Learning. 2015.
[29] L´eon Bottou et al. âCounterfactual Reasoning and Learning Systemsâ. In: arXiv preprint arXiv:1209.2355 (2012).
[30] L´eon Bottou et al. âCounterfactual reasoning and learning systems: The example of compu- tational advertisingâ. In: The Journal of Machine Learning Research 14.1 (2013), pp. 3207â 3260.
[31] Ronen I Brafman and Moshe Tennenholtz. âR-max-a general polynomial time algorithm for near-optimal reinforcement learningâ. In: The Journal of Machine Learning Research 3 (2003), pp. 213â231.
[32] Erik Brynjolfsson and Andrew McAfee. The second machine age: work, progress, and pros- perity in a time of brilliant technologies. WW Norton & Company, 2014.
[33] Ryan Calo. "Open robotics". In: Maryland Law Review 70.3 (2011).
[34] Paul Christiano. AI Control. [Online; accessed 13-June-2016]. 2015. url: https://medium.com/ai-control.
[35] Fabio Cozman and Ira Cohen. âRisks of semi-supervised learningâ. In: Semi-Supervised Learn- ing (2006), pp. 56â72.
[36] Andrew Critch. âParametric Bounded L¨obâs Theorem and Robust Cooperation of Bounded Agentsâ. In: (2016).
[37] Christian Daniel et al. âActive reward learningâ. In: Proceedings of Robotics Science & Sys- tems. 2014.
[38] Ernest Davis. âEthical guidelines for a superintelligence.â In: Artif. Intell. 220 (2015), pp. 121â 124.
[39] Alexander Philip Dawid and Allan M Skene. âMaximum likelihood estimation of observer error-rates using the EM algorithmâ. In: Applied statistics (1979), pp. 20â28.
[40] Peter Dayan and Geoï¬rey E Hinton. âFeudal reinforcement learningâ. In: Advances in neural information processing systems. Morgan Kaufmann Publishers. 1993, pp. 271â271.
[41] Kalyanmoy Deb. âMulti-objective optimizationâ. In: Search methodologies. Springer, 2014, pp. 403â449.
[42] Daniel Dewey. âLearning what to valueâ. In: Artiï¬cial General Intelligence. Springer, 2011, pp. 309â314.
[43] Daniel Dewey. âReinforcement learning and the reward engineering principleâ. In: 2014 AAAI Spring Symposium Series. 2014.
[44] Pinar Donmez, Guy Lebanon, and Krishnakumar Balasubramanian. "Unsupervised supervised learning i: Estimating classification and regression errors without labels". In: The Journal of Machine Learning Research 11 (2010), pp. 1323–1351.
[45] Gregory Druck, Gideon Mann, and Andrew McCallum. "Learning from labeled features using generalized expectation criteria". In: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2008, pp. 595–602.
[46] Cynthia Dwork et al. "Fairness through awareness". In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ACM. 2012, pp. 214–226.
[47] Bradley Efron. âComputers and the theory of statistics: thinking the unthinkableâ. In: SIAM review 21.4 (1979), pp. 460â480.
[48] Owain Evans, Andreas Stuhlm¨uller, and Noah D Goodman. âLearning the preferences of ignorant, inconsistent agentsâ. In: arXiv preprint arXiv:1512.05832 (2015).
[49] Tom Everitt and Marcus Hutter. âAvoiding wireheading with value reinforcement learningâ. In: arXiv preprint arXiv:1605.03143 (2016).
[50] Tom Everitt et al. âSelf-Modiï¬cation of Policy and Utility Function in Rational Agentsâ. In: arXiv preprint arXiv:1605.03142 (2016).
[51] Chelsea Finn, Sergey Levine, and Pieter Abbeel. "Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization". In: arXiv preprint arXiv:1603.00448 (2016).
[52] Carl Benedikt Frey and Michael A Osborne. "The future of employment: how susceptible are jobs to computerisation". In: Retrieved September 7 (2013), p. 2013.
[53] Yarin Gal and Zoubin Ghahramani. âDropout as a Bayesian approximation: Representing model uncertainty in deep learningâ. In: arXiv preprint arXiv:1506.02142 (2015).
[54] Joao Gama et al. âLearning with drift detectionâ. In: Advances in artiï¬cial intelligenceâSBIA 2004. Springer, 2004, pp. 286â295.
[55] Javier Garc´ıa and Fernando Fern´andez. âA Comprehensive Survey on Safe Reinforcement Learningâ. In: Journal of Machine Learning Research 16 (2015), pp. 1437â1480.
[56] Scott Garrabrant, Nate Soares, and Jessica Taylor. âAsymptotic Convergence in Online Learning with Unbounded Delaysâ. In: arXiv preprint arXiv:1604.05280 (2016).
[57] Scott Garrabrant et al. "Uniform Coherence". In: arXiv preprint arXiv:1604.05288 (2016).
[58] Shalini Ghosh et al. "Trusted Machine Learning for Probabilistic Models". In: Reliable Machine Learning in the Wild at ICML 2016 (2016).
[59] Yolanda Gil et al. âAmplify scientiï¬c discovery with artiï¬cial intelligenceâ. In: Science 346.6206 (2014), pp. 171â172.
[60] Alec Go, Richa Bhayani, and Lei Huang. "Twitter sentiment classification using distant supervision". In: CS224N Project Report, Stanford 1 (2009), p. 12.
[61] Ian Goodfellow et al. "Generative adversarial nets". In: Advances in Neural Information Processing Systems. 2014, pp. 2672–2680.
[62] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". In: arXiv preprint arXiv:1412.6572 (2014).
[63] Charles AE Goodhart. Problems of monetary management: the UK experience. Springer, 1984.
[64] Alex Graves, Greg Wayne, and Ivo Danihelka. âNeural turing machinesâ. In: arXiv preprint arXiv:1410.5401 (2014).
[65] Sonal Gupta. âDistantly Supervised Information Extraction Using Bootstrapped Patternsâ. PhD thesis. Stanford University, 2015.
[66] Dylan Hadfield-Menell et al. Cooperative Inverse Reinforcement Learning. 2016.
[67] Dylan Hadfield-Menell et al. "The Off-Switch". In: (2016).
[68] Lars Peter Hansen. "Large sample properties of generalized method of moments estimators". In: Econometrica: Journal of the Econometric Society (1982), pp. 1029–1054.
[69] Lars Peter Hansen. âNobel Lecture: Uncertainty Outside and Inside Economic Modelsâ. In: Journal of Political Economy 122.5 (2014), pp. 945â987.
[70] Mark Herbster and Manfred K Warmuth. âTracking the best linear predictorâ. In: The Jour- nal of Machine Learning Research 1 (2001), pp. 281â309.
[71] Bill Hibbard. âModel-based utility functionsâ. In: Journal of Artiï¬cial General Intelligence 3.1 (2012), pp. 1â24.
[72] Thomas Hofmann, Bernhard Sch¨olkopf, and Alexander J Smola. âKernel methods in machine learningâ. In: The annals of statistics (2008), pp. 1171â1220.
[73] Garud N Iyengar. âRobust dynamic programmingâ. In: Mathematics of Operations Research 30.2 (2005), pp. 257â280.
[74] Ariel Jaffe, Boaz Nadler, and Yuval Kluger. "Estimating the accuracies of multiple classifiers without labeled data". In: arXiv preprint arXiv:1407.7644 (2014).
[75] Jean-Baptiste Jeannin et al. âA formally veriï¬ed hybrid system for the next-generation air- borne collision avoidance systemâ. In: Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2015, pp. 21â36.
[76] Zhanglong Ji, Zachary C Lipton, and Charles Elkan. âDiï¬erential privacy and machine learn- ing: A survey and reviewâ. In: arXiv preprint arXiv:1412.7584 (2014).
[77] Fredrik D Johansson, Uri Shalit, and David Sontag. âLearning Representations for Counter- factual Inferenceâ. In: arXiv preprint arXiv:1605.03661 (2016).
[78] Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. "Planning and acting in partially observable stochastic domains". In: Artificial Intelligence 101.1 (1998), pp. 99–134.
[79] Lukasz Kaiser and Ilya Sutskever. "Neural GPUs learn algorithms". In: arXiv preprint arXiv:1511.08228 (2015).
[80] Yoshinobu Kawahara and Masashi Sugiyama. âChange-Point Detection in Time-Series Data by Direct Density-Ratio Estimation.â In: SDM. Vol. 9. SIAM. 2009, pp. 389â400.
[81] F. Khani, M. Rinard, and P. Liang. "Unanimous Prediction for 100% Precision with Application to Learning Semantic Parsers". In: Association for Computational Linguistics (ACL). 2016.
[82] Alex Krizhevsky, Ilya Sutskever, and Geoï¬rey E Hinton. âImagenet classiï¬cation with deep convolutional neural networksâ. In: Advances in neural information processing systems. 2012, pp. 1097â1105.
[83] Volodymyr Kuleshov and Percy S Liang. âCalibrated Structured Predictionâ. In: Advances in Neural Information Processing Systems. 2015, pp. 3456â3464.
[84] Tejas D Kulkarni et al. âHierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivationâ. In: arXiv preprint arXiv:1604.06057 (2016).
[85] Neil Lawrence. Discussion of âSuperintelligence: Paths, Dangers, Strategiesâ. 2016. [86] Jesse Levinson et al. âTowards fully autonomous driving: Systems and algorithmsâ. In: In-
telligent Vehicles Symposium (IV), 2011 IEEE. IEEE. 2011, pp. 163â168.
[87] Lihong Li et al. âKnows what it knows: a framework for self-aware learningâ. In: Machine learning 82.3 (2011), pp. 399â443.
[88] Yu-Feng Li and Zhi-Hua Zhou. "Towards making unlabeled data never hurt". In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 37.1 (2015), pp. 175–188.
[89] Percy Liang. "On the Elusiveness of a Specification for AI". NIPS 2015, Symposium: Algorithms Among Us. 2015. url: http://research.microsoft.com/apps/video/default.aspx?id=260009&r=1.
[90] Percy Liang and Dan Klein. âAnalyzing the Errors of Unsupervised Learning.â In: ACL. 2008, pp. 879â887.
[91] Song Liu et al. âChange-point detection in time-series data by relative density-ratio estima- tionâ. In: Neural Networks 43 (2013), pp. 72â83.
[92] Sarah M Loos, David Renshaw, and Andr´e Platzer. âFormal veriï¬cation of distributed air- craft controllersâ. In: Proceedings of the 16th international conference on Hybrid systems: computation and control. ACM. 2013, pp. 125â130.
[93] John Lygeros, Claire Tomlin, and Shankar Sastry. âControllers for reachability speciï¬cations for hybrid systemsâ. In: Automatica 35.3 (1999), pp. 349â370.
[94] Gideon S Mann and Andrew McCallum. âGeneralized expectation criteria for semi-supervised learning with weakly labeled dataâ. In: The Journal of Machine Learning Research 11 (2010), pp. 955â984.
[95] John McCarthy and Patrick J Hayes. âSome philosophical problems from the standpoint of artiï¬cial intelligenceâ. In: Readings in artiï¬cial intelligence (1969), pp. 431â450.
[96] Shike Mei and Xiaojin Zhu. âThe Security of Latent Dirichlet Allocation.â In: AISTATS. 2015.
[97] Shike Mei and Xiaojin Zhu. âUsing Machine Teaching to Identify Optimal Training-Set At- tacks on Machine Learners.â In: AAAI. 2015, pp. 2871â2877.
[98] Bernard Merialdo. âTagging English text with a probabilistic modelâ. In: Computational linguistics 20.2 (1994), pp. 155â171.
[99] Mike Mintz et al. "Distant supervision for relation extraction without labeled data". In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Association for Computational Linguistics. 2009, pp. 1003–1011.
[100] Ian M Mitchell, Alexandre M Bayen, and Claire J Tomlin. "A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games". In: Automatic Control, IEEE Transactions on 50.7 (2005), pp. 947–957.
[101] Stefan Mitsch, Sarah M Loos, and Andr´e Platzer. âTowards formal veriï¬cation of freeway traï¬c controlâ. In: Cyber-Physical Systems (ICCPS), 2012 IEEE/ACM Third International Conference on. IEEE. 2012, pp. 171â180.
[102] Volodymyr Mnih et al. âHuman-level control through deep reinforcement learningâ. In: Nature 518.7540 (2015), pp. 529â533.
[103] Shakir Mohamed and Danilo Jimenez Rezende. âVariational Information Maximisation for Intrinsically Motivated Reinforcement Learningâ. In: Advances in Neural Information Pro- cessing Systems. 2015, pp. 2116â2124.
[104] Teodor Mihai Moldovan and Pieter Abbeel. âSafe exploration in markov decision processesâ. In: arXiv preprint arXiv:1205.4810 (2012).
[105] Alexander Mordvintsev, Christopher Olah, and Mike Tyka. âInceptionism: Going deeper into neural networksâ. In: Google Research Blog. Retrieved June 20 (2015).
[106] Jersey Neyman. âSur les applications de la th´eorie des probabilit´es aux experiences agricoles: Essai des principesâ. In: Roczniki Nauk Rolniczych 10 (1923), pp. 1â51.
[107] Andrew Y Ng, Stuart J Russell, et al. âAlgorithms for inverse reinforcement learning.â In: Icml. 2000, pp. 663â670.
[108] Anh Nguyen, Jason Yosinski, and Jeï¬ Clune. âDeep neural networks are easily fooled: High conï¬dence predictions for unrecognizable imagesâ. In: Computer Vision and Pattern Recog- nition (CVPR), 2015 IEEE Conference on. IEEE. 2015, pp. 427â436.
[109] Anh Nguyen et al. âSynthesizing the preferred inputs for neurons in neural networks via deep generator networksâ. In: arXiv preprint arXiv:1605.09304 (2016).
[110] Kamal Nigam et al. âLearning to classify text from labeled and unlabeled documentsâ. In: AAAI/IAAI 792 (1998).
[111] Arnab Nilim and Laurent El Ghaoui. âRobust control of Markov decision processes with uncertain transition matricesâ. In: Operations Research 53.5 (2005), pp. 780â798.
[112] Christopher Olah. Visualizing Representations: Deep Learning and Human Beings. 2015. url: http://colah.github.io/posts/2015-01-Visualizing-Representations/.
[113] Laurent Orseau and Stuart Armstrong. "Safely Interruptible Agents". In: (2016).
[114] Ian Osband et al. "Deep Exploration via Bootstrapped DQN". In: arXiv preprint arXiv:1602.04621 (2016).
[115] Nicolas Papernot et al. âPractical Black-Box Attacks against Deep Learning Systems using Adversarial Examplesâ. In: arXiv preprint arXiv:1602.02697 (2016).
[116] Douglas B Paul and Janet M Baker. âThe design for the Wall Street Journal-based CSR corpusâ. In: Proceedings of the workshop on Speech and Natural Language. Association for Computational Linguistics. 1992, pp. 357â362.
[117] Judea Pearl et al. âCausal inference in statistics: An overviewâ. In: Statistics Surveys 3 (2009), pp. 96â146.
[118] Martin Pecka and Tomas Svoboda. âSafe exploration techniques for reinforcement learningâan overviewâ. In: Modelling and Simulation for Autonomous Systems. Springer, 2014, pp. 357â 375.
[119] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. âDiscrimination-aware data miningâ. In: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM. 2008, pp. 560â568.
[120] Jonas Peters et al. âCausal discovery with continuous additive noise modelsâ. In: The Journal of Machine Learning Research 15.1 (2014), pp. 2009â2053.
[121] Emmanouil Antonios Platanios. âEstimating accuracy from unlabeled dataâ. MA thesis. Carnegie Mellon University, 2015.
[122] Emmanouil Antonios Platanios, Avrim Blum, and Tom Mitchell. âEstimating accuracy from unlabeled dataâ. In: (2014).
[123] Walter W Powell and Laurel Smith-Doerr. âNetworks and economic lifeâ. In: The handbook of economic sociology 368 (1994), p. 380.
[124] Joaquin Quinonero-Candela et al. Dataset shift in machine learning, ser. Neural information processing series. 2009.
[125] Rajat Raina et al. âSelf-taught learning: transfer learning from unlabeled dataâ. In: Proceed- ings of the 24th international conference on Machine learning. ACM. 2007, pp. 759â766.
[126] Bharath Ramsundar et al. âMassively multitask networks for drug discoveryâ. In: arXiv preprint arXiv:1502.02072 (2015).
[127] Mark Ring and Laurent Orseau. âDelusion, survival, and intelligent agentsâ. In: Artiï¬cial General Intelligence. Springer, 2011, pp. 11â20.
[128] St´ephane Ross, Geoï¬rey J Gordon, and J Andrew Bagnell. âA reduction of imitation learning and structured prediction to no-regret online learningâ. In: arXiv preprint arXiv:1011.0686 (2010).
[129] Donald B Rubin. âEstimating causal eï¬ects of treatments in randomized and nonrandomized studies.â In: Journal of educational Psychology 66.5 (1974), p. 688.
[130] Stuart Russell et al. âResearch priorities for robust and beneï¬cial artiï¬cial intelligenceâ. In: Future of Life Institute (2015).
[131] Christoph Salge, Cornelius Glackin, and Daniel Polani. âEmpowermentâan introductionâ. In: Guided Self-Organization: Inception. Springer, 2014, pp. 67â114.
[132] J Denis Sargan. âThe estimation of relationships with autocorrelated residuals by the use of instrumental variablesâ. In: Journal of the Royal Statistical Society. Series B (Methodological) (1959), pp. 91â105.
[133] John D Sargan. âThe estimation of economic relationships using instrumental variablesâ. In: Econometrica: Journal of the Econometric Society (1958), pp. 393â415.
[134] John Schulman et al. âHigh-dimensional continuous control using generalized advantage es- timationâ. In: arXiv preprint arXiv:1506.02438 (2015).
[135] D Sculley et al. âMachine Learning: The High-Interest Credit Card of Technical Debtâ. In: (2014).
[136] Glenn Shafer and Vladimir Vovk. âA tutorial on conformal predictionâ. In: The Journal of Machine Learning Research 9 (2008), pp. 371â421.
[137] Uri Shalit, Fredrik Johansson, and David Sontag. âBounding and Minimizing Counterfactual Errorâ. In: arXiv preprint arXiv:1606.03976 (2016).
[138] Hidetoshi Shimodaira. âImproving predictive inference under covariate shift by weighting the log-likelihood functionâ. In: Journal of statistical planning and inference 90.2 (2000), pp. 227â 244.
[139] Jaeho Shin et al. âIncremental knowledge base construction using deepdiveâ. In: Proceedings of the VLDB Endowment 8.11 (2015), pp. 1310â1321.
[140] David Silver et al. âMastering the game of Go with deep neural networks and tree searchâ. In: Nature 529.7587 (2016), pp. 484â489.
[141] SNES Super Mario World (USA) âarbitrary code executionâ. Tool-assisted movies. 2014. url: http://tasvideos.org/2513M.html.
[142] Nate Soares and Benja Fallenstein. âToward idealized decision theoryâ. In: arXiv preprint arXiv:1507.01986 (2015).
[143] Ray J Solomonoff. "A formal theory of inductive inference. Part I". In: Information and Control 7.1 (1964), pp. 1–22.
[144] Ray J Solomonoff. "A formal theory of inductive inference. Part II". In: Information and Control 7.2 (1964), pp. 224–254.
[145] J Steinebach. âEL Lehmann, JP Romano: Testing statistical hypothesesâ. In: Metrika 64.2 (2006), pp. 255â256.
[146] Jacob Steinhardt. Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems. [Online; accessed 13-June-2016]. 2015. url: https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems/.
[147] Jacob Steinhardt and Percy Liang. "Unsupervised Risk Estimation with only Structural Assumptions". In: (2016).
[148] Jacob Steinhardt and Russ Tedrake. "Finite-time regional verification of stochastic non-linear systems". In: The International Journal of Robotics Research 31.7 (2012), pp. 901–923.
[149] Jacob Steinhardt, Gregory Valiant, and Moses Charikar. "Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction". In: arXiv preprint arXiv:1606.05374 (2016). url: http://arxiv.org/abs/1606.05374.
[150] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1998.
[151] Adith Swaminathan and Thorsten Joachims. âCounterfactual risk minimization: Learning from logged bandit feedbackâ. In: arXiv preprint arXiv:1502.02362 (2015).
[152] Christian Szegedy et al. âIntriguing properties of neural networksâ. In: arXiv preprint arXiv:1312.6199 (2013).
[153] Aviv Tamar, Yonatan Glassner, and Shie Mannor. âPolicy gradients beyond expectations: Conditional value-at-riskâ. In: arXiv preprint arXiv:1404.3862 (2014).
[154] Jessica Taylor. âQuantilizers: A Safer Alternative to Maximizers for Limited Optimizationâ. In: forthcoming). Submitted to AAAI (2016).
[155] Matthew E Taylor and Peter Stone. "Transfer learning for reinforcement learning domains: A survey". In: Journal of Machine Learning Research 10.Jul (2009), pp. 1633–1685.
[156] Philip S Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. "High-Confidence Off-Policy Evaluation". In: AAAI. 2015, pp. 3000–3006.
[157] Adrian Thompson. Artiï¬cial evolution in the physical world. 1997.
[158] Antonio Torralba and Alexei A Efros. âUnbiased look at dataset biasâ. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE. 2011, pp. 1521â1528.
[159] Matteo Turchetta, Felix Berkenkamp, and Andreas Krause. âSafe Exploration in Finite Markov Decision Processes with Gaussian Processesâ. In: arXiv preprint arXiv:1606.04753 (2016).
[160] Stefan Wager and Susan Athey. âEstimation and Inference of Heterogeneous Treatment Ef- fects using Random Forestsâ. In: arXiv preprint arXiv:1510.04342 (2015).
[161] Daniel Weld and Oren Etzioni. âThe ï¬rst law of robotics (a call to arms)â. In: AAAI. Vol. 94. 1994. 1994, pp. 1042â1047.
[162] Keenon Werling et al. âOn-the-job learning with bayesian decision theoryâ. In: Advances in Neural Information Processing Systems. 2015, pp. 3447â3455.
[163] Jason Weston et al. âTowards ai-complete question answering: A set of prerequisite toy tasksâ. In: arXiv preprint arXiv:1502.05698 (2015).
[164] Wolfram Wiesemann, Daniel Kuhn, and Ber¸c Rustem. âRobust Markov decision processesâ. In: Mathematics of Operations Research 38.1 (2013), pp. 153â183.
[165] Roman V Yampolskiy. âUtility function security in artiï¬cially intelligent agentsâ. In: Journal of Experimental & Theoretical Artiï¬cial Intelligence 26.3 (2014), pp. 373â389.
[166] Jason Yosinski et al. âUnderstanding neural networks through deep visualizationâ. In: arXiv preprint arXiv:1506.06579 (2015).
[167] Eliezer Yudkowsky. âArtiï¬cial intelligence as a positive and negative factor in global riskâ. In: Global catastrophic risks 1 (2008), p. 303.
[168] Muhammad Bilal Zafar et al. âLearning Fair Classiï¬ersâ. In: stat 1050 (2015), p. 29. [169] Richard S Zemel et al. âLearning Fair Representations.â In: ICML (3) 28 (2013), pp. 325â333. [170] Yuchen Zhang et al. âSpectral methods meet EM: A provably optimal algorithm for crowd- sourcingâ. In: Advances in neural information processing systems. 2014, pp. 1260â1268.
DOREFA-NET: TRAINING LOW BITWIDTH CONVOLUTIONAL NEURAL NETWORKS WITH LOW BITWIDTH GRADIENTS
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou Megvii Inc. {zsc, wyx, nzk, zxy, wenhe, zouyuheng}@megvii.com
# ABSTRACT
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
# 1 INTRODUCTION
Recent progress in deep Convolutional Neural Networks (DCNN) has considerably changed the landscape of computer vision (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a) and NLP (Bahdanau et al., 2014).
However, a state-of-the-art DCNN usually has a lot of parameters and high computational complexity, which both impedes its application in embedded devices and slows down the iteration of its research and development.
For example, the training process of a DCNN may take up to weeks on a modern multi-GPU server for large datasets like ImageNet (Deng et al., 2009). In light of this, substantial research efforts are invested in speeding up DCNNs at both run-time and training-time, on both general-purpose (Vanhoucke et al., 2011; Gong et al., 2014; Han et al., 2015b) and specialized computer hardware (Farabet et al., 2011; Pham et al., 2012; Chen et al., 2014a;b). Various approaches like quantization (Wu et al., 2015) and sparsification (Han et al., 2015a) have also been proposed.
Recent research efforts (Courbariaux et al., 2014; Kim & Smaragdis, 2016; Rastegari et al., 2016; Merolla et al., 2016) have considerably reduced both model size and computation complexity by using low bitwidth weights and low bitwidth activations. In particular, in BNN (Courbariaux & Bengio, 2016) and XNOR-Net (Rastegari et al., 2016), both weights and input activations of convolutional layers1 are binarized. Hence during the forward pass the most computationally expensive convolutions can be done by bitwise operation kernels, thanks to the following formula which computes the dot product of two bit vectors x and y using bitwise operations, where bitcount counts the number of bits in a bit vector:
x · y = bitcount(and(x, y)),   x_i, y_i ∈ {0, 1} ∀i.   (1)
1Note fully-connected layers are special cases of convolutional layers.
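As a toy check of Eqn. 1 (our own illustration; a production kernel would operate on packed machine words rather than Python integers), one can pack each bit vector into an integer and take a population count of the AND:

```python
import numpy as np

def pack_bits(bits):
    """Pack a {0, 1} vector into a single Python integer, bit i at position i."""
    value = 0
    for i, b in enumerate(bits):
        value |= int(b) << i
    return value

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=64)
y = rng.integers(0, 2, size=64)

# Eqn. 1: x . y = bitcount(and(x, y))
dot_bitwise = bin(pack_bits(x) & pack_bits(y)).count("1")
assert dot_bitwise == int(np.dot(x, y))
```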
However, to the best of our knowledge, no previous work has succeeded in quantizing gradients to numbers with bitwidth less than 8 during the backward pass, while still achieving comparable prediction accuracy. In some previous research (Gupta et al., 2015; Courbariaux et al., 2014), convolutions involve at least 10-bit numbers. In BNN and XNOR-Net, though weights are binarized, gradients are in full precision, therefore the backward pass still requires convolution between 1-bit numbers and 32-bit floating-point numbers. The inability to exploit bit convolution during the backward pass means that most training time of BNN and XNOR-Net will be spent in the backward pass.
This paper makes the following contributions:
1. We generalize the method of binarized neural networks to allow creating DoReFa-Net, a CNN that has arbitrary bitwidth in weights, activations, and gradients. As convolutions during forward/backward passes can then operate on low bit weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both the forward pass and the backward pass of the training process.

2. As bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate low bitwidth neural network training on these hardware. In particular, with the power efficiency of FPGA and ASIC, we may considerably reduce energy consumption of low bitwidth neural network training.

3. We explore the configuration space of bitwidth for weights, activations and gradients for DoReFa-Net. E.g., training a network using 1-bit weights, 1-bit activations and 2-bit gradients can lead to 93% accuracy on SVHN dataset. In our experiments, gradients in general require larger bitwidth than activations, and activations in general require larger bitwidth than weights, to lessen the degradation of prediction accuracy compared to 32-bit precision counterparts. We name our method "DoReFa-Net" to take note of these phenomena.
4. We release in TensorFlow (Abadi et al.) format a DoReFa-Net 3 derived from AlexNet (Krizhevsky et al., 2012) that gets 46.1% in single-crop top-1 accuracy on ILSVRC12 validation set. A reference implementation for training of a DoReFa-net on SVHN dataset is also available.
# 2 DOREFA-NET
In this section we detail our formulation of DoReFa-Net, a method to train neural networks that have low bitwidth weights and activations, using low bitwidth parameter gradients. We note that while weights and activations can be deterministically quantized, gradients need to be stochastically quantized.

We first outline how to exploit bit convolution kernels in DoReFa-Net and then elaborate the methods to quantize weights, activations and gradients to low bitwidth numbers.
2.1 USING BIT CONVOLUTION KERNELS IN LOW BITWIDTH NEURAL NETWORK
The 1-bit dot product kernel specified in Eqn. 1 can also be used to compute dot products, and consequently convolutions, for low bitwidth fixed-point integers. Assume x is a sequence of M-bit fixed-point integers s.t. x = Σ_{m=0}^{M−1} c_m(x) 2^m and y is a sequence of K-bit fixed-point integers s.t. y = Σ_{k=0}^{K−1} c_k(y) 2^k, where c_m(x) and c_k(y) are bit vectors. The dot product of x and y can be computed by bitwise operations as:

x · y = Σ_{m=0}^{M−1} Σ_{k=0}^{K−1} 2^{m+k} bitcount[and(c_m(x), c_k(y))],   (3)

c_m(x)_i, c_k(y)_i ∈ {0, 1} ∀i, m, k.   (4)

2When x and y are vectors of {−1, 1}, Eqn. 1 has a variant that uses xnor instead:

x · y = N − 2 × bitcount(xnor(x, y)),   x_i, y_i ∈ {−1, 1} ∀i.   (2)

3The model and supplementary materials are available at https://github.com/ppwwyyxx/tensorpack/tree/master/examples/DoReFa-Net
In the above equation, the computation complexity is O(M K), i.e., directly proportional to the product of the bitwidths of x and y.
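To make Eqn. 3 concrete, here is a minimal Python sketch of the fixed-point dot product built on the 1-bit kernel; the function name bit_dot and the bit-plane packing (each c_m(x) stored as a Python integer bitmask) are our own illustration, not the released kernel implementation.

```python
def bit_dot(x_planes, y_planes):
    """Dot product of an M-bit and a K-bit fixed-point vector via Eqn. 3.

    x_planes[m] is the bit plane c_m(x), packed into a Python int so that
    bit i of the mask is the m-th bit of x_i; likewise for y_planes[k].
    bitcount(and(a, b)) is then bin(a & b).count('1').
    """
    total = 0
    for m, cm in enumerate(x_planes):
        for k, ck in enumerate(y_planes):
            total += (1 << (m + k)) * bin(cm & ck).count('1')
    return total
```

For example, x = (2, 3, 1) has bit planes c_0(x) = 0b110 and c_1(x) = 0b011, and y = (1, 0, 1) has c_0(y) = 0b101; bit_dot([0b110, 0b011], [0b101]) returns 3 = 2·1 + 3·0 + 1·1. The double loop makes the O(MK) complexity explicit.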
2.2 STRAIGHT-THROUGH ESTIMATOR
The set of real numbers representable by a low bitwidth number k only has a small cardinality 2^k. However, mathematically any continuous function whose range is a small finite set would necessarily have zero gradient (almost everywhere) with respect to its input. We adopt the "straight-through estimator" (STE) method (Hinton et al., 2012b; Bengio et al., 2013) to circumvent this problem. An STE can be thought of as an operator that has arbitrary forward and backward operations.
A simple example is the STE defined for Bernoulli sampling with probability p ∈ [0, 1]:
Forward: q ∼ Bernoulli(p)

Backward: ∂c/∂p = ∂c/∂q.

Here c denotes the objective function. As sampling from a Bernoulli distribution is not a differentiable function, ∂q/∂p is not well defined, hence the backward pass cannot be directly constructed from the forward pass using the chain rule. Nevertheless, because q is in expectation equal to p, we may use the well-defined gradient ∂c/∂q as an approximation for ∂c/∂p and construct an STE as above. In other words, the STE construction gives a custom-defined ∂q/∂p.
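The stop-gradient identity trick is one common way to realize such an STE in practice; below is a minimal TensorFlow 1.x-style sketch (the helper name bernoulli_ste is ours, not from the released code):

```python
import tensorflow as tf

def bernoulli_ste(p):
    """Bernoulli sampling with a straight-through gradient.

    Forward: returns a binary sample q ~ Bernoulli(p), since p + (q - p) = q.
    Backward: tf.stop_gradient blocks the (q - p) term, so dc/dp = dc/dq.
    """
    q = tf.cast(tf.random_uniform(tf.shape(p)) < p, p.dtype)
    return p + tf.stop_gradient(q - p)
```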
An STE we will use extensively in this work is quantize_k, which quantizes a real number input r_i ∈ [0, 1] to a k-bit number output r_o ∈ [0, 1]. This STE is defined as below:
Forward: r_o = round((2^k − 1) r_i) / (2^k − 1)   (5)

Backward: ∂c/∂r_i = ∂c/∂r_o.   (6)
It is obvious by construction that the output r_o of the quantize_k STE is a real number representable by k bits. Also, since (2^k − 1) r_o is a k-bit fixed-point integer, the dot product of two sequences of such k-bit real numbers can be efficiently calculated by using the fixed-point integer dot product in Eqn. 3 followed by proper scaling.
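The same stop-gradient trick yields quantize_k; the sketch below is our own rendering of Eqns. 5-6, not the released implementation.

```python
import tensorflow as tf

def quantize_k(ri, k):
    """k-bit uniform quantization with a straight-through gradient (Eqns. 5-6).

    Forward: ro = round((2^k - 1) * ri) / (2^k - 1), for ri in [0, 1].
    Backward: the gradient passes through unchanged (identity), because
    tf.stop_gradient removes (ro - ri) from the differentiated graph.
    """
    n = float(2 ** k - 1)
    ro = tf.round(ri * n) / n
    return ri + tf.stop_gradient(ro - ri)
```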
# 2.3 LOW BITWIDTH QUANTIZATION OF WEIGHTS
In this section we detail our approach to getting low bitwidth weights.
In previous works, STE has been used to binarize the weights. For example in BNN, weights are binarized by the following STE:
Forward: r_o = sign(r_i)

Backward: ∂c/∂r_i = ∂c/∂r_o.
Here sign(r_i) = 2 I_{r_i ≥ 0} − 1 returns one of two possible values: {−1, 1}. In XNOR-Net, weights are binarized by the following STE, with the difference being that weights are scaled after binarization:
Forward: r_o = sign(r_i) × E_F(|r_i|)

Backward: ∂c/∂r_i = ∂c/∂r_o.
In XNOR-Net, the scaling factor E_F(|r_i|) is the mean of the absolute value of each output channel of weights. The rationale is that introducing this scaling factor will increase the value range of weights, while still being able to exploit bit convolution kernels. However, the channel-wise scaling factors make it impossible to exploit bit convolution kernels when computing the convolution between gradients and weights during back-propagation. Hence, in our experiments, we use a constant scalar to scale all filters instead of doing channel-wise scaling. We use the following STE for all neural networks that have binary weights in this paper:
Forward: r_o = sign(r_i) × E(|r_i|)   (7)

Backward: ∂c/∂r_i = ∂c/∂r_o.   (8)
In case we use a k-bit representation of the weights with k > 1, we apply the STE f^k_ω to the weights as follows:
Forward: r_o = f^k_ω(r_i) = 2 quantize_k( tanh(r_i) / (2 max(|tanh(r_i)|)) + 1/2 ) − 1.   (9)

Backward: ∂c/∂r_i = (∂r_o/∂r_i)(∂c/∂r_o).4   (10)
Note here we use tanh to limit the value range of the weights to [−1, 1] before quantizing to k-bit. By construction, tanh(r_i)/(2 max(|tanh(r_i)|)) + 1/2 is a number in [0, 1], where the maximum is taken over all weights in that layer. quantize_k then quantizes this number to a k-bit fixed-point number ranging in [0, 1]. Finally, an affine transform brings the range of f^k_ω(r_i) to [−1, 1].
Note that when k = 1, Eqn. 9 is different from Eqn. 7, providing a different way of binarizing weights. Nevertheless, we find this difference insignificant in experiments.
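A sketch of the weight quantizer, assuming the quantize_k helper from above; note that tf.sign maps 0 to 0 rather than to 1 as in the definition of sign given earlier, a corner case we gloss over here.

```python
import tensorflow as tf

def quantize_weights(w, k):
    """k-bit weight quantization f^k_omega (Eqn. 9); Eqn. 7 when k = 1."""
    if k == 1:
        # Scaled sign binarization (Eqn. 7): a single scalar E(|w|) scales
        # all filters, with an identity (STE) backward pass.
        e = tf.reduce_mean(tf.abs(w))
        wb = tf.sign(w) * e
        return w + tf.stop_gradient(wb - w)
    t = tf.tanh(w)
    t = t / (2.0 * tf.reduce_max(tf.abs(t))) + 0.5  # map into [0, 1]
    return 2.0 * quantize_k(t, k) - 1.0             # affine back to [-1, 1]
```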
2.4 LOW BITWIDTH QUANTIZATION OF ACTIVATIONS
Next we detail our approach to getting low bitwidth activations that are input to convolutions, which is of critical importance in replacing floating-point convolutions by less computation-intensive bit convolutions.
In BNN and XNOR-Net, activations are binarized in the same way as weights. However, we failed to reproduce the results of XNOR-Net when we followed their method of binarizing activations, and the binarizing approach in BNN is claimed by (Rastegari et al., 2016) to cause severe prediction accuracy degradation when applied to ImageNet models like AlexNet. Hence instead, we apply an STE on the input activations r of each weight layer. Here we assume the output of the previous layer has passed through a bounded activation function h, which ensures r ∈ [0, 1]. In DoReFa-Net, quantization of activations r to k-bit is simply:
f^k_α(r) = quantize_k(r).   (11)
2.5 LOW BITWIDTH QUANTIZATION OF GRADIENTS
We have demonstrated deterministic quantization to produce low bitwidth weights and activations. However, we find stochastic quantization is necessary for low bitwidth gradients to be effective. This is in agreement with the experiments of (Gupta et al., 2015) on 16-bit weights and 16-bit gradients.
To quantize gradients to low bitwidth, it is important to note that gradients are unbounded and may have a significantly larger value range than activations. Recall that in Eqn. 11 we can map the range of activations to [0, 1] by passing values through differentiable nonlinear functions. However, this kind of construction does not exist for gradients. Therefore we designed the following function for k-bit quantization of gradients:
f̃^k_γ(dr) = 2 max_0(|dr|) [ quantize_k( dr / (2 max_0(|dr|)) + 1/2 ) − 1/2 ]
4Here ∂r_o/∂r_i is well-defined because we already defined quantize_k as an STE.
Here dr = ∂c/∂r is the back-propagated gradient of the output r of some layer, and the maximum is taken over all axes of the gradient tensor dr except for the mini-batch axis (therefore each instance in a mini-batch will have its own scaling factor). The above function first applies an affine transform on the gradient to map it into [0, 1], and then inverts the transform after quantization.
To further compensate for the potential bias introduced by gradient quantization, we introduce an extra noise function N(k) = σ/(2^k − 1) where σ ∼ Uniform(−0.5, 0.5).5 The noise therefore has the same magnitude as the possible quantization error. We find the artificial noise to be critical for achieving good performance. Finally, the expression we use to quantize gradients to k-bit numbers is as follows:
f^k_γ(dr) = 2 max_0(|dr|) [ quantize_k( dr / (2 max_0(|dr|)) + 1/2 + N(k) ) − 1/2 ].   (12)
The quantization of gradients is done in the backward pass only. Hence we apply the following STE on the output of each convolution layer:
Forward: r_o = r_i   (13)

Backward: ∂c/∂r_i = f^k_γ(∂c/∂r_o).   (14)
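A sketch of f^k_γ, under the assumption of 4-D NHWC gradient tensors; wiring it into the backward pass of Eqns. 13-14 additionally requires a custom-gradient identity op, which we omit here.

```python
import tensorflow as tf

def quantize_grad(dr, k):
    """Stochastic k-bit gradient quantization f^k_gamma (Eqn. 12)."""
    n = float(2 ** k - 1)
    # Per-instance scale: max |dr| over all axes except the mini-batch axis
    # (assumed here to be axes 1..3 of a 4-D NHWC tensor).
    m = 2.0 * tf.reduce_max(tf.abs(dr), axis=[1, 2, 3], keepdims=True)
    x = dr / m + 0.5
    # Noise N(k) with the same magnitude as the quantization error.
    noise = (tf.random_uniform(tf.shape(dr)) - 0.5) / n
    q = tf.round((x + noise) * n) / n  # forward pass of quantize_k
    return m * (q - 0.5)
```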
Algorithm 1 Training an L-layer DoReFa-Net with W-bit weights and A-bit activations using G-bit gradients. Weights, activations and gradients are quantized according to Eqn. 9, Eqn. 11 and Eqn. 12, respectively.
Require: a minibatch of inputs and targets (a_0, a*), previous weights W, learning rate η
Ensure: updated weights W^{t+1}
{1. Computing the parameter gradients:}
{1.1 Forward propagation:}
 1: for k = 1 to L do
 2:   W^b_k ← f^W_ω(W_k)
 3:   ã_k ← forward(a^b_{k−1}, W^b_k)
 4:   a_k ← h(ã_k)
 5:   if k < L then
 6:     a^b_k ← f^A_α(a_k)
 7:   end if
 8:   Optionally apply pooling
 9: end for
{1.2 Backward propagation:}
Compute g_{a_L} = ∂C/∂a_L knowing a_L and a*.
10: for k = L to 1 do
11:   Back-propagate g_{a_k} through activation function h
12:   g^b_{a_k} ← f^G_γ(g_{a_k})
13:   g_{a_{k−1}} ← backward_input(g^b_{a_k}, W^b_k)
14:   g_{W^b_k} ← backward_weight(g^b_{a_k}, a^b_{k−1})
15:   Back-propagate gradients through pooling layer if there is one
16: end for
{2. Accumulating the parameter gradients:}
17: for k = 1 to L do
18:   g_{W_k} = g_{W^b_k} ∂W^b_k/∂W_k
19:   W^{t+1}_k ← Update(W_k, g_{W_k}, η)
20: end for
5Note here we do not need to clip the value of N(k) as the two end points of a uniform distribution are almost surely never attained.
2.6 THE ALGORITHM FOR DOREFA-NET
We give a sample training algorithm of DoReFa-Net as Algorithm 1. W.l.o.g., the network is assumed to have a feed-forward linear topology, and details like batch normalization and pooling layers are omitted. Note that all the expensive operations forward, backward_input, backward_weight, in convolutional as well as fully-connected layers, are now operating on low bitwidth numbers. By construction, there is always an affine mapping between these low bitwidth numbers and fixed-point integers. As a result, all the expensive operations can be accelerated significantly by the fixed-point integer dot product kernel (Eqn. 3).
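Putting the pieces together, the forward pass of one quantized convolution might look like the following sketch; it assumes the helpers defined above, takes h to be clipping into [0, 1], and leaves out the backward-pass hook for quantize_grad as well as all layer plumbing.

```python
import tensorflow as tf

def dorefa_conv(x, w, k_w, k_a):
    """Forward pass of one DoReFa-style convolution (steps 2-6 of Algorithm 1)."""
    wq = quantize_weights(w, k_w)                        # W^b_k, Eqn. 9
    xq = quantize_k(tf.clip_by_value(x, 0.0, 1.0), k_a)  # h then Eqn. 11
    # Both operands are now affinely related to fixed-point integers, so this
    # convolution could be served by the bit kernel of Eqn. 3.
    return tf.nn.conv2d(xq, wq, strides=[1, 1, 1, 1], padding='SAME')
```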
2.7 FIRST AND THE LAST LAYER
Among all layers in a DCNN, the first and the last layers appear to be different from the rest, as they interface with the input and output of the network. For the first layer, the input is often an image, which may contain 8-bit features. On the other hand, the output layer typically produces approximately one-hot vectors, which are close to bit vectors by definition. It is an interesting question whether these differences would cause the first and last layers to exhibit different behavior when converted to low bitwidth counterparts.
In the related work of (Han et al., 2015b), which converts network weights to sparse tensors, introducing the same ratio of zeros in the first convolutional layer is found to cause more prediction accuracy degradation than in the other convolutional layers. Based on this intuition, as well as the observation that the inputs to the first layer often contain only a few channels and constitute a small proportion of the total computation complexity, we perform most of our experiments by not quantizing the weights of the first convolutional layer, unless noted otherwise. Nevertheless, the outputs of the first convolutional layer are quantized to low bitwidth as they are used by the subsequent convolutional layer.
Similarly, when the number of output classes is small, to stay away from potential degradation of prediction accuracy, we leave the last fully-connected layer intact unless noted otherwise. Nevertheless, the gradients back-propagated from the final FC layer are properly quantized.
We will give the empirical evidence in Section 3.3.
2.8 REDUCING RUN-TIME MEMORY FOOTPRINT BY FUSING NONLINEAR FUNCTION AND ROUNDING
One of the motivations for creating low bitwidth neural networks is to save run-time memory footprint in inference. A naive implementation of Algorithm 1 would store activations h(a_k) in full-precision numbers, consuming much memory during run-time. In particular, if h involves floating-point arithmetic, there will be a non-negligible amount of non-bitwise operations related to computations of h(a_k).
There are simple solutions to this problem. Notice that it is possible to fuse Step 3, Step 4 and Step 6 to avoid storing intermediate results in full precision. Apart from this, when h is monotonic, f_α ∘ h is also monotonic, so the few possible values of a^b_k correspond to several non-overlapping value ranges of a_k; hence we can implement the computation of a^b_k = f_α(h(a_k)) by several comparisons between fixed-point numbers and avoid generating intermediate results.
Similarly, it would also be desirable to fuse Step 11 ∼ Step 12, and Step 13 of the previous iteration, to avoid generating and storing g_{a_k}. The situation is more complex when there are intermediate pooling layers. Nevertheless, if the pooling layer is max-pooling, we can still do the fusion, as the quantize_k function commutes with the max function:
quantize_k(max(a, b)) = max(quantize_k(a), quantize_k(b)),   (15)
hence again g^b_{a_k} can be generated from g_{a_k} by comparisons between fixed-point numbers.
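As an illustration of the comparison-based fusion, the sketch below (our own, in NumPy) computes a^b_k = f_α(h(a_k)) with h taken to be clipping into [0, 1], without ever materializing h(a_k) in full precision:

```python
import numpy as np

def fused_act_quantize(a, k):
    """Fused h (clip into [0, 1]) followed by k-bit quantize, via comparisons.

    Each output level i / (2^k - 1) corresponds to a half-open input range,
    so counting how many inter-level thresholds each activation exceeds
    gives its quantization level directly (and clips out-of-range inputs).
    """
    n = 2 ** k - 1
    thresholds = (np.arange(n) + 0.5) / n          # midpoints between levels
    levels = (a[..., None] >= thresholds).sum(-1)  # integer level in [0, n]
    return levels / n
```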
Table 1: Comparison of prediction accuracy for SVHN with different choices of bitwidth in a DoReFa-Net. W, A, G are the bitwidths of weights, activations and gradients respectively. When a bitwidth is 32, we simply remove the corresponding quantization functions.
| W | A | G | Training complexity | Inference complexity | Storage relative size | Model A accuracy | Model B accuracy | Model C accuracy | Model D accuracy |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 2 | 3 | 1 | 1 | 0.934 | 0.924 | 0.910 | 0.803 |
| 1 | 1 | 4 | 5 | 1 | 1 | 0.968 | 0.961 | 0.916 | 0.846 |
| 1 | 1 | 8 | 9 | 1 | 1 | 0.970 | 0.962 | 0.902 | 0.828 |
| 1 | 1 | 32 | - | - | 1 | 0.971 | 0.963 | 0.921 | 0.841 |
| 1 | 2 | 2 | 4 | 2 | 1 | 0.909 | 0.930 | 0.900 | 0.808 |
| 1 | 2 | 3 | 5 | 2 | 1 | 0.968 | 0.964 | 0.934 | 0.878 |
| 1 | 2 | 4 | 6 | 2 | 1 | 0.975 | 0.969 | 0.939 | 0.878 |
| 2 | 1 | 2 | 6 | 2 | 2 | 0.927 | 0.928 | 0.909 | 0.846 |
| 2 | 1 | 4 | 10 | 2 | 2 | 0.969 | 0.957 | 0.904 | 0.827 |
| 1 | 2 | 8 | 10 | 2 | 1 | 0.975 | 0.971 | 0.946 | 0.866 |
| 1 | 2 | 32 | - | - | 1 | 0.976 | 0.970 | 0.950 | 0.865 |
| 1 | 3 | 3 | 6 | 3 | 1 | 0.968 | 0.964 | 0.946 | 0.887 |
| 1 | 3 | 4 | 7 | 3 | 1 | 0.974 | 0.974 | 0.959 | 0.897 |
| 1 | 3 | 6 | 9 | 3 | 1 | 0.977 | 0.974 | 0.949 | 0.916 |
| 1 | 4 | 2 | 6 | 4 | 1 | 0.815 | 0.898 | 0.911 | 0.868 |
| 1 | 4 | 4 | 8 | 4 | 1 | 0.975 | 0.974 | 0.962 | 0.915 |
| 1 | 4 | 8 | 12 | 4 | 1 | 0.977 | 0.975 | 0.955 | 0.895 |
| 2 | 2 | 2 | 8 | 4 | 2 | 0.900 | 0.919 | 0.856 | 0.842 |
| 8 | 8 | 8 | - | - | 8 | 0.970 | - | - | 0.955 |
3 EXPERIMENT RESULTS
3.1 CONFIGURATION SPACE EXPLORATION
We explore the configuration space of combinations of bitwidths of weights, activations and gradients by experiments on the SVHN dataset.
The SVHN dataset (Netzer et al., 2011) is a real-world digit recognition dataset consisting of photos of house numbers in Google Street View images. We consider the "cropped" format of the dataset: 32-by-32 colored images centered around a single character. There are 73257 digits for training, 26032 digits for testing, and 531131 less difficult samples which can be used as extra training data. The images are resized to 40x40 before being fed into the network.
For convolutions in a DoReFa-Net, if we have W-bit weights, A-bit activations and G-bit gradients, the relative forward and backward computation complexity and relative storage size can be computed from Eqn. 3, and we list them in Table 1. As it would not be computationally efficient to use bit convolution kernels for convolutions between 32-bit numbers, and noting that previous works like BNN and XNOR-Net have already compared bit convolution kernels with 32-bit convolution kernels, we omit the comparison of computation complexity for the 32-bit control experiments.
We use the prediction accuracy of several CNN models on the SVHN dataset to evaluate the efficacy of configurations. Model A is a CNN that costs about 80 FLOPs for one 40x40 image, and it consists of seven convolutional layers and one fully-connected layer.
Models B, C and D are derived from Model A by reducing the number of channels for all seven convolutional layers by 50%, 75%, and 87.5%, respectively. The listed prediction accuracy is the maximum accuracy on the test set over 200 epochs. We use the ADAM (Kingma & Ba, 2014) learning rule with a learning rate of 0.001.
In general, having low bitwidth weights, activations and gradients will cause degradation in prediction accuracy. But it should be noted that low bitwidth networks have much reduced resource requirements.
As balancing between multiple factors like training time, inference time, model size and accuracy is more a problem of practical trade-off, there will be no definite conclusion as to which combination of (W, A, G) one should choose. Nevertheless, we find in these experiments that weights, activations and gradients are progressively more sensitive to bitwidth, and that using gradients with G ≤ 4 would significantly degrade prediction accuracy. Based on these observations, we take (W, A) = (1, 2) and G ≥ 4 as rational combinations and use them for most of our experiments on the ImageNet dataset.
Table 1 also shows that the relative number of channels significantly affects the prediction quality degradation resulting from bitwidth reduction. For example, there is no significant loss of prediction accuracy when going from a 32-bit model to DoReFa-Net for Model A, which is not the case for Model C. We conjecture that "more capable" models like those with more channels will be less sensitive to bitwidth differences. On the other hand, Table 1 also suggests a method to compensate for the prediction quality degradation, by increasing the bitwidth of activations for models with fewer channels, at the cost of increased computation complexity for inference and training. However, the optimal bitwidth of gradients seems less related to model channel numbers, and prediction quality saturates with 8-bit gradients most of the time.
3.2 IMAGENET
We further evaluate DoReFa-Net on the ILSVRC12 (Deng et al., 2009) image classification dataset, which contains about 1.2 million high-resolution natural images for training, spanning 1000 categories of objects. The validation set contains 50k images. We report our single-crop evaluation results using top-1 accuracy. The images are resized to 224x224 before being fed into the network.
The results are listed in Table 2. The baseline AlexNet model that scores 55.9% single-crop top-1 accuracy is a best-effort replication of the model in (Krizhevsky et al., 2012), with the second, fourth and fifth convolutions split into two parallel blocks. We replace the Local Contrast Renormalization layer with a Batch Normalization layer (Ioffe & Szegedy, 2015). We use the ADAM learning rule with a learning rate of 10^−4 at the start, and later decrease the learning rate to 10^−5 and subsequently 10^−6 when the accuracy curves become flat.
From the table, it can be seen that increasing the bitwidth of activations from 1-bit to 2-bit and even to 4-bit, while still keeping 1-bit weights, leads to a significant accuracy increase, approaching the accuracy of the model where both weights and activations are 32-bit. Rounding gradients to 6-bit produces similar accuracies to 32-bit gradients, in the experiments of "1-1-6" vs. "1-1-32", "1-2-6" vs. "1-2-32", and "1-3-6" vs. "1-3-32".
The rows marked "initialized" mean that the model training has been initialized with a 32-bit model. It can be seen that there is a considerable gap between the best accuracy of a trained-from-scratch model and an initialized model. Closing this gap is left to future work. Nevertheless, it shows the potential for improving the accuracy of DoReFa-Net.
3.2.1 TRAINING CURVES
Figure 1 shows the evolution of accuracy vs. epoch curves of DoReFa-Net. It can be seen that quantizing gradients to 6-bit does not cause the training curve to be significantly different from not quantizing gradients. However, using 4-bit gradients as in "1-2-4" leads to significant accuracy degradation.
Table 2: Comparison of prediction accuracy for ImageNet with different choices of bitwidth in a DoReFa-Net. W, A, G are the bitwidths of weights, activations and gradients respectively. Single-crop top-1 accuracy is given. Note the BNN result is reported by (Rastegari et al., 2016), not by the original authors. We do not quantize the first and last layers of AlexNet to low bitwidth, as BNN and XNOR-Net do.
| W | A | G | Training complexity | Inference complexity | Storage relative size | AlexNet accuracy |
|---|---|---|---|---|---|---|
| 1 | 1 | 6 | 7 | 1 | 1 | 0.395 |
| 1 | 1 | 8 | 9 | 1 | 1 | 0.395 |
| 1 | 1 | 32 | - | 1 | 1 | 0.279 (BNN) |
| 1 | 1 | 32 | - | 1 | 1 | 0.442 (XNOR-Net) |
| 1 | 1 | 32 | - | 1 | 1 | 0.401 |
| 1 | 1 | 32 | - | 1 | 1 | 0.436 (initialized) |
| 1 | 2 | 6 | 8 | 2 | 1 | 0.461 |
| 1 | 2 | 8 | 10 | 2 | 1 | 0.463 |
| 1 | 2 | 32 | - | 2 | 1 | 0.477 |
| 1 | 2 | 32 | - | 2 | 1 | 0.498 (initialized) |
| 1 | 3 | 6 | 9 | 3 | 1 | 0.471 |
| 1 | 3 | 32 | - | 3 | 1 | 0.484 |
| 1 | 4 | 6 | - | 4 | 1 | 0.482 |
| 1 | 4 | 32 | - | 4 | 1 | 0.503 |
| 1 | 4 | 32 | - | 4 | 1 | 0.530 (initialized) |
| 8 | 8 | 8 | - | - | 8 | 0.530 |
| 32 | 32 | 32 | - | - | 32 | 0.559 |
3.2.2 HISTOGRAM OF WEIGHTS, ACTIVATIONS AND GRADIENTS
Figure 2 shows the histogram of gradients of layer "conv3" of the "1-2-6" AlexNet model at epoch 5 and 35. As the histogram remains mostly unchanged with epoch number, we omit the histograms of the other epochs for clarity.
Figure 3(a) shows the histogram of weights of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. Though the scale of the weights changes with epoch number, the distribution of weights is approximately symmetric.
Figure 3(b) shows the histogram of activations of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. The distributions of activations are stable throughout the training process.
3.3 MAKING FIRST AND LAST LAYER LOW BITWIDTH
To answer the question whether the first and the last layer need to be treated specially when quantizing to low bitwidth, we use the same Models A, B, C from Table 1 to find out if it is cost-effective to quantize the first and last layers to low bitwidth, and collect the results in Table 3.
It can be seen that quantizing the first and the last layers indeed leads to significant accuracy degradation, and models with fewer channels suffer more. The degradation to some extent justifies the practice of BNN and XNOR-Net of not quantizing these two layers.
Figure 1: Prediction accuracy of AlexNet variants on the validation set of ImageNet, indexed by epoch number. "W-A-G" gives the specification of the bitwidths of weights, activations and gradients. E.g., "1-2-4" stands for the case when weights are 1-bit, activations are 2-bit and gradients are 4-bit. The figure is best viewed in color.
Figure 2: Histogram of gradients of layer "conv3" of the "1-2-6" AlexNet model at epoch 5 and 35. The y-axis is in logarithmic scale.
# 4 DISCUSSION AND RELATED WORK
By binarizing weights and activations, binarized neural networks like BNN and XNOR-Net have enabled acceleration of the forward pass of neural networks with bit convolution kernels. However, the backward pass of binarized networks still requires convolutions between floating-point gradients and weights, which cannot efficiently exploit bit convolution kernels, as gradients are in general not low bitwidth numbers.
Figure 3: (a) Histogram of weights of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. There are two possible values at a specific epoch since the weights are scaled 1-bit numbers. (b) Histogram of activations of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. There are four possible values at a specific epoch since the activations are 2-bit.
Table 3: Control experiments investigating the degradation caused by quantizing the first convolutional layer and the last FC layer to low bitwidth. The row with "(1, 2, 4)" stands for the baseline case of (W, A, G) = (1, 2, 4) and not quantizing the first and last layers. "+ first" means additionally quantizing the weights and gradients of the first convolutional layer (outputs of the first layer are already quantized in the base "(1, 2, 4)" scheme). "+ last" means quantizing the inputs, weights and gradients of the last FC layer. Note that outputs of the last layer do not need quantization.
| Scheme | Model A accuracy | Model B accuracy | Model C accuracy |
|---|---|---|---|
| (1, 2, 4) | 0.975 | 0.969 | 0.939 |
| (1, 2, 4) + first | 0.972 | 0.963 | 0.932 |
| (1, 2, 4) + last | 0.973 | 0.969 | 0.927 |
| (1, 2, 4) + first + last | 0.971 | 0.961 | 0.928 |
(Lin et al., 2015) takes a step further towards low bitwidth gradients by converting some multiplications to bit-shifts. However, the number of additions between high bitwidth numbers remains at the same order of magnitude as before, leading to reduced overall speedup.
There is also another series of work (Seide et al., 2014) that quantizes gradients before communication in distributed computation settings. However, that work is more concerned with decreasing the amount of communication traffic, and does not deal with the bitwidth of gradients used in back-propagation. In particular, they use full precision gradients during the backward pass, and quantize the gradients only before sending them to other computation nodes. In contrast, we quantize gradients each time before they reach the selected convolution layers during the backward pass.
To the best of our knowledge, our work is the first to reduce the bitwidth of gradients to 6-bit and lower, while still achieving comparable prediction accuracy without altering other aspects of the neural network model, such as increasing the number of channels, for models as large as AlexNet on the ImageNet dataset.
# 5 CONCLUSION AND FUTURE WORK
We have introduced DoReFa-Net, a method to train a convolutional neural network that has low bitwidth weights and activations using low bitwidth parameter gradients. We find that weights and activations can be deterministically quantized while gradients need to be stochastically quantized.
As most convolutions during the forward/backward passes now take low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both the training and inference processes. Our experiments on the SVHN and ImageNet datasets demonstrate that DoReFa-Nets can achieve prediction accuracy comparable to their 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights and 2-bit activations can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on the ImageNet validation set.
As future work, it would be interesting to investigate using FPGAs to train DoReFa-Net, as the O(B^2) resource requirement of computation units for B-bit arithmetic on FPGA strongly favors low bitwidth convolutions.
# REFERENCES
Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Bengio, Yoshua, Léonard, Nicholas, and Courville, Aaron. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Chen, Tianshi, Du, Zidong, Sun, Ninghui, Wang, Jia, Wu, Chengyong, Chen, Yunji, and Temam, Olivier. DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning. In ACM Sigplan Notices, volume 49, pp. 269-284. ACM, 2014a.
Chen, Yunji, Luo, Tao, Liu, Shaoli, Zhang, Shijin, He, Liqiang, Wang, Jia, Li, Ling, Chen, Tianshi, Xu, Zhiwei, Sun, Ninghui, et al. DaDianNao: A machine-learning supercomputer. In Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture, pp. 609-622. IEEE Computer Society, 2014b.
Courbariaux, Matthieu and Bengio, Yoshua. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.
Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009.
Farabet, Clément, LeCun, Yann, Kavukcuoglu, Koray, Culurciello, Eugenio, Martini, Berin, Akselrod, Polina, and Talay, Selcuk. Large-scale FPGA-based convolutional networks. Scaling up Machine Learning: Parallel and Distributed Approaches, pp. 399-419, 2011.
Gong, Yunchao, Liu, Liu, Yang, Ming, and Bourdev, Lubomir. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Gupta, Suyog, Agrawal, Ankur, Gopalakrishnan, Kailash, and Narayanan, Pritish. Deep learning with limited numerical precision. arXiv preprint arXiv:1502.02551, 2015.
Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
Han, Song, Pool, Jeff, Tran, John, and Dally, William. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015b.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012a.
Hinton, Geoffrey, Srivastava, Nitsh, and Swersky, Kevin. Neural networks for machine learning. Coursera, video lectures, 264, 2012b.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Kim, Minje and Smaragdis, Paris. Bitwise neural networks. arXiv preprint arXiv:1601.06071, 2016.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
Li, Fengfu and Liu, Bin. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.
Lin, Zhouhan, Courbariaux, Matthieu, Memisevic, Roland, and Bengio, Yoshua. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
Merolla, Paul, Appuswamy, Rathinakumar, Arthur, John, Esser, Steve K, and Modha, Dharmendra. Deep neural networks are robust to weight binarization and other non-linear distortions. arXiv preprint arXiv:1606.01981, 2016.
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011, pp. 5. Granada, Spain, 2011.
Pham, Phi-Hung, Jelaca, Darko, Farabet, Clement, Martini, Berin, LeCun, Yann, and Culurciello, Eugenio. NeuFlow: Dataflow vision processing system-on-a-chip. In Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on, pp. 1044-1047. IEEE, 2012.
Rastegari, Mohammad, Ordonez, Vicente, Redmon, Joseph, and Farhadi, Ali. XNOR-Net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
Seide, Frank, Fu, Hao, Droppo, Jasha, Li, Gang, and Yu, Dong. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In INTERSPEECH, pp. 1058-1062, 2014.
Vanhoucke, Vincent, Senior, Andrew, and Mao, Mark Z. Improving the speed of neural networks on cpus. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, volume 1, 2011.
Wu, Jiaxiang, Leng, Cong, Wang, Yuhang, Hu, Qinghao, and Cheng, Jian. Quantized convolutional neural networks for mobile devices. arXiv preprint arXiv:1512.06473, 2015.
1606.04460 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 |

arXiv:1606.04460v1 [stat.ML] 14 Jun 2016
# Model-Free Episodic Control
Charles Blundell Google DeepMind cblundell@google.com
Benigno Uria Google DeepMind buria@google.com
Alexander Pritzel Google DeepMind apritzel@google.com
Yazhe Li Google DeepMind yazhe@google.com
Avraham Ruderman Google DeepMind aruderman@google.com
Joel Z Leibo Google DeepMind jzl@google.com
Jack Rae Google DeepMind jwrae@google.com
Daan Wierstra Google DeepMind wierstra@google.com
Demis Hassabis Google DeepMind demishassabis@google.com
# Abstract
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
# 1 Introduction
Deep reinforcement learning has recently achieved notable successes in a variety of domains [23, 32]. However, it is very data inefficient. For example, in the domain of Atari games [2], deep Reinforcement Learning (RL) systems typically require tens of millions of interactions with the game emulator, amounting to hundreds of hours of game play, to achieve human-level performance. As pointed out by [13], humans learn to play these games much faster. This paper addresses the question of how to emulate such fast learning abilities in a machine, without any domain-specific prior knowledge.
Current deep RL algorithms may happen upon, or be shown, highly rewarding sequences of actions. Unfortunately, due to their slow gradient-based updates of underlying policy or value functions, these algorithms require large numbers of steps to assimilate such information and translate it into policy improvement. Thus these algorithms lack the ability to rapidly latch onto successful strategies. Episodic control, introduced by [16], is a complementary approach that can rapidly re-enact observed, successful policies. Episodic control records highly rewarding experiences and follows a policy that replays sequences of actions that previously yielded high returns.
In the brain, this form of very fast learning is critically supported by the hippocampus and related medial temporal lobe structures [1, 34]. For example, a rat's performance on a task requiring navigation to a hidden platform is impaired by lesions to these structures [24, 36]. Hippocampal learning is thought to be instance-based [18, 35], in contrast to the cortical system which represents generalised statistical summaries of the input distribution [20, 27, 41]. The hippocampal system may be used to guide sequential decision-making by co-representing environment states with the returns
achieved from the various possible actions. After such encoding, at a given probe state, the return associated with each possible action could be retrieved by pattern completion in the CA3 subregion [9, 21, 26, 40]. The final value achieved by a sequence of actions could quickly become associated with each of its component state-action pairs by the reverse-ordered replay of hippocampal place cell activations that occurs after a rewarding event [7].
Humans and animals utilise multiple learning, memory, and decision systems each best suited to different settings [5, 33]. For example, when an accurate model of the environment is available, and there are sufficient time and working memory resources, the best strategy is model-based planning associated with prefrontal cortex [5]. However, when there is no time or no resources available for planning, the less compute-intensive immediate decision systems must be employed [29]. This presents a problem early on in the learning of a new environment as the model-free decision system will be even less accurate in this case since it has not yet had enough repeated experience to learn an accurate value function. In contrast, this is the situation where model-free episodic control may be most useful. Thus the argument for hippocampal involvement in model-free control parallels the argument for its involvement in model-based control. In both cases quick-to-learn instance-based control policies serve as a rough approximation while a slower, more generalisable decision system is trained up [16].
The domain of applicability of episodic control may be hopelessly limited by the complexity of the world. In real environments the same exact situation is rarely, if ever, revisited. In RL terms, repeated visits to exactly the same state are also extremely rare. Here we show that the commonly used Atari environments do not have this property. In fact, we show that the agents developed in this work re-encounter exactly the same Atari states between 10-60% of the time. As expected, the episodic controller works well in such a setting. The key test for this approach is whether it can also work in more realistic environments where states are never repeated and generalisation over similar states is essential. Critically, we also show that our episodic control model still performs well in such (3D) environments where the same state is essentially never re-visited.
# 2 The episodic controller
In reinforcement learning [e.g. 37], an agent interacts with an environment through a sequence of states, s_t ∈ S; actions, a_t ∈ A; and rewards r_{t+1} ∈ R. Actions are determined by the agent's policy π(a_t|s_t), a probability distribution over the actions a_t. The goal of the agent is to learn a policy that maximises the expected discounted return R_t = Σ_{τ=1}^{T−t} γ^{τ−1} r_{t+τ}, where T is the time step at which each episode ends, and γ ∈ (0, 1] is the discount rate. Upon executing an action a_t, the agent transitions from state s_t to state s_{t+1}.
Environments with deterministic state transitions and rewards are common in daily experience. For example, in navigation, when you exit a room and then return back, you usually end up in the room where you started. This property of natural environments can be exploited by RL algorithms or brains. However, most existing scalable deep RL algorithms (such as DQN [23] and A3C [22]) do not do so. They were designed with more general environments in mind. Thus, in principle, they could operate in regimes with high degrees of stochasticity in both transitions and rewards. This generality comes at the cost of longer learning times. DQN and A3C both attempt to find a policy with maximal expected return. Evaluating the expected return requires many examples in order to get accurate estimates. Additionally, these algorithms are further slowed down by gradient descent learning, typically in lock-step with the rate at which actions are taken in the environment.
Given the ubiquity of such near-deterministic situations in the real world, it would be surprising if the brain did not employ specialised learning mechanisms to exploit this structure and thereby learn more quickly in such cases. The episodic controller model of hippocampal instance-based learning we propose here is just such a mechanism. It is a non-parametric model that rapidly records and replays the sequence of actions that so far yielded the highest return from a given start state. In its simplest form, it is a growing table, indexed by states and actions. By analogy with RL value functions, we denote this table QEC(s, a). Each entry contains the highest return ever obtained by taking action a from state s.
The episodic control policy picks the action with the highest value in QEC for the given state. At the end of each episode, QEC is updated according to the return received as follows:
Q^{EC}(s_t, a_t) ←  R_t                            if (s_t, a_t) ∉ Q^{EC},
                    max{Q^{EC}(s_t, a_t), R_t}     otherwise.   (1)
where Rt is the discounted return received after taking action at in state st. Note that (1) is not a general purpose RL learning update: since the stored value can never decrease, it is not suited to rational action selection in stochastic environments.1
Tabular RL methods suffer from two key deficiencies: firstly, for large problems they consume a large amount of memory, and secondly, they lack a way to generalise across similar states. To address the first problem, we limit the size of the table by removing the least recently updated entry once a maximum size has been reached. Such forgetting of older, less frequently accessed memories also occurs in the brain [8].
In large scale RL problems (such as real life) novel states are common; the real world, in general, also has this property. We address the problem of what to do in novel states and how to generalise values across common experiences by taking QEC to be a non-parametric nearest-neighbours model. Let us assume that the state space S is imbued with a metric distance. For states that have never been visited, QEC is approximated by averaging the value of the k nearest states. Thus if s is a novel state then QEC is estimated as
Q̂^{EC}(s, a) =  (1/k) Σ_{i=1}^{k} Q^{EC}(s^{(i)}, a)   if (s, a) ∉ Q^{EC},
                Q^{EC}(s, a)                            otherwise,   (2)
where s(i), i = 1, . . . , k are the k states with the smallest distance to state s.2
Algorithm 1 describes the most basic form of model-free episodic control. The algorithm has two phases. First, the policy implied by QEC is executed for a full episode, recording the rewards received at each step. This is done by projecting each observation from the environment o_t via an embedding function φ to a state in an appropriate state space: s_t = φ(o_t), then selecting the action with the highest estimated return according to QEC. In the second phase, the rewards, actions and states from an episode are associated via a backward replay process into QEC to improve the policy. Interestingly, this backward replay process is a potential algorithmic instance of the awake reverse replay of hippocampal states shown by [7], although as yet, we are unaware of any experiments testing this interesting use of the hippocampus.
# Algorithm 1 Model-Free Episodic Control.
1: for each episode do
2:   for t = 1, 2, 3, ..., T do
3:     Receive observation o_t from environment.
4:     Let s_t = φ(o_t).
5:     Estimate return for each action a via (2).
6:     Let a_t = argmax_a Q^{EC}(s_t, a).
7:     Take action a_t, receive reward r_{t+1}.
8:   end for
9:   for t = T, T−1, ..., 1 do
10:     Update Q^{EC}(s_t, a_t) using R_t according to (1).
11:   end for
12: end for
The episodic controller acts according to the returns recorded in QEC, in an attempt to replay successful sequences of actions and recreate past successes.
1Following a policy that picks the action with the highest QEC value would yield a strong risk seeking behaviour in stochastic environments. It is also possible to, instead, remove the max operator and store Rt directly. This yields a less optimistic estimate and performed worse in preliminary experiments.
2In practice, we implemented this by having one kNN buffer for each action a ∈ A and finding the k closest states in each buffer to state s.
The values stored in QEC(s, a) thus do not correspond to estimates of the expected return; rather they are estimates of the highest potential return for a given state and action, based upon the states, rewards and actions seen. Computing and behaving according to such a value function is useful in regimes where exploitation is more important than exploration, and where there is relatively little noise in the environment.
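As a concrete illustration of Eqns. (1)-(2), here is a minimal NumPy sketch of one per-action buffer; the class name and the omission of the least-recently-updated eviction rule are our own simplifications.

```python
import numpy as np

class QECBuffer:
    """One buffer per action: stores (state, value) pairs."""
    def __init__(self, k=11):
        self.k = k
        self.states, self.values = [], []

    def lookup(self, s):
        # Exact hit: return the stored value; otherwise average the
        # k nearest stored states, as in Eqn. (2).
        for i, si in enumerate(self.states):
            if np.array_equal(si, s):
                return self.values[i]
        if not self.states:
            return 0.0
        d = np.linalg.norm(np.asarray(self.states) - s, axis=1)
        nearest = np.argsort(d)[:self.k]
        return float(np.mean([self.values[i] for i in nearest]))

    def update(self, s, R):
        # Eqn. (1): keep the highest return ever obtained from (s, a).
        for i, si in enumerate(self.states):
            if np.array_equal(si, s):
                self.values[i] = max(self.values[i], R)
                return
        self.states.append(s)
        self.values.append(R)
```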
# 3 Representations
In the brain, the hippocampus operates on a representation which notably includes the output of the ventral stream [3, 15, 38]. Thus it is expected to generalise along the dimensions of that representation space [19]. Similarly, the feature mapping, φ, can play a critical role in how our episodic control algorithm performs when it encounters novel states3.
Whilst the original observation space could be used, this may not work in practice. For example, each frame in the environments we consider in Section 4 would occupy around 28 KBytes of memory and would require more than 300 gigabytes of memory for our experiments. Instead we consider two different embeddings of observations into a state space, φ, each having quite distinctive properties in setting the inductive bias of the QEC estimator.
One way of decreasing memory and computation requirements is to utilise a random projection of the original observations into a smaller-dimensional space, i.e. φ : x → Ax, where A ∈ R^{F×D} and F ≪ D, where D is the dimensionality of the observation. For a random matrix A with entries drawn from a standard Gaussian, the Johnson-Lindenstrauss lemma implies that this transformation approximately preserves relative distances in the original space [10]. We expect this representation to be sufficient when small changes in the original observation space correspond to small changes in the underlying return.
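A sketch of such an embedding, assuming flattened pixel observations and the 64-dimensional projection used later in the experiments (the function name and the fixed-seed construction of A are ours):

```python
import numpy as np

def random_projection(obs, dim=64, seed=0):
    """Gaussian random projection phi(x) = Ax."""
    # A fixed seed makes A identical across calls, i.e. one projection
    # matrix for the whole run; entries are standard Gaussian, so relative
    # distances are approximately preserved (Johnson-Lindenstrauss).
    rng = np.random.RandomState(seed)
    A = rng.randn(dim, obs.size)
    return A @ obs.ravel()
```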
For some environments, many aspects of the observation space are irrelevant for value prediction. For example, illumination and textured surfaces in 3D environments (e.g. Labyrinth in Section 4), and scrolling backgrounds in 2D environments (e.g. River Raid in Section 4) may often be irrele- vant. In these cases, small distances in the original observation space may not be correlated with small distances in action-value. A feature extraction method capable of extracting a more abstract representation of the state space (e.g. 3D geometry or the position of sprites in the case of 2D video-games) could result in a more suitable distance calculation. Abstract features can be obtained by using latent-variable probabilistic models. Variational autoencoders (VAE; [12, 30]), further described in the supplementary material, have shown a great deal of promise across a wide range of unsupervised learning problems on images. Interestingly, the latent representations learnt by VAEs in an unsupervised fashion can lie on well structured manifolds capturing salient factors of variation [12, Figures 4(a) and (b)]; [30, Figure 3(b)]. In our experiments, we train the VAEs on frames from an agent acting randomly. Using a different data source will yield different VAE features, and in principle features from one task can be used in another. Furthermore, the distance metric for comparing embeddings could also be learnt. We leave these two interesting extensions to future work.
# 4 Experimental results
We tested our algorithm on two environments: the Arcade Learning Environment (Atari) [2], and a first-person 3-dimensional environment called Labyrinth [22]. Videos of the trained agents are available online4.
The Arcade Learning Environment is a suite of arcade games originally developed for the Atari-2600 console. These games are relatively simple visually but require complex and precise policies to achieve high expected reward [23].
Labyrinth provides a more complex visual experience, but requires relatively simple policies, e.g. turning when in the presence of a particular visual cue. The three Labyrinth environments are foraging tasks with appetitive, aversive, and sparse appetitive reward structures, respectively.
3One way to understand this is that the feature mapping φ determines the dynamic discretization of the state space into Voronoi cells implied by the k-nearest-neighbours algorithm underlying the episodic controller. 4https://sites.google.com/site/episodiccontrol/
For each environment, we tested the performance of the episodic controller using two embeddings of the observations φ: (1) 64 random projections of the pixel observations and (2) the 64 parameters of a Gaussian approximation to the posterior over the latent dimensions in a VAE.
For the experiments that use latent features from a VAE, a random policy was used for one million frames at the beginning of training; these one million observations were used to train the VAE. The episodic controller is started after these one million frames, and uses the features obtained from the VAE. Both mean and log-standard-deviation parameters were used as dimensions in the calculation of Euclidean distances. To account for the initial phase of training we displaced performance curves for agents that use VAE features by one million frames.
# 4.1 Atari
For the Atari experiments we considered a set of five games, namely: Ms. PAC-MAN, Q*bert, River Raid, Frostbite, and Space Invaders. We compared our algorithm to the original DQN algorithm [23], to DQN with prioritised replay [31], and to the asynchronous advantage actor-critic [22] (A3C), a state-of-the-art policy gradient method5. Following [23], observations were rescaled to 84 by 84 pixels and converted to gray-scale. The Atari simulator produces 60 observations (frames) per second of game play. The agents interact with the environment 15 times per second, as actions are repeated 4 times to decrease the computational requirements. An hour of game play corresponds to approximately 200,000 frames.
In the episodic controller, the size of each buffer (one per action) of state-value pairs was limited to one million entries. If the buffer is full and a new state-value pair has to be introduced, the least recently used state is discarded. The k-nearest-neighbour lookups used k = 11. The discount rate was set to γ = 1. Exploration is achieved by using an ε-greedy policy with ε = 0.005. We found that higher exploration rates were not as beneficial, as more exploration makes exploiting what is known harder. Note that previously published exploration rates (e.g., [22, 23]) are at least a factor of ten higher. Thus interestingly, our method attains good performance on a wide range of domains with relatively little random exploration.
Results are shown in the top two rows of Figure 1. In terms of data efficiency the episodic controller outperformed all other algorithms during the initial learning phase of all games. On Q*bert and River Raid, the episodic controller is eventually overtaken by some of the parametric controllers (not shown in Figure 1). After an initial phase of fast learning the episodic controller was limited by the decrease in the relative amount of new experience that could be obtained in each episode as these became longer. In contrast the parametric controllers could utilise their non-local generalisation capabilities to handle the later stages of the games.
The two different embeddings (random projections and VAE) did not have a notable effect on the performance of the episodic control policies. Both representations proved more data efficient than the parametric policies. The only exception is Frostbite, where the VAE features perform noticeably worse. This may be due to the inability of a random policy to reach very far in the game, which results in a very poor training set for the VAE.
Deep Q-networks and A3C exhibited a slow pace of policy improvement in Atari. For Frostbite and Ms. PAC-MAN, this has sometimes been attributed to naive exploration techniques [13, 28]. Our results demonstrate that a simple exploration technique like ε-greedy can result in much faster policy improvements when combined with a system that is able to learn in a one-shot fashion.
The Atari environment has deterministic transitions and rewards. Each episode starts at one of thirty possible initial states. Therefore a sizeable percentage of state-action pairs are exactly matched in the buffers of Q-values: about 10% for Frostbite, 60% for Q*bert, 50% for Ms. PAC-MAN, 45% for Space Invaders, and 10% for River Raid. In the next section we report experiments on a set of more realistic environments where the same exact experience is seldom encountered twice.
5We are forever indebted to Tom Schaul for the prioritised replay baseline and Andrei Rusu for the A3C baseline.
[Figure 1 shows eight panels: Ms. Pac-Man, Space Invaders, Frostbite, Q*bert, River Raid, Forage, Forage & Avoid, and Double T-Maze, each plotting scores against millions of frames. Legend: DQN, Prioritised DQN, A3C, EC-VAE, EC-RP.]
Figure 1: Average reward vs. number of frames (in millions) experienced for five Atari games and three Labyrinth environments. Dark curves show the mean of five runs (results from only one run were available for DQN baselines) initialised with different random number seeds. Light shading shows the standard error of the mean across runs. Episodic controllers (orange and blue curves) outperform parametric Q-function estimators (light green and pink curves) and A3C (dark green curve) in the initial phase of learning. VAE curves start after one million frames to account for their training using a random policy.
# 4.2 Labyrinth
The Labyrinth experiments involved three levels (screenshots are shown in Figure 2). The environment runs at 60 observations (frames) per simulated second of physical time. Observations are gray-scale images of 84 by 84 pixels. The agent interacts with the environment 15 times per second; actions are automatically repeated for 4 frames (to reduce computational requirements). The agent has eight different actions available to it (move-left, move-right, turn-left, turn-right, move-forward, move- backwards, move-forward and turn-left, move-forward and turn-right). In the episodic controller, the size of each buffer (one per action) of state-value pairs was limited to one hundred thousand entries. When the buffer was full and a new state-value pair had to be introduced, the least recently used
Figure 2: High-resolution screenshots of the Labyrinth environments. (a) Forage and Avoid, showing the apples (positive rewards) and lemons (negative rewards). (b) Double T-Maze, showing cues at the turning points. (c) Top view of a Double T-Maze configuration. The cues indicate the reward is located at the top left.
state was discarded. The k-nearest-neighbour lookups used k = 50. The discount rate was set to γ = 0.99. Exploration is achieved by using an ε-greedy policy with ε = 0.005. As a baseline, we used A3C [22]. Labyrinth levels have deterministic transitions and rewards, but the initial location and facing direction are randomised, and the environment is much richer, being 3-dimensional. For this reason, unlike Atari, experiments on Labyrinth encounter very few exact matches in the buffers of QEC-values; less than 0.1% in all three levels.
Each level is progressively more difficult. The first level, called Forage, requires the agent to collect apples as quickly as possible by walking through them. Each apple provides a reward of 1. A simple policy of turning until an apple is seen and then moving towards it suffices here. Figure 1 shows that the episodic controller found an apple-seeking policy very quickly. Eventually A3C caught up, and finally outperforms the episodic controller with a more efficient strategy for picking up apples.
The second level, called Forage and Avoid, involves collecting apples, which provide a reward of 1, while avoiding lemons, which incur a reward of −1. The level is depicted in Figure 2(a). This level requires only a slightly more complicated policy than Forage (the same policy plus avoiding lemons), yet A3C took over 40 million steps to reach the same performance that episodic control attained in fewer than 3 million frames.
The third level, called Double-T-Maze, requires the agent to walk in a maze with four ends (a map is shown in Figure 2(c)); one of the ends contains an apple, while the other three contain lemons. At each intersection the agent is presented with a colour cue that indicates the direction in which the apple is located (see Figure 2(b)): left, if red, or right, if green. If the agent walks through a lemon it incurs a reward of −1. However, if it walks through the apple, it receives a reward of 1, is teleported back to the starting position, and the location of the apple is resampled. The duration of an episode is limited to 1 minute, in which it can reach the apple multiple times if it solves the task fast enough. Double-T-Maze is a difficult RL problem: rewards are sparse. In fact, A3C never achieved an expected reward above zero. Due to the sparse reward nature of the Double T-Maze level, A3C did not update the policy strongly enough in the few instances in which a reward was encountered through random diffusion in the state space. In contrast, the episodic controller exhibited behaviour akin to one-shot learning on these instances, and was able to learn from the very few episodes that contained any rewards different from zero. This allowed the episodic controller to learn a policy with positive expected reward within 20 to 30 million frames, while the parametric policies never learnt a policy with expected reward higher than zero. In this case, episodic control thrived in a sparse reward environment as it rapidly latched onto an effective strategy.
# 4.3 Effect of number of nearest neighbours on final score
Finally, we compared the effect of varying k (the number of nearest neighbours) on both Labyrinth and Atari tasks using VAE features. In our experiments above, we noticed that on Atari re-visiting the same state was common, and that random projections typically performed the same or better than VAE features. One further interesting feature is that the learnt VAEs on Atari games do not yield a higher score as the number of neighbours increases, except on one game, Q*bert, where VAEs perform reasonably well (see Figure 3a). On Labyrinth levels, we observed that the VAEs outperformed random projections and the agent rarely encountered the same state more than once. Interestingly for this case, Figure 3b shows that increasing the number of nearest neighbours has a
Figure 3: Effect of number of neighbours, k, on final score (y axis). (a) Atari games. (b) Labyrinth levels.
significant effect on the final performance of the agent in Labyrinth levels. This strongly suggests that VAE features provide the episodic control agent with generalisation in Labyrinth.
# 5 Discussion
This work tackles a critical deficiency in current reinforcement learning systems, namely their inability to learn in a one-shot fashion. We have presented a fast-learning system based on non-parametric memorisation of experience. We showed that it can learn good policies faster than parametric function approximators, although it may be overtaken by them at later stages of training. It is our hope that these ideas will find application in practical systems, and result in data-efficient model-free methods. These results also provide support for the hypothesis that episodic control could be used by the brain, especially in the early stages of learning in a new environment. Note also that there are situations in which the episodic controller is always expected to outperform parametric methods. For example, when hiding food for later consumption, some birds (e.g., scrub jays) are better off remembering their hiding spot exactly than searching according to a distribution of likely locations [4]. These considerations support models in which the brain uses multiple control systems and an arbitration mechanism to determine which to act according to at each point in time [5, 16].
We have referred to this approach as model-free episodic control to distinguish it from model-based episodic planning. We conjecture that both such strategies may be used by the brain in addition to the better-known habitual and goal-directed systems associated with the dorsolateral striatum and prefrontal cortex respectively [5]. The tentative picture to emerge from this work is one in which the amount of time and working memory resources available for decision making is a key determiner of which control strategies are available. When decisions must be made quickly, planning-based approaches are simply not an option. In such cases, the only choice is between the habitual model-free system and the episodic model-free system. When decisions are not so rushed, the planning-based approaches become available and the brain must then arbitrate between planning using semantic (neocortical) information or episodic (hippocampal) information. In both timing regimes, the key determiner of whether to use episodic information or not is how much uncertainty remains in the estimates provided by the slower-to-learn system. This prediction agrees with those of [5, 16] with respect to the statistical trade-offs between systems. It builds on their work by highlighting the potential impact of rushed decisions and insufficient working memory resources, in accord with [29]. These ideas could be tested experimentally by manipulations of decision timing or working memory, perhaps by orthogonal tasks, and fast measurements of coherence between the medial temporal lobe and output structures under different statistical conditions.
# Acknowledgements
We are grateful to Dharshan Kumaran and Koray Kavukcuoglu for their detailed feedback on this manuscript. We are indebted to Marcus Wainwright and Max Cant for generating the images in Figure 2. We would also like to thank Peter Dayan, Shane Legg, Ian Osband, Joel Veness, Tim Lillicrap, Theophane Weber, Remi Munos, Alvin Chua, Yori Zwols and many others at Google DeepMind for fruitful discussions.
# References
[1] Per Andersen, Richard Morris, David Amaral, Tim Bliss, and John O'Keefe. The hippocampus book. Oxford University Press, 2006.

[2] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 06 2013.

[3] Malcolm W Brown and John P Aggleton. Recognition memory: what are the roles of the perirhinal cortex and hippocampus? Nature Reviews Neuroscience, 2(1):51–61, 2001.

[4] Nicola S Clayton and Anthony Dickinson. Episodic-like memory during cache recovery by scrub jays. Nature, 395(6699):272–274, 1998.

[5] Nathaniel D Daw, Yael Niv, and Peter Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8(12):1704–1711, 2005.

[6] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1538–1546, 2015.

[7] David J Foster and Matthew A Wilson. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, 440(7084):680–683, 2006.

[8] Oliver Hardt, Karim Nader, and Lynn Nadel. Decay happens: the role of active forgetting in memory. Trends in Cognitive Sciences, 17(3):111–120, 2013.

[9] John J Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.

[10] William B Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26(189-206):1, 1984.

[11] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.

[12] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

[13] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.

[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

[15] Joel Z. Leibo, Julien Cornebise, Sergio Gomez, and Demis Hassabis. Approximate Hubel-Wiesel modules and the data structures of neural computation. arXiv:1512.08457 [cs.NE], 2015.

[16] M. Lengyel and P. Dayan. Hippocampal contributions to control: The third way. In NIPS, volume 20, pages 889–896, 2007.

[17] David JC MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[18] D Marr. Simple memory: A theory for archicortex. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, pages 23–81, 1971.

[19] James L McClelland and Nigel H Goddard. Considerations arising from a complementary learning systems perspective on hippocampus and neocortex. Hippocampus, 6(6):654–665, 1996.

[20] James L McClelland, Bruce L McNaughton, and Randall C O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3):419, 1995.

[21] Bruce L McNaughton and Richard GM Morris. Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends in Neurosciences, 10(10):408–415, 1987.

[22] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783, 2016.

[23] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

[24] RGM Morris, P Garrud, and JNP Rawlins. Place navigation impaired in rats with hippocampal lesions. Nature, 297:681, 1982.

[25] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.

[26] Kazu Nakazawa, Michael C Quirk, Raymond A Chitwood, Masahiko Watanabe, Mark F Yeckel, Linus D Sun, Akira Kato, Candice A Carr, Daniel Johnston, Matthew A Wilson, et al. Requirement for hippocampal CA3 NMDA receptors in associative memory recall. Science, 297(5579):211–218, 2002.

[27] Kenneth A Norman and Randall C O'Reilly. Modeling hippocampal and neocortical contributions to recognition memory: a complementary-learning-systems approach. Psychological Review, 110(4):611, 2003.

[28] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pages 2845–2853, 2015.

[29] A Ross Otto, Samuel J Gershman, Arthur B Markman, and Nathaniel D Daw. The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive. Psychological Science, page 0956797612463080, 2013.

[30] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pages 1278–1286, 2014.

[31] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. CoRR, abs/1511.05952, 2015.

[32] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

[33] Larry R Squire. Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans. Psychological Review, 99(2):195, 1992.
[34] Larry R Squire. Memory systems of the brain: a brief history and current perspective. Neurobiology of Learning and Memory, 82(3):171–177, 2004.

[35] Robert J Sutherland and Jerry W Rudy. Configural association theory: The role of the hippocampal formation in learning, memory, and amnesia. Psychobiology, 17(2):129–144, 1989.

[36] Robert J Sutherland, Ian Q Whishaw, and Bob Kolb. A behavioural analysis of spatial localization following electrolytic, kainate- or colchicine-induced damage to the hippocampal formation in the rat. Behavioural Brain Research, 7(2):133–153, 1983.

[37] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

[38] Wendy L Suzuki and David G Amaral. Perirhinal and parahippocampal cortices of the macaque monkey: cortical afferents. Journal of Comparative Neurology, 350(4):497–533, 1994.

[39] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.

[40] Alessandro Treves and Edmund T Rolls. Computational analysis of the role of the hippocampus in memory. Hippocampus, 4(3):374–391, 1994.

[41] Endel Tulving, CA Hayman, and Carol A Macdonald. Long-lasting perceptual priming and semantic learning in amnesia: a case experiment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(4):595, 1991.
# A Variational autoencoders for representation learning
Variational autoencoders (VAE; [12, 30]) are latent-variable probabilistic models inspired by compression theory. A VAE (shown in Figure 4) is composed of two artificial neural networks: the encoder, which takes observations and maps them into messages; and a decoder, that receives messages and approximately recovers the observations. VAEs are designed to minimise the cost of transmitting observations from the encoder to the decoder through the communication channel. In order to minimise the transmission cost, a VAE must learn to capture the statistics of the distribution of observations [e.g. 17]. For our representation learning purposes, we use the encoder network as our feature mapping, φ. For several data sets, representations learned by a VAE encoder have been shown to capture the independent factors of variation in the underlying generative process of the data [11].
In more detail, the encoder receives an observation, x, and outputs the parameter-values for a distribution of messages, q(z|x = x). The communication channel determines the cost of a message by a prior distribution over messages p(z). The decoder receives a message, z, drawn at random from q(z|x = x) and decodes it by outputting the parameters of a distribution over observations p(x|z = z). VAEs are trained to minimise the cost of exactly recovering the original observation, given by the sum of the expected communication cost KL (q(z|x) || p(z)) and the expected correction cost E [p(x = x|z)]. In all our experiments, x ∈ R^7056 (84 by 84 gray-scale pixels, with range [0, 1]), and z ∈ R^32. We chose the distributions q(z|x), p(z), and p(x|z) to be Gaussians with diagonal covariance matrices. In all experiments the encoder network has four convolutional [14] layers using {32, 32, 64, 64} kernels respectively, kernel sizes {4, 5, 5, 4}, kernel strides {2, 2, 2, 2}, no padding, and ReLU [25] non-linearity. The convolutional layers are followed by a fully connected layer of 512 ReLU units, from which a linear layer outputs the means and log-standard-deviations of the approximate posterior q(z|x). The decoder is set up mirroring the encoder, with a fully connected layer of 512 ReLU units followed by four reverse convolutions [6] with {64, 64, 32, 32} kernels respectively, kernel sizes {4, 5, 5, 4}, kernel strides {2, 2, 2, 2}, no padding, followed by a reverse convolution with two output kernels, one for the mean and one for the log-standard-deviation of p(x|z). The standard deviation of each dimension in p(x|z) is set to 0.05 if the value output by the network is smaller. The VAEs were trained to model a million observations obtained by executing a random policy on each environment. The parameters of the VAEs were optimised by running 400,000 steps of stochastic gradient descent using the RMSProp optimiser [39], a step size of 1e−5, and minibatches of size 100.
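To make the architecture above concrete, here is a minimal sketch of the encoder and the two cost terms. It assumes PyTorch (the paper does not name a framework), drops additive constants in the losses, and all class and function names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """84x84 gray-scale input -> mean and log-std of q(z|x), z in R^32."""
    def __init__(self, z_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2), nn.ReLU(),   # 84 -> 41
            nn.Conv2d(32, 32, 5, stride=2), nn.ReLU(),  # 41 -> 19
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),  # 19 -> 8
            nn.Conv2d(64, 64, 4, stride=2), nn.ReLU(),  # 8  -> 3
        )
        self.fc = nn.Linear(64 * 3 * 3, 512)
        self.out = nn.Linear(512, 2 * z_dim)            # means and log-stds

    def forward(self, x):
        h = F.relu(self.fc(self.conv(x).flatten(1)))
        mu, log_std = self.out(h).chunk(2, dim=1)
        return mu, log_std

def vae_cost(mu, log_std, x, x_mu, x_log_std):
    # communication cost KL(q(z|x) || N(0, I)) plus Gaussian correction cost
    kl = 0.5 * (mu**2 + (2 * log_std).exp() - 2 * log_std - 1).sum(1)
    rec = (0.5 * ((x - x_mu) / x_log_std.exp())**2 + x_log_std).sum((1, 2, 3))
    return (kl + rec).mean()
```

The feature mapping φ used by the episodic controller is then simply the `mu` output of the trained encoder.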
Figure 4: Diagram of a variational autoencoder.
# Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Jie Zhou Ying Cao Xuguang Wang Peng Li Wei Xu Baidu Research - Institute of Deep Learning Baidu Inc., Beijing, China {zhoujie01,caoying03,wangxuguang,lipeng17,wei.xu}@baidu.com
# Abstract
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
# 1 Introduction
Neural machine translation (NMT) has attracted a lot of interest in solving the machine translation (MT) problem in recent years (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). Unlike conventional statistical machine translation (SMT) systems (Koehn et al., 2003; Durrani et al., 2014) which consist of multiple separately tuned components, NMT models encode the source sequence into continuous representation space and generate the target sequence in an end-to-end fashion. Moreover, NMT models can also be easily adapted to other tasks such as dialog systems (Vinyals and Le, 2015), question answering systems (Yu et al., 2015) and image caption generation (Mao et al., 2015).
In general, there are two types of NMT topologies: the encoder-decoder network (Sutskever et al., 2014) and the attention network (Bahdanau et al., 2015). The encoder-decoder network represents the source sequence with a fixed dimensional vector and the target sequence is generated from this vector word by word. The attention network uses the representations from all time steps of the input sequence to build a detailed relationship between the target words and the input words. Recent results show that the systems based on these models can achieve similar performance to conventional SMT systems (Luong et al., 2015; Jean et al., 2015).
However, a single neural model of either of the above types has not been competitive with the best conventional system (Durrani et al., 2014) when evaluated on the WMT'14 English-to-French task. The best BLEU score from a single model with six layers is only 31.5 (Luong et al., 2015) while the conventional method of (Durrani et al., 2014) achieves 37.0.
We focus on improving the single model performance by increasing the model depth. Deep topology has been proven to outperform the shallow architecture in computer vision. In the past two years the top positions of the ImageNet contest have always been occupied by systems with tens or even hundreds of layers (Szegedy et al., 2015; He et al., 2016). But in NMT, the biggest depth used successfully is only six (Luong et al., 2015). We attribute this problem to the properties of the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) which is widely used in NMT. In the LSTM, there are more non-linear activations than in convolution layers. These activations significantly decrease the magnitude of the gradient in the deep topology, especially when the gradient propagates in recurrent form. There are also many efforts to increase the depth of the LSTM such as the work by Kalchbrenner et al. (2016), where the shortcuts do not avoid the nonlinear and recurrent computation.
In this work, we introduce a new type of linear connections for multi-layer recurrent networks. These connections, which are called fast-forward connections, play an essential role in building a deep topology with depth of 16. In addition, we introduce an interleaved bi-directional architecture to stack LSTM layers in the encoder. This topology can be used for both the encoder-decoder network and the attention network. On the WMT'14 English-to-French task, this is the deepest NMT topology that has ever been investigated. With our deep attention model, the BLEU score can be improved to 37.7, outperforming the shallow model which has six layers (Luong et al., 2015) by 6.2 BLEU points. This is also the first time on this task that a single NMT model achieves state-of-the-art performance and outperforms the best conventional SMT system (Durrani et al., 2014) with an improvement of 0.7. Even without using the attention mechanism, we can still achieve 36.3 with a single model. After model ensembling and unknown word processing, the BLEU score can be further improved to 40.4. When evaluated on the subset of the test corpus without unknown words, our model achieves 41.4. As a reference, previous work showed that oracle re-scoring of the 1000-best sequences generated by the SMT model can achieve a BLEU score of about 45 (Sutskever et al., 2014). Our models are also validated on the more difficult WMT'14 English-to-German task.
# 2 Neural Machine Translation
Neural machine translation aims at generating the target word sequence $y = \{y_1, \ldots, y_n\}$ given the source word sequence $x = \{x_1, \ldots, x_m\}$ with neural models. In this task, the likelihood $p(y \mid x, \theta)$ of the target sequence will be maximized (Forcada and Ñeco, 1997) with parameter $\theta$ to learn:
$$p(y \mid x; \theta) = \prod_{j=1}^{m+1} p(y_j \mid y_{0:j-1}, x; \theta) \qquad (1)$$
where $y_{0:j-1}$ is the subsequence from $y_0$ to $y_{j-1}$, and $y_0$ and $y_{m+1}$ denote the start mark and end mark of the target sequence respectively.
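As a toy illustration of Eq. 1, the sequence likelihood is simply the product of the per-step conditionals, usually accumulated in log space; the probabilities below are made up.

```python
import numpy as np

# step_probs[j] stands for p(y_j | y_{0:j-1}, x; theta); the name is ours.
def sequence_log_likelihood(step_probs):
    return float(np.sum(np.log(step_probs)))

print(sequence_log_likelihood([0.4, 0.7, 0.9, 0.8]))  # three words plus the end mark
```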
The process can be explicitly split into an encod- ing part, a decoding part and the interface between these two parts. In the encoding part, the source se- quence is processed and transformed into a group of vectors e = {e1, · · · , em} for each time step. Fur- ther operations will be used at the interface part to extract the ï¬nal representation c of the source se- quence from e. At the decoding step, the target se- quence is generated from the representation c.
Recently, there have been two types of NMT models which are different in the interface part. In the encoder-decoder model (Sutskever et al., 2014), a single vector extracted from e is used as the representation. In the attention model (Bahdanau et al., 2015), c is dynamically obtained according to the relationship between the target sequence and the source sequence.
The recurrent neural network (RNN), or its spe- ciï¬c form the LSTM, is generally used as the basic unit of the encoding and decoding part. However, the topology of most of the existing models is shal- low. In the attention network, the encoding part and the decoding part have only one LSTM layer respec- tively. In the encoder-decoder network, researchers have used at most six LSTM layers (Luong et al., 2015). Because machine translation is a difï¬cult problem, we believe more complex encoding and decoding architecture is needed for modeling the re- lationship between the source sequence and the tar- get sequence. In this work, we focus on enhancing the complexity of the encoding/decoding architec- ture by increasing the model depth.
Deep neural models have been studied in a wide range of problems. In computer vision, models with more than ten convolution layers outperform shallow ones on a series of image tasks in recent years (Srivastava et al., 2015; He et al., 2016; Szegedy et al., 2015). Different kinds of shortcut connections are proposed to decrease the length of the gradient propagation path. Training networks based on LSTM layers, which are widely used in language problems, is a much more challenging task. Because of the existence of many more nonlin- ear activations and the recurrent computation, gradi- ent values are not stable and are generally smaller. Following the same spirit for convolutional net- works, a lot of effort has also been spent on training deep LSTM networks. Yao et al. (2015) introduced depth-gated shortcuts, connecting LSTM cells at ad- jacent layers, to provide a fast way to propagate the gradients. They validated the modiï¬cation of these shortcuts on an MT task and a language modeling task. However, the best score was obtained using models with three layers. Similarly, Kalchbrenner et al. (2016) proposed a two dimensional structure for the LSTM. Their structure decreases the number of nonlinear activations and path length. However, the gradient propagation still relies on the recurrent computation. The investigations were also made on question-answering to encode the questions, where at most two LSTM layers were stacked (Hermann et al., 2015).
Based on the above considerations, we propose new connections to facilitate gradient propagation in the following section.
# 3 Deep Topology
We build the deep LSTM network with the new pro- posed linear connections. The shortest paths through the proposed connections do not include any non- linear transformations and do not rely on any recur- rent computation. We call these connections fast- forward connections. Within the deep topology, we also introduce an interleaved bi-directional architec- ture to stack the LSTM layers.
# 3.1 Network
Our entire deep neural network is shown in Fig. 2. This topology can be divided into three parts: the
encoder part (P-E) on the left, the decoder part (P- D) on the right and the interface between these two parts (P-I) which extracts the representation of the source sequence. We have two instantiations of this topology: Deep-ED and Deep-Att, which corre- spond to the extension of the encoder-decoder net- work and the attention network respectively. Our main innovation is the novel scheme for connecting adjacent recurrent layers. We will start with the ba- sic RNN model for the sake of clarity. Recurrent layer: When an input sequence {x1, . . . , xm} is given to a recurrent layer, the out- put ht at each time step t can be computed as (see Fig. 1 (a))
$$h_t = \sigma(W_f x_t + W_r h_{t-1}) = \mathrm{RNN}(W_f x_t, h_{t-1}) = \mathrm{RNN}(f_t, h_{t-1}) \qquad (2)$$
where the bias parameter is not included for simplic- ity. We use a red circle and a blue empty square to denote an input and a hidden state. A blue square with a â-â denotes the previous hidden state. A dot- ted line means that the hidden state is used recur- rently. This computation can be equivalently split into two consecutive steps:
• Feed-forward computation: $f_t = W_f x_t$. Left part in Fig. 1 (b); the "f" block.
• Recurrent computation: $h_t = \mathrm{RNN}(f_t, h_{t-1})$. Right part and the sum operation (+) followed by activation in Fig. 1 (b); the "r" block.
For a deep topology with stacked recurrent layers, the input of each block "f" at recurrent layer $k$ (denoted by $f^k$) is usually the output of block "r" at its previous recurrent layer $k-1$ (denoted by $h^{k-1}$). In our work, we add fast-forward connections (F-F connections) which connect two feed-forward computation blocks "f" of adjacent recurrent layers. It means that each block "f" at recurrent layer $k$ takes both the outputs of block "f" and block "r" at its previous layer as input (Fig. 1 (c)). F-F connections are denoted by dashed red lines in Fig. 1 (c) and Fig. 2. The path of F-F connections contains neither non-linear activations nor recurrent computation. It provides a fast path for information to propagate, so we call this path fast-forward connections.
Figure 1: RNN models. The recurrent use of a hidden state is denoted by dotted lines. A "-" mark denotes the hidden value of the previous time step. (a): Basic RNN. (b): Basic RNN with intermediate computational state and the sum operation (+) followed by activation. It consists of block "f" and block "r", and is equivalent to (a). (c): Two stacked RNN layers with F-F connections denoted by dashed red lines.
In order to learn more temporal dependencies, the sequences can be processed in different directions at each pair of adjacent recurrent layers. This is quantitatively expressed in Eq. 3:
$$f_t^k = W_f^k \cdot [f_t^{k-1}, h_t^{k-1}], \quad k > 1$$
$$f_t^k = W_f^k x_t, \quad k = 1$$
$$h_t^k = \mathrm{RNN}^k\!\left(f_t^k,\; h_{t+(-1)^k}^k\right) \qquad (3)$$
The opposite directions are marked by the direction term $(-1)^k$. At the first recurrent layer, the block "f" takes $x_t$ as the input. $[\,,\,]$ denotes the concatenation of vectors. This is shown in Fig. 1 (c). The two changes are summarized here:
• We add a connection between $f_t^k$ and $f_t^{k-1}$. Without $f_t^{k-1}$, our model will be reduced to the traditional stacked model.
• We alternate the RNN direction at different layers $k$ with the direction term $(-1)^k$. If we fix the direction term to $-1$, all layers work in the forward direction. (A small sketch of this scheme is given below.)
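To make Eq. 3 concrete, the following is a minimal numpy sketch of a stack of basic RNN layers with F-F connections and alternating directions. The shapes, names, and the tanh nonlinearity are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def rnn_layer(f_seq, W_r, backward=False):
    # basic recurrence h_t = tanh(f_t + W_r h_{t +/- 1})
    T, d = f_seq.shape
    h, prev = np.zeros((T, d)), np.zeros(d)
    order = range(T - 1, -1, -1) if backward else range(T)
    for t in order:
        prev = np.tanh(f_seq[t] + W_r @ prev)
        h[t] = prev
    return h

def deep_ff_stack(x_seq, Wf_list, Wr_list):
    f = x_seq @ Wf_list[0].T                      # layer 1: f_t = W_f x_t
    h = rnn_layer(f, Wr_list[0])                  # k = 1 runs forward
    for k in range(1, len(Wf_list)):
        inp = np.concatenate([f, h], axis=1)      # F-F connection: [f^{k-1}, h^{k-1}]
        f = inp @ Wf_list[k].T
        h = rnn_layer(f, Wr_list[k], backward=(k % 2 == 1))  # direction (-1)^k
    return f, h
```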
LSTM layer: In our experiments, instead of an RNN, a speciï¬c type of recurrent layer called LSTM (Hochreiter and Schmidhuber, 1997; Graves et al., 2009) is used. The LSTM is structurally more
complex than the basic RNN in Eq. 2. We define the computation of the LSTM as a function which maps the input $f$ and its state-output pair $(h, s)$ at the previous time step to the current state-output pair. The exact computations for $(h_t, s_t) = \mathrm{LSTM}(f_t, h_{t-1}, s_{t-1})$ are the following:
$$[z, z_\rho, z_\phi, z_\omega] = f_t + W_r h_{t-1}$$
$$s_t = \sigma_i(z) \odot \sigma_g(z_\rho + s_{t-1} \odot \theta_\rho) + \sigma_g(z_\phi + s_{t-1} \odot \theta_\phi) \odot s_{t-1}$$
$$h_t = \sigma_o(s_t) \odot \sigma_g(z_\omega + s_t \odot \theta_\omega) \qquad (4)$$
where $[z, z_\rho, z_\phi, z_\omega]$ is the concatenation of four vectors of equal size, $\odot$ means element-wise multiplication, $\sigma_i$ is the input activation function, $\sigma_o$ is the output activation function, $\sigma_g$ is the activation function for gates, and $W_r$, $\theta_\rho$, $\theta_\phi$, and $\theta_\omega$ are the parameters of the LSTM. It is slightly different from the standard notation in that we do not have a matrix to multiply with the input $f$ in our notation.
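A direct transcription of Eq. 4 into numpy may help; it is a sketch with our own variable names, taking $\sigma_i = \sigma_o = \tanh$ and $\sigma_g = \mathrm{sigmoid}$ as specified later in Section 4.2.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(f_t, h_prev, s_prev, W_r, th_rho, th_phi, th_omega):
    # f_t already equals W_f x (or the F-F concatenation); no input matrix here.
    z, z_rho, z_phi, z_omega = np.split(f_t + W_r @ h_prev, 4)
    s_t = (np.tanh(z) * sigmoid(z_rho + s_prev * th_rho)
           + sigmoid(z_phi + s_prev * th_phi) * s_prev)
    h_t = np.tanh(s_t) * sigmoid(z_omega + s_t * th_omega)
    return h_t, s_t
```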
With this notation, we can write down the com- putations for our deep bi-directional LSTM model with F-F connections:
$$f_t^k = W_f^k \cdot [f_t^{k-1}, h_t^{k-1}], \quad k > 1$$
$$f_t^k = W_f^k x_t, \quad k = 1$$
$$(h_t^k, s_t^k) = \mathrm{LSTM}^k\!\left(f_t^k,\; h_{t+(-1)^k}^k,\; s_{t+(-1)^k}^k\right) \qquad (5)$$
where $x_t$ is the input to the deep bi-directional LSTM model. For the encoder, $x_t$ is the embedding of the $t$th word in the source sentence. For the decoder, $x_t$ is the concatenation of the embedding of the $t$th word in the target sentence and the encoder representation for step $t$.
In our final model two additional operations are used with Eq. 5, as shown in Eq. 6. $\mathrm{Half}(f)$ denotes the first half of the elements of $f$, and $\mathrm{Dr}(h)$ is the dropout operation (Hinton et al., 2012) which randomly sets an element of $h$ to zero with a certain probability. The use of $\mathrm{Half}(\cdot)$ is to reduce the parameter size and does not affect the performance. We observed noticeable performance degradation when using only the first third of the elements of "f".
$$f_t^k = W_f^k \cdot [\mathrm{Half}(f_t^{k-1}),\; \mathrm{Dr}(h_t^{k-1})], \quad k > 1 \qquad (6)$$
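In code, Eq. 6 looks roughly as follows; a sketch assuming numpy, with the train-time rescaling of dropout omitted.

```python
import numpy as np

def ff_input(f_prev, h_prev, W_f, p_d=0.1, rng=np.random.default_rng(0)):
    half = f_prev[: f_prev.shape[0] // 2]       # Half(f^{k-1})
    keep = rng.random(h_prev.shape) >= p_d      # Dr(h^{k-1}): dropout with ratio p_d
    return W_f @ np.concatenate([half, h_prev * keep])
```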
With the F-F connections, we build a fast channel to propagate the gradients in the deep topology. F-F
Figure 2: The network. It includes three parts from left to right: encoder part (P-E), interface (P-I) and decoder part (P-D). We only show the topology of Deep-Att as an example. "f" and "r" blocks correspond to the feed-forward part and the subsequent LSTM computation. The F-F connections are denoted by dashed red lines.
connections can accelerate the model convergence while improving the performance. A similar idea was also used in (He et al., 2016; Zhou and Xu, 2015). Encoder: The LSTM layers are stacked following Eq. 5. We call this type of encoder an interleaved bi-directional encoder. In addition, there are two similar columns ($a_1$ and $a_2$) in the encoder part. Each column consists of $n_e$ stacked LSTM layers. There is no connection between the two columns. The first layers of the two columns process the word representations of the source sequence in different directions. At the last LSTM layers, there are two groups of vectors representing the source sequence. The group size is the same as the length of the input sequence. Interface: Prior encoder-decoder models and attention models are different in their method of extracting the representations of the source sequences. In our work, as a consequence of the introduced F-F connections, we have 4 output vectors ($h_t^{n_e}$ and $f_t^{n_e}$ of both columns). The representations are modified for both Deep-ED and Deep-Att.
For Deep-ED, $e_t$ is static and consists of four parts. 1: the last time step output $h_m^{n_e}$ of the first column. 2: the max-operation $\mathrm{Max}(\cdot)$ of $h_t^{n_e}$ at all time steps of the second column, denoted by $\mathrm{Max}(h_t^{n_e,a_2})$, where $\mathrm{Max}(\cdot)$ denotes obtaining the maximal value for each dimension over $t$. 3: $\mathrm{Max}(f_t^{n_e,a_1})$. 4: $\mathrm{Max}(f_t^{n_e,a_2})$. The max-operation and last time step state extraction provide complementary information but do not affect the performance much. $e_t$ is used as the final representation $c_t$.
For Deep-Att, we do not need the above two operations. We only concatenate the 4 output vectors at each time step to obtain $e_t$, and a soft attention mechanism (Bahdanau et al., 2015) is used to calculate the final representation $c_t$ from $e_t$. $e_t$ is summarized as:
$$\text{Deep-ED:} \quad e_t = [h_m^{n_e,a_1},\; \mathrm{Max}(h_t^{n_e,a_2}),\; \mathrm{Max}(f_t^{n_e,a_1}),\; \mathrm{Max}(f_t^{n_e,a_2})]$$
$$\text{Deep-Att:} \quad e_t = [h_t^{n_e,a_1},\; h_t^{n_e,a_2},\; f_t^{n_e,a_1},\; f_t^{n_e,a_2}] \qquad (7)$$
Note that the vector dimensionality of $f$ is four times larger than that of $h$ (see Eq. 4). $c_t$ is summarized as:
$$\text{Deep-ED:} \quad c_t = e_t \;(\text{const})$$
$$\text{Deep-Att:} \quad c_t = \sum_{t'=1}^{m} \alpha_{t,t'}\, W_p e_{t'} \qquad (8)$$
$\alpha_{t,t'}$ is the normalized attention weight computed by:
$$\alpha_{t,t'} = \frac{\exp\!\big(a(W_p e_{t'},\, h_{t-1}^{1,dec})\big)}{\sum_{t''=1}^{m} \exp\!\big(a(W_p e_{t''},\, h_{t-1}^{1,dec})\big)} \qquad (9)$$
$h_{t-1}^{1,dec}$ is the first hidden layer output in the decoding part. $a(\cdot)$ is an alignment model described in (Bahdanau et al., 2015). For Deep-Att, in order to reduce the memory cost, we linearly project (with $W_p$)
the concatenated vector $e_t$ to a vector with 1/4 dimension size, denoted by the (fully connected) block "fc" in Fig. 2. Decoder: The decoder follows Eq. 5 and Eq. 6 with the direction term fixed to $-1$. At the first layer, we use the following $x_t$:
$$x_t = [c_t, y_{t-1}] \qquad (10)$$
$y_{t-1}$ is the target word embedding at the previous time step and $y_0$ is zero. There is a single column of $n_d$ stacked LSTM layers. We also use the F-F connections like those in the encoder and all layers are in the forward direction. Note that at the last LSTM layer, we only use $h_t$ to make the prediction with a softmax layer.
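A small sketch may make Eqs. 8-10 concrete. It is our own illustration, assuming numpy, with a bilinear scorer standing in for the alignment model $a(\cdot)$ of Bahdanau et al. (2015); all names are ours.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def attention_context(e_seq, h_dec, W_p, W_a):
    proj = e_seq @ W_p.T                 # W_p e_t' for every source step
    scores = proj @ (W_a @ h_dec)        # stand-in for a(W_p e_t', h^{1,dec}_{t-1})
    alpha = softmax(scores)              # Eq. 9: normalized attention weights
    return alpha @ proj                  # Eq. 8: c_t = sum_t' alpha_{t,t'} W_p e_t'

def decoder_input(c_t, y_prev_embed):
    return np.concatenate([c_t, y_prev_embed])  # Eq. 10: x_t = [c_t, y_{t-1}]
```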
Although the network is deep, the training tech- nique is straightforward. We will describe this in the next part.
# 3.2 Training technique
We take the parallel data as the only input without using any monolingual data for either word repre- sentation pre-training or language modeling. Be- cause of the deep bi-directional structure, we do not need to reverse the sequence order as Sutskever et al. (2014).
The deep topology brings difï¬culties for the model training, especially when ï¬rst order methods such as stochastic gradient descent (SGD) (LeCun et al., 1998) are used. The parameters should be properly initialized and the converging process can be slow. We tried several optimization techniques such as AdaDelta (Zeiler, 2012), RMSProp (Tiele- man and Hinton, 2012) and Adam (Kingma and Ba, 2015). We found that all of them were able to speed up the process a lot compared to simple SGD while no signiï¬cant performance difference was ob- served among them. In this work, we chose Adam for model training and do not present a detailed com- parison with other optimization methods.
Dropout (Hinton et al., 2012) is also used to avoid over-fitting. It is utilized on the LSTM nodes $h_t^k$ (see Eq. 5) with a ratio of $p_d$ for both the encoder and decoder.
During the whole model training process, we keep all hyper parameters ï¬xed without any intermediate interruption. The hyper parameters are selected ac- cording to the performance on the development set.
For such a deep and large network, it is not easy to determine the tuning strategy and this will be con- sidered in future work.
# 3.3 Generation
We use the common left-to-right beam-search method for sequence generation. At each time step $t$, the word $y_t$ can be predicted by:

$$\hat{y}_t = \arg\max_{y} P(y \mid \hat{y}_{0:t-1}, x; \theta) \qquad (11)$$

where $\hat{y}_t$ is the predicted target word and $\hat{y}_{0:t-1}$ is the generated sequence from time step 0 to $t-1$. We keep the $n_b$ best candidates according to Eq. 11 at each time step, until the end of sentence mark is generated. The hypotheses are ranked by the total likelihood of the generated sequence, although normalized likelihood is used in some works (Jean et al., 2015).
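The following is a minimal sketch of this procedure, assuming a `step(prefix)` function that returns a dictionary of next-word log-probabilities; `step` and the sentence markers are our stand-ins, not part of the paper.

```python
import heapq

def beam_search(step, n_b=3, max_len=100, eos="</s>"):
    beams, finished = [(0.0, ["<s>"])], []        # (total log-likelihood, prefix)
    for _ in range(max_len):
        candidates = []
        for logp, prefix in beams:
            for word, lp in step(prefix).items():
                candidates.append((logp + lp, prefix + [word]))
        beams = heapq.nlargest(n_b, candidates, key=lambda c: c[0])
        finished += [b for b in beams if b[1][-1] == eos]
        beams = [b for b in beams if b[1][-1] != eos]
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[0])[1]
```

Hypotheses are compared by total log-likelihood here, matching the unnormalized ranking described above.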
# 4 Experiments
We evaluate our method mainly on the widely used WMTâ14 English-to-French translation task. In or- der to validate our model on more difï¬cult lan- guage pairs, we also provide results on the WMTâ14 English-to-German translation task. Our models are implemented in the PADDLE (PArallel Distributed Deep LEarning) platform.
# 4.1 Data sets
For both tasks, we use the full WMTâ14 parallel cor- pus as our training data. The detailed data sets are listed below:
• English-to-French: Europarl v7, Common Crawl, UN, News Commentary, Gigaword
• English-to-German: Europarl v7, Common Crawl, News Commentary
In total, the English-to-French corpus includes 36 million sentence pairs, and the English-to-German corpus includes 4.5 million sentence pairs. The news-test-2012 and news-test-2013 are concate- nated as our development set, and the news-test- 2014 is the test set. Our data partition is consistent with previous works on NMT (Luong et al., 2015; Jean et al., 2015) to ensure fair comparison.
For the source language, we select the most fre- quent 200K words as the input vocabulary. For the target language we select the most frequent 80K French words and the most frequent 160K German words as the output vocabulary. The full vocab- ulary of the German corpus is larger (Jean et al., 2015), so we select more German words to build the target vocabulary. Out-of-vocabulary words are re- placed with the unknown symbol (unk). For com- plete comparison to previous work on the English- to-French task, we also show the results with a smaller vocabulary of 30K input words and 30K out- put words on the sub train set with selected 12M par- allel sequences (Schwenk, 2014; Sutskever et al., 2014; Cho et al., 2014).
# 4.2 Model settings
We have two models as described above, named Deep-ED and Deep-Att. Both models have exactly the same conï¬guration and layer size except the in- terface part P-I.
We use 256 dimensional word embeddings for both the source and target languages. All LSTM layers, including the 2Ãne layers in the encoder and the nd layers in the decoder, have 512 memory cells. The output layer size is the same as the size of the target vocabulary. The dimension of ct is 5120 and 1280 for Deep-ED and Deep-Att respectively. For each LSTM layer, the activation functions for gates, inputs and outputs are sigmoid, tanh, and tanh re- spectively.
Our network is narrow on word embeddings and LSTM layers. Note that in previous work (Sutskever et al., 2014; Bahdanau et al., 2015), 1000 dimensional word embeddings and 1000 di- mensional LSTM layers are used. We also tried larger scale models but did not obtain further im- provements.
# 4.3 Optimization
Note that each LSTM layer includes two parts as described in Eq. 3, feed-forward computation and recurrent computation. Since there are non-linear activations in the recurrent computation, a larger learning rate $l_r = 5 \times 10^{-4}$ is used, while for the feed-forward computation a smaller learning rate $l_f = 4 \times 10^{-5}$ is used. Word embeddings and the softmax layer also use this learning rate $l_f$. We refer to all the parameters not used for recurrent computation as the non-recurrent part of the model.
Because of the large model size, we use strong L2 regularization to constrain the parameter matrix v in the following way:
$$v \leftarrow v - l \cdot (g + r \cdot v) \qquad (12)$$
Here $r$ is the regularization strength, $l$ is the corresponding learning rate, and $g$ stands for the gradients of $v$. The two embedding layers are not regularized. All the other layers have the same $r = 2$.
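As a small illustration, Eq. 12 amounts to folding the L2 term into the gradient before the update. The paper trains with Adam, so this sketch only isolates the regularisation step, with our own names.

```python
def l2_regularized_update(v, g, l, r=2.0):
    # v: parameter matrix, g: its gradient, l: the learning rate for this part
    return v - l * (g + r * v)
```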
The parameters of the recurrent computation part are initialized to zero. All non-recurrent parts are randomly initialized with zero mean and standard deviation of 0.07. A detailed guide for setting hyper- parameters can be found in (Bengio, 2012).
The dropout ratio $p_d$ is 0.1. In each batch, there are 500 ∼ 800 sequences in our work. The exact number depends on the sequence lengths and model size. We also find that a larger batch size results in better convergence although the improvement is not large. However, the largest batch size is constrained by the GPU memory. We use 4 ∼ 8 GPU machines (each has 4 K40 GPU cards) running for 10 days to train the full model with parallelization at the data batch level. It takes nearly 1.5 days for each pass.
One thing we want to emphasize here is that our deep model is not sensitive to these settings. Small variation does not affect the ï¬nal performance.
# 4.4 Results
We evaluate the same way as previous NMT works (Sutskever et al., 2014; Luong et al., 2015; Jean et al., 2015). All reported BLEU scores are computed with the multi-bleu.perl script¹ which is also used in the above works. The results are for tokenized and case-sensitive evaluation.
# 4.4.1 Single models

English-to-French: First we list our single model results on the English-to-French task in Tab. 1. In the first block we show the results with the full corpus. The previous best single NMT encoder-decoder model (Enc-Dec) with six layers achieves BLEU=31.5 (Luong et al., 2015). From Deep-ED,
¹ https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl
we obtain the BLEU score of 36.3, which outperforms the Enc-Dec model by 4.8 BLEU points. This result is even better than the ensemble result of eight Enc-Dec models, which is 35.6 (Luong et al., 2015). This shows that, in addition to the convolutional layers for computer vision, deep topologies can also work for LSTM layers. For Deep-Att, the performance is further improved to 37.7. We also list the previous state-of-the-art performance from a conventional SMT system (Durrani et al., 2014) with the BLEU of 37.0. This is the first time that a single NMT model trained in an end-to-end form beats the best conventional system on this task.
We also show the results on the smaller data set with 12M sentence pairs and 30K vocabulary in the second block. The two attention models, RNNsearch (Bahdanau et al., 2015) and RNNsearch-LV (Jean et al., 2015), achieve BLEU scores of 28.5 and 32.7 respectively. Note that RNNsearch-LV uses a large output vocabulary of 500K words based on the standard attention model RNNsearch. We obtain BLEU=35.9 which outperforms its corresponding shallow model RNNsearch by 7.4 BLEU points. The SMT result from (Schwenk, 2014) is also listed and falls behind our model by 2.6 BLEU points.
| Methods | Data | Voc | BLEU |
|---|---|---|---|
| Enc-Dec (Luong, 2015) | 36M | 80K | 31.5 |
| SMT (Durrani, 2014) | 36M | Full | 37.0 |
| Deep-ED (Ours) | 36M | 80K | 36.3 |
| Deep-Att (Ours) | 36M | 80K | 37.7 |
| RNNsearch (Bahdanau, 2014) | 12M | 30K | 28.5 |
| RNNsearch-LV (Jean, 2015) | 12M | 500K | 32.7 |
| SMT (Schwenk, 2014) | 12M | Full | 33.3 |
| Deep-Att (Ours) | 12M | 30K | 35.9 |
Table 1: English-to-French task: BLEU scores of single neural models. We also list the conventional SMT system for comparison.
Moreover, during the generation process, we obtained the best BLEU score with beam size = 3 (when the beam size is 2, there is only a 0.1 difference in BLEU score). This is different from other works listed in Tab. 1, where the beam size is 12 (Jean et al., 2015; Sutskever et al., 2014). We attribute this difference to the improved model performance, where the ground truth generally exists in the top hypothesis. Consequently, with the much smaller beam size, the generation efficiency is significantly improved.
Next we list the effect of the novel F-F connections in our Deep-Att model of shallow topology in Tab. 2. When $n_e = 1$ and $n_d = 1$, the BLEU scores are 31.2 without F-F and 32.3 with F-F. Note that the model without F-F is exactly the standard attention model (Bahdanau et al., 2015). Since there is only a single layer, the use of F-F connections means that at the interface part we include $f_t$ into the representation (see Eq. 7). We find F-F connections bring an improvement of 1.1 in BLEU. After we increase our model depth to $n_e = 2$ and $n_d = 2$, the improvement is enlarged to 1.4. When the model is trained with larger depth without F-F connections, we find that the parameter exploding problem (Bengio et al., 1994) happens so frequently that we could not finish training. This suggests that F-F connections provide a fast way for gradient propagation.
| Models | F-F | n_e | n_d | BLEU |
|---|---|---|---|---|
| Deep-Att | No | 1 | 1 | 31.2 |
| Deep-Att | Yes | 1 | 1 | 32.3 |
| Deep-Att | No | 2 | 2 | 33.3 |
| Deep-Att | Yes | 2 | 2 | 34.7 |
Table 2: The effect of F-F. We list the BLEU scores of Deep-Att with and without F-F. Because of the parameter exploding problem, we cannot list the model performance of larger depth without F-F. For $n_e = 1$ and $n_d = 1$, F-F connections only contribute to the representation at the interface part (see Eq. 7).
Removing F-F connections also reduces the cor- responding model size. In order to ï¬gure out the effect of F-F comparing models with the same pa- rameter size, we increase the LSTM layer width of Deep-Att without F-F. In Tab. 3 we show that, after using a two times larger LSTM layer width of 1024, we can only obtain a BLEU score of 33.8, which is still worse than the corresponding Deep-Att with F-F.
We also notice that the interleaved bi-directional encoder starts to work when the encoder depth is larger than 1. The effect of the interleaved bi- directional encoder is shown in Tab. 4. For our largest model with ne = 9 and nd = 7, we compared the BLEU scores of the interleaved bi-directional encoder and the uni-directional encoder (where all LSTM layers work in forward direction). We ï¬nd
| Models | F-F | n_e | n_d | width | BLEU |
|---|---|---|---|---|---|
| Deep-Att | No | 2 | 2 | 512 | 33.3 |
| Deep-Att | No | 2 | 2 | 1024 | 33.8 |
| Deep-Att | Yes | 2 | 2 | 512 | 34.7 |
Table 3: BLEU scores with different LSTM layer width in Deep-Att. After using two times larger LSTM layer width of 1024, we can only obtain BLEU score of 33.8. It is still behind the corresponding Deep-Att with F-F.
there is a gap of about 1.5 points between these two encoders for both Deep-Att and Deep-ED.
| Models | Encoder | n_e | n_d | BLEU |
|---|---|---|---|---|
| Deep-Att | Bi | 9 | 7 | 37.7 |
| Deep-Att | Uni | 9 | 7 | 36.2 |
| Deep-ED | Bi | 9 | 7 | 36.3 |
| Deep-ED | Uni | 9 | 7 | 34.9 |
Table 4: The effect of the interleaved bi-directional en- coder. We list the BLEU scores of our largest Deep-Att and Deep-ED models. The encoder term Bi denotes that the interleaved bi-directional encoder is used. Uni de- notes a model where all LSTM layers work in forward direction.
Next we look into the effect of model depth. In Tab. 5, starting from ne = 1 and nd = 1 and gradu- ally increasing the model depth, we signiï¬cantly in- crease BLEU scores. With ne = 9 and nd = 7, the best score for Deep-Att is 37.7. We tried to increase the LSTM width based on this, but obtained little improvement. As we stated in Sec.2, the complexity of the encoder and decoder, which is related to the model depth, is more important than the model size. We also tried a larger depth, but the results started to get worse. With our topology and training tech- nique, ne = 9 and nd = 7 is the best depth we can achieve.
| Models | F-F | n_e | n_d | Col | BLEU |
|---|---|---|---|---|---|
| Deep-Att | Yes | 1 | 1 | 2 | 32.3 |
| Deep-Att | Yes | 2 | 2 | 2 | 34.7 |
| Deep-Att | Yes | 5 | 3 | 2 | 36.0 |
| Deep-Att | Yes | 9 | 7 | 2 | 37.7 |
| Deep-Att | Yes | 9 | 7 | 1 | 36.6 |
Table 5: BLEU score of Deep-Att with different model depth. With ne = 1 and nd = 1, F-F connections only contribute to the representation at interface part where ft is included (see Eq. 7).
The last line in Tab. 5 shows the BLEU score of
36.6 of our deepest model, where only one encoding column (Col = 1) is used. We find a 1.1 BLEU point degradation with a single encoding column. Note that the uni-directional models in Tab. 4 still have two encoding columns. In order to find out whether this degradation is caused by the decreased parameter size, we test a wider model with 1024 memory blocks for the LSTM layers. It is shown in Tab. 6 that there is a minor improvement of only 0.1. We attribute this to the complementary information provided by the double encoding column.
| Models | F-F | n_e | n_d | Col | width | BLEU |
|---|---|---|---|---|---|---|
| Deep-Att | Yes | 9 | 7 | 2 | 512 | 37.7 |
| Deep-Att | Yes | 9 | 7 | 1 | 512 | 36.6 |
| Deep-Att | Yes | 9 | 7 | 1 | 1024 | 36.7 |
Table 6: Comparison of encoders with different number of columns and LSTM layer width.
English-to-German: We also validate our deep topology on the English-to-German task. The English-to-German task is considered relatively more difficult, because of the lower similarity between these two languages. Since the German vocabulary is much larger than the French vocabulary, we select the 160K most frequent words as the target vocabulary. All the other hyper parameters are exactly the same as those in the English-to-French task.
We list our single model Deep-Att performance in Tab. 7. Our single model result with BLEU=20.6 is similar to the conventional SMT result of 20.7 (Buck et al., 2014). We also outperform the shallow at- tention models as shown in the ï¬rst two lines in Tab. 7. All the results are consistent with those in the English-to-French task.
| Methods | Data | Voc | BLEU |
|---|---|---|---|
| RNNsearch (Jean, 2015) | 4.5M | 50K | 16.5 |
| RNNsearch-LV (Jean, 2015) | 4.5M | 500K | 16.9 |
| SMT (Buck, 2014) | 4.5M | Full | 20.7 |
| Deep-Att (Ours) | 4.5M | 160K | 20.6 |
Table 7: English-to-German task: BLEU scores of single neural models. We also list the conventional SMT system for comparison.
# 4.4.2 Post processing
Two post processing techniques are used to im- prove the performance further on the English-to- French task.
First, three Deep-Att models are built for the ensemble results. They are initialized with different random parameters; in addition, the training corpus for these models is shuffled with different random seeds. We sum over the predicted probabilities of the target words and normalize the final distribution to generate the next word. It is shown in Tab. 8 that the model ensemble can improve the performance further to 38.9. In Luong et al. (2015) and Jean et al. (2015) there are eight models for the best scores, but we only use three models and we do not obtain further gain from more models.
| Model | Methods | Data | Voc | BLEU |
|---|---|---|---|---|
| Deep-ED | Single | 36M | 80K | 36.3 |
| Deep-Att | Single | 36M | 80K | 37.7 |
| Deep-Att | Single+PosUnk | 36M | 80K | 39.2 |
| Deep-Att | Ensemble | 36M | 80K | 38.9 |
| Deep-Att | Ensemble+PosUnk | 36M | 80K | 40.4 |
| SMT (Durrani, 2014) | - | 36M | Full | 37.0 |
| Enc-Dec | Ensemble+PosUnk | 36M | 80K | 37.5 |
Table 8: BLEU scores of different models. The ï¬rst two blocks are our results of two single models and mod- els with post processing. In the last block we list two baselines of the best conventional SMT system and NMT system.
Second, we recover the unknown words in the generated sequences with the Positional Unknown (PosUnk) model introduced in (Luong et al., 2015). The full parallel corpus is used to obtain the word mappings (Liang et al., 2006). We find this method provides an additional 1.5 BLEU points, which is consistent with the conclusion in Luong et al. (2015). We obtain the new BLEU score of 39.2 with a single Deep-Att model. For the ensemble models of Deep-Att, the BLEU score rises to 40.4. In the last two lines, we list the conventional SMT model (Durrani et al., 2014) and the previous best neural-model-based system Enc-Dec (Luong et al., 2015) for comparison. We find our best score outperforms the previous best score by nearly 3 points.
# 4.5 Analysis
# 4.5.1 Length
On the English-to-French task, we analyze the effect of the source sentence length on our mod- els as shown in Fig. 3. Here we show ï¬ve curves: our Deep-Att single model, our Deep-Att ensemble model, our Deep-ED model, a previously proposed Enc-Dec model with four layers (Sutskever et al., 2014) and an SMT model (Durrani et al., 2014). We ï¬nd our Deep-Att model works better than the
Figure 3: BLEU scores vs. source sequence length. Five lines are our Deep-Att single model, Deep-Att ensem- ble model, our Deep-ED model, previous Enc-Dec model with four layers and SMT model.
previous two models (Enc-Dec and SMT) on nearly all sentence lengths. It is also shown that for very long sequences with length over 70 words, the per- formance of our Deep-Att does not degrade, when compared to another NMT model Enc-Dec. Our Deep-ED also has much better performance than the shallow Enc-Dec model on nearly all lengths, al- though for long sequences it degrades and starts to fall behind Deep-Att.
# 4.5.2 Unknown words
Next we look into the detail of the effect of un- known words on the English-to-French task. We select the subset without unknown words on target sentences from the original test set. There are 1705 such sentences (56.8%). We compute the BLEU scores on this subset and the results are shown in Tab. 9. We also list the results from SMT model (Durrani et al., 2014) as a comparison.
We ï¬nd that the BLEU score of Deep-Att on this subset rises to 40.3, which has a gap of 2.6 with
| Model | Test set | Ratio (%) | BLEU |
|---|---|---|---|
| Deep-Att | Full | 100.0 | 37.7 |
| Ensemble | Full | 100.0 | 38.9 |
| SMT (Durrani) | Full | 100.0 | 37.0 |
| Deep-Att | Subset | 56.8 | 40.3 |
| Ensemble | Subset | 56.8 | 41.4 |
| SMT (Durrani) | Subset | 56.8 | 37.5 |
Table 9: BLEU scores of the subset of the test set without considering unknown words.
Figure 4: Token error rate on train set vs. test set. Square: Deep-Att ($n_e = 9$, $n_d = 7$). Circle: Deep-Att ($n_e = 5$, $n_d = 3$). Triangle: Deep-Att ($n_e = 1$, $n_d = 1$).
the score of 37.7 on the full test set. On this subset, the SMT model achieves 37.5, which is similar to its score of 37.0 on the full test set. This suggests that the difficulty on this subset is not much different from that on the full set. We therefore attribute the larger gap for Deep-Att to the existence of unknown words. We also compute the BLEU score of the ensemble model on this subset and obtain 41.4. As a reference related to human performance, in Sutskever et al. (2014), it has been tested that the BLEU score of oracle re-scoring the LIUM 1000-best results (Schwenk, 2014) is 45.
# 4.5.3 Over-fitting
Deep models have more parameters, and thus have a stronger ability to fit the large data set. However, our experimental results suggest that deep models are less prone to the problem of over-fitting. In Fig. 4, we show results from three models with different depths on the English-to-French task. These three models are evaluated by token error rate, which is defined as the ratio of incorrectly predicted words in the whole target sequence given the correct historical input. The curve with square marks corresponds to Deep-Att with $n_e = 9$ and $n_d = 7$. The curve with circle marks corresponds to $n_e = 5$ and $n_d = 3$. The curve with triangle marks corresponds to $n_e = 1$ and $n_d = 1$. We find that the deep model has better performance on the test set when the token error rate is the same as that of the shallow models on the training set. This shows that, with decreased token error rate, the deep model is more advantageous in avoiding the over-fitting phenomenon. We only plot the early training stage curves because, during the late training stage, the curves are not smooth.
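For concreteness, a sketch of the metric follows; it is our own helper, assuming token lists and teacher-forced predictions.

```python
def token_error_rate(predicted, reference):
    # fraction of positions where the prediction differs from the reference,
    # given the correct history (teacher forcing)
    wrong = sum(p != r for p, r in zip(predicted, reference))
    return wrong / len(reference)
```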
# 5 Conclusion
With the introduction of fast-forward connections to the deep LSTM network, we build a fast path with neither non-linear transformations nor recur- rent computation to propagate the gradients from the top to the deep bottom. On this path, gradients de- cay much slower compared to the standard deep net- work. This enables us to build the deep topology of NMT models.
We trained NMT models with a depth of 16, including 25 LSTM layers, and evaluated them mainly on the WMT'14 English-to-French translation task. This is the deepest topology that has been investigated in the NMT area on this task. We showed that our Deep-Att exhibits a 6.2 BLEU point improvement over the previous best single model, achieving a 37.7 BLEU score. This single end-to-end NMT model outperforms the best conventional SMT system (Durrani et al., 2014) and achieves a state-of-the-art performance. After utilizing unknown word processing and a model ensemble of three models, we obtained a BLEU score of 40.4, an improvement of 2.9 BLEU points over the previous best result. When evaluated on the subset of the test corpus without unknown words, our model achieves 41.4. Our model is also validated on the more difficult English-to-German task.
Our model is also efficient in sequence generation. The best results from both a single model and the model ensemble are obtained with a beam size of 3, much smaller than in previous NMT systems where the beam size is about 12 (Jean et al., 2015; Sutskever et al., 2014). From our analysis, we find that deep models are more advantageous for learning long sequences and that the deep topology is resistant to the over-fitting problem.
We tried deeper models and did not obtain further improvements with our current topology and training techniques. However, a depth of 16 is not very deep compared to models in computer vision (He et al., 2016). We believe we can benefit from deeper models, with new designs of topologies and training techniques, which remains future work.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representations.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166.

Yoshua Bengio. 2012. Practical Recommendations for Gradient-Based Training of Deep Architectures, pages 437–478. Springer Berlin Heidelberg, Berlin, Heidelberg.

Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In Proceedings of the Language Resources and Evaluation Conference.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empirical Methods in Natural Language Processing.

Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation.

Mikel L. Forcada and Ramón P. Ñeco. 1997. Recursive hetero-associative memories for translation. In Biological and Artificial Computation: From Neuroscience to Technology. Springer Berlin Heidelberg, Berlin, Heidelberg.

Alex Graves, Marcus Liwicki, Santiago Fernandez, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. 2009. A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5):855–868.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.

Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the Empirical Methods in Natural Language Processing.

Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2016. Grid long short-term memory. In Proceedings of International Conference on Learning Representations.

Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations.

P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of the North American Chapter of the Association for Computational Linguistics on Human Language Technology.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.

Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the North American Chapter of the Association of Computational Linguistics on Human Language Technology.

Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan L. Yuille. 2015. Deep captioning with multimodal recurrent neural networks (m-RNN). In Proceedings of International Conference on Learning Representations.

Holger Schwenk. 2014. http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper [online; accessed 03-September-2014]. University Le Mans.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. In Proceedings of the 32nd International Conference on Machine Learning, Deep Learning Workshop.

Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition.

Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4.

Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of the 32nd International Conference on Machine Learning, Deep Learning Workshop.

Kaisheng Yao, Trevor Cohn, Katerina Vylomova, Kevin Duh, and Chris Dyer. 2015. Depth-gated LSTM. arXiv:1508.03790.

Yang Yu, Wei Zhang, Chung-Wei Hang, Bing Xiang, and Bowen Zhou. 2015. Empirical study on deep learning models for QA. arXiv:1510.07526.

Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv:1212.5701.

Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. | {
"id": "1508.03790"
} |
1606.03152 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 |
# Policy Networks with Two-Stage Training for Dialogue Systems
# Mehdi Fatemi Layla El Asri Hannes Schulz Jing He Kaheer Suleman
# Maluuba Research, Le 2000 Peel, Montréal, QC H3A 2W5, first.last@maluuba.com
# Abstract
In this paper, we propose to use deep policy networks which are trained with an advantage actor-critic method for statistically optimised dialogue systems. First, we show that, on summary state and action spaces, deep Reinforcement Learning (RL) outperforms Gaussian Processes methods. Summary state and action spaces lead to good performance but require pre-engineering effort, RL knowledge, and domain expertise. In order to remove the need to define such summary spaces, we show that deep RL can also be trained efficiently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many dialogues to train, which makes them unappealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efficiently. Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actor-critic deep learner is considerably bootstrapped from a combination of supervised and batch RL. In addition, convergence to an optimal policy is significantly sped up compared to other deep RL methods initialized on the data with batch RL. All experiments are performed on a restaurant domain derived from the Dialogue State Tracking Challenge 2 (DSTC2) dataset.
# 1 Introduction
The statistical optimization of dialogue management in dialogue systems through Reinforcement Learning (RL) has been an active thread of research for more than two decades (Levin et al., 1997; Lemon and Pietquin, 2007; Laroche et al., 2010; Gašić et al., 2012; Daubigney et al., 2012). Dialogue management has been successfully modelled as a Partially Observable Markov Decision Process (POMDP) (Williams and Young, 2007; Gašić et al., 2012), which leads to systems that can learn from data and which are robust to noise. In this context, a dialogue between a user and a dialogue system is framed as a sequential process where, at each turn, the system has to act based on what it has understood so far of the user's utterances.
Unfortunately, POMDP-based dialogue managers have been unfit for online deployment because they typically require several thousands of dialogues for training (Gašić et al., 2010, 2012). Nevertheless, recent work has shown that it is possible to train a POMDP-based dialogue system on just a few hundred dialogues corresponding to online interactions with users (Gašić et al., 2013). However, in order to do so, pre-engineering efforts, prior RL knowledge, and domain expertise must be applied. Indeed, summary state and action spaces must be used, and the set of actions must be restricted depending on the current state so that notoriously bad actions are prohibited.
In order to alleviate the need for a summary state space, deep RL (Mnih et al., 2013) has recently been applied to dialogue management (Cuayáhuitl et al., 2015) in the context of negotiations. It was shown that deep RL performed significantly better than other heuristic or supervised approaches. The authors performed learning over a large action space of 70 actions and they also had to use restricted action sets in order to learn efficiently over this space. Besides, deep RL was not compared to other RL methods, which we do in this paper. In (Cuayáhuitl, 2016), a simplistic implementation of deep Q Networks is presented,
again with no comparison to other RL methods.
In this paper, we propose to efficiently alleviate the need for summary spaces and restricted actions using deep RL. We analyse four deep RL models: Deep Q Networks (DQN) (Mnih et al., 2013), Double DQN (DDQN) (van Hasselt et al., 2015), Deep Advantage Actor-Critic (DA2C) (Sutton et al., 2000) and a version of DA2C initialized with supervised learning (TDA2C)1 (similar in idea to Silver et al. (2016)). All models are trained on a restaurant-seeking domain. We use the Dialogue State Tracking Challenge 2 (DSTC2) dataset to train an agenda-based user simulator (Schatzmann and Young, 2009) for online learning and to perform batch RL and supervised learning.
We first show that, on summary state and action spaces, deep RL converges faster than Gaussian Processes SARSA (GPSARSA) (Gašić et al., 2010). Then we show that deep RL enables us to work on the original state and action spaces. Although GPSARSA has also been tried on the original state space (Gašić et al., 2012), it is extremely slow in terms of wall-clock time due to its growing kernel evaluations. Indeed, contrary to methods such as GPSARSA, deep RL performs efficient generalization over the state space and memory requirements do not increase with the number of experiments. On the simple domain specified by DSTC2, we do not need to restrict the actions in order to learn efficiently. In order to remove the need for restricted actions in more complex domains, we advocate for the use of TDA2C and supervised learning as a pre-training step. We show that supervised learning on a small set of dialogues (only 706 dialogues) significantly bootstraps TDA2C and enables us to start learning with a policy that already selects only valid actions, which makes for a safe user experience in deployment. Therefore, we conclude that TDA2C is very appealing for the practical deployment of POMDP-based dialogue systems.
In Section 2 we briefly review POMDPs, RL, and GPSARSA. The value-based deep RL models investigated in this paper (DQN and DDQN) are described in Section 3. Policy networks and DA2C are discussed in Section 4. We then introduce the two-stage training of DA2C in Section 5. Experimental results are presented in Section 6. Finally, Section 7 concludes the paper and makes suggestions for future research.
1Teacher DA2C
# 2 Preliminaries
The reinforcement learning problem consists of an environment (the user) and an agent (the system) (Sutton and Barto, 1998). The environment is described as a set of continuous or discrete states S, and at each state s ∈ S, the system can perform an action from an action space A(s). The actions can be continuous, but in our case they are assumed to be discrete and finite. At time t, as a consequence of an action A_t = a ∈ A(s), the state transitions from S_t = s to S_{t+1} = s' ∈ S. In addition, a reward signal R_{t+1} = R(S_t, A_t, S_{t+1}) ∈ R provides feedback on the quality of the transition.2 The agent's task is to maximize at each state the expected discounted sum of rewards received after visiting this state. For this purpose, value functions are computed. The action-state value function Q is defined as:

$$Q^\pi(S_t, A_t) = \mathbb{E}\big[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots \mid S_t = s, A_t = a\big], \quad (1)$$
where γ is a discount factor in [0, 1]. In this equation, the policy π specifies the system's behaviour, i.e., it describes the agent's action selection process at each state. A policy can be a deterministic mapping π(s) = a, which specifies the action a to be selected when state s is met. On the other hand, a stochastic policy provides a probability distribution over the action space at each state: π(a|s) = P[A_t = a | S_t = s].
The agent's goal is to find a policy that maximizes the Q-function at each state.
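As an illustration of the quantity inside the expectation of Equation 1, the discounted return of a sampled episode can be accumulated backwards; a minimal sketch (illustrative only):

```python
def discounted_return(rewards, gamma=0.99):
    """Return sum_{k>=0} gamma^k * rewards[k], the quantity whose
    expectation (given S_t = s, A_t = a) defines Q^pi(s, a) in Eq. 1."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Rewards from one simulated dialogue: -0.03 per turn, +1 on success
print(discounted_return([-0.03, -0.03, -0.03, 1.0], gamma=0.99))
```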
It is important to note that here the system does not have direct access to the state s. Instead, it sees this state through a perception process which typically includes an Automatic Speech Recognition (ASR) step, a Natural Language Understanding (NLU) step, and a State Tracking (ST) step. This perception process injects noise into the state of the system, and it has been shown that modelling dialogue management as a POMDP helps to overcome this noise (Williams and Young, 2007; Young et al., 2013).
Within the POMDP framework, the state at time t, S_t, is not directly observable. Instead, the system has access to a noisy observation O_t.3

2In this paper, upper-case letters are used for random variables, lower-case letters for non-random values (known or unknown), and calligraphy letters for sets.
3Here, the representation of the user's goal and the user's utterances.
A POMDP is a tuple (S, A, P, R, O, Z, γ, b_0), where S is the state space, A is the action space, P is the function encoding the transition probabilities P_a(s, s') = P(S_{t+1} = s' | S_t = s, A_t = a), R is the reward function, O is the observation space, Z encodes the observation probabilities Z_a(s, o) = P(O_t = o | S_t = s, A_t = a), γ is a discount factor, and b_0 is an initial belief state. The belief state is a distribution over states. Starting from b_0, the state tracker maintains and updates the belief state according to the observations perceived during the dialogue. The dialogue manager then operates on this belief state. Consequently, the value functions as well as the policy of the agent are computed on the belief states B_t:
Q" (Bi, At) = x |S Regs | Br, At U>t [At = a|Bi = bj]. (3) m(alb) =
In this paper, we use GPSARSA as a baseline as it has been proved to be a successful algorithm for training POMDP-based dialogue managers (Engel et al., 2005; Gašić et al., 2010). Formally, the Q-function is modelled as a Gaussian process, entirely defined by a mean and a kernel: Q(B, A) ∼ GP(m, k((B, A), (B, A))). The mean is usually initialized at 0 and it is then jointly updated with the covariance based on the system's observations (i.e., the visited belief states and actions, and the rewards). In order to avoid intractability in the number of experiments, we use kernel span sparsification (Engel et al., 2005). This technique consists of approximating the kernel on a dictionary of linearly independent belief states. This dictionary is incrementally built during learning. Kernel span sparsification requires setting a threshold on the precision to which the kernel is computed. As discussed in Section 6, this threshold needs to be fine-tuned for a good tradeoff between precision and performance.
# 3 Value-Based Deep Reinforcement Learning
Broadly speaking, there are two main streams of methodologies in the RL literature: value approximation and policy gradients. As suggested by their names, the former tries to approximate the value function whereas the latter tries to directly approximate the policy. Approximations are necessary for large or continuous belief and action spaces.
Indeed, if the belief space is large or continuous it would not be possible to store a value for each state in a table, so generalization over the state space is necessary. In this context, some of the beneï¬ts of deep RL techniques are the following:
• Generalisation over the belief space is efficient and the need for summary spaces is eliminated, normally with considerably less wall-clock training time compared to GPSARSA, for example.

• Memory requirements are limited and can be determined in advance, unlike with methods such as GPSARSA.

• Deep architectures with several hidden layers can be efficiently used for complex tasks and environments.
# 3.1 Deep Q Networks
A Deep Q-Network (DQN) is a multi-layer neural network which maps a belief state B_t to the values of the possible actions A_t ∈ A(B_t = b) at that state, Q^π(B_t, A_t; w_t), where w_t is the weight vector of the neural network. Neural networks for the approximation of value functions have long been investigated (Bertsekas and Tsitsiklis, 1996). However, these methods were previously quite unstable (Mnih et al., 2013). In DQN, Mnih et al. (2013, 2015) proposed two techniques to overcome this instability, namely experience replay and the use of a target network. In experience replay, all the transitions are put in a finite pool D (Lin, 1993). Once the pool has reached its predefined maximum size, adding a new transition results in deleting the oldest transition in the pool. During training, a mini-batch of transitions is uniformly sampled from the pool, i.e., (B_t, A_t, R_{t+1}, B_{t+1}) ∼ U(D). This method removes the instability arising from strong correlation between the subsequent transitions of an episode (a dialogue). Additionally, a target network with weight vector w⁻ is used. This target network is similar to the Q-network except that its weights are only copied every τ steps from the Q-network, and remain fixed during all the other steps. The loss function for the Q-network at iteration t takes the following form:
$$L_t(w_t) = \mathbb{E}_{(B_t, A_t, R_{t+1}, B_{t+1}) \sim U(D)}\Big[\big(R_{t+1} + \gamma \max_{a'} Q^\pi(B_{t+1}, a'; w_t^-) - Q^\pi(B_t, A_t; w_t)\big)^2\Big]. \quad (4)$$
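Loss (4) is straightforward to implement with a replay pool and a periodically copied target network; a minimal PyTorch-style sketch (network interfaces and names are illustrative, not the paper's implementation; terminal-state masking omitted):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One mini-batch of the loss in Eq. 4.
    batch: (beliefs, actions, rewards, next_beliefs) sampled uniformly
    from the replay pool D; target_net holds the frozen weights w^-."""
    b, a, r, b_next = batch
    q = q_net(b).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(B_t, A_t; w_t)
    with torch.no_grad():
        q_target = r + gamma * target_net(b_next).max(dim=1).values
    return F.mse_loss(q, q_target)

# Every tau steps: target_net.load_state_dict(q_net.state_dict())
```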
# 3.2 Double DQN: Overcoming Overestimation and Instability of DQN
The max operator in Equation 4 uses the same value network (i.e., the target network) to select actions and evaluate them. This increases the probability of overestimating the value of the state-action pairs (van Hasselt, 2010; van Hasselt et al., 2015). To see this more clearly, the target part of the loss in Equation 4 can be rewritten as follows:
$$R_{t+1} + \gamma Q^\pi\big(B_{t+1}, \operatorname*{argmax}_a Q^\pi(B_{t+1}, a; w_t^-); w_t^-\big).$$
In this equation, the target network is used twice. Decoupling is possible by using the Q-network for action selection as follows (van Hasselt et al., 2015):
$$R_{t+1} + \gamma Q^\pi\big(B_{t+1}, \operatorname*{argmax}_a Q^\pi(B_{t+1}, a; w_t); w_t^-\big).$$
Then, similarly to DQN, the Q-network is trained using experience replay and the target network is updated every τ steps. This new version of DQN, called Double DQN (DDQN), uses the two value networks in a decoupled manner and alleviates the overestimation issue of DQN. This generally results in a more stable learning process (van Hasselt et al., 2015).
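The only change relative to DQN is in the target computation: the Q-network selects the action and the target network evaluates it; a sketch under the same assumptions as above:

```python
import torch

def double_dqn_target(q_net, target_net, r, b_next, gamma=0.99):
    """Decoupled target: argmax under w_t, evaluation under w_t^-."""
    with torch.no_grad():
        a_star = q_net(b_next).argmax(dim=1)                        # selection
        q_next = target_net(b_next).gather(1, a_star.unsqueeze(1))  # evaluation
        return r + gamma * q_next.squeeze(1)
```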
In the following section, we present deep RL models which perform policy search and output a stochastic policy rather than value approximation with a deterministic policy.
# 4 Policy Networks and Deep Advantage Actor-Critic (DA2C)
A policy network is a parametrized probabilistic mapping between belief and action spaces:
$$\pi_\theta(a|b) = \pi(a|b; \theta) = \mathbb{P}(A_t = a \mid B_t = b, \theta_t = \theta),$$
where θ is the parameter vector (the weight vector of a neural network).4 In order to train policy
4For parametrization, we use w for value networks and θ for policy networks.
networks, policy gradient algorithms have been developed (Williams, 1992; Sutton et al., 2000). Policy gradient algorithms are model-free methods which directly approximate the policy by parametrizing it. The parameters are learnt using a gradient-based optimization method.
We first need to define an objective function J that will lead the search for the parameters θ. This objective function defines policy quality. One way of defining it is to take the average over the rewards received by the agent. Another way is to compute the discounted sum of rewards for each trajectory, given that there is a designated start state. The policy gradient is then computed according to the Policy Gradient Theorem (Sutton et al., 2000).
Theorem 1 (Policy Gradient) For any differentiable policy π_θ(b, a) and for the average reward or the start-state objective function, the policy gradient can be computed as
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\big[\nabla_\theta \log \pi_\theta(a|b)\, Q^{\pi_\theta}(b, a)\big]. \quad (5)$$
Policy gradient methods have been used successfully in different domains. Two recent examples are AlphaGo by DeepMind (Silver et al., 2016) and MazeBase by Facebook AI (Sukhbaatar et al., 2016).
One way to exploit Theorem 1 is to parametrize Q^{π_θ}(b, a) separately (with a parameter vector w) and learn the parameter vector during training in a similar way as in DQN. The trained Q-network can then be used for policy evaluation in Equation 5. Such algorithms are known in general as actor-critic algorithms, where the Q approximator is the critic and π_θ is the actor (Sutton, 1984; Barto et al., 1990; Bhatnagar et al., 2009). This can be achieved with two separate deep neural networks: a Q-network and a policy network.
However, a direct use of Equation 5 with Q as critic is known to cause high variance (Williams, 1992). An important property of Equation 5 can be used in order to overcome this issue: subtracting any differentiable function Ba expressed over the belief space from Q^{π_θ} will not change the gradient. A good selection of Ba, which is called the baseline, can reduce the variance dramatically (Sutton and Barto, 1998). As a result, Equation 5 may be rewritten as follows:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\big[\nabla_\theta \log \pi_\theta(a|b)\, Ad(b, a)\big], \quad (6)$$
where Ad(b, a) = Q^{π_θ}(b, a) − Ba(b) is called the advantage function. A good baseline is the value function V^{π_θ}, for which the advantage function becomes Ad(b, a) = Q^{π_θ}(b, a) − V^{π_θ}(b). However, in this setting, we need to train two separate networks to parametrize Q^{π_θ} and V^{π_θ}. A better approach is to use the TD error δ_t = R_{t+1} + γ V^{π_θ}(B_{t+1}) − V^{π_θ}(B_t) as the advantage function. It can be proved that the expected value of the TD error is Q^{π_θ}(b, a) − V^{π_θ}(b). If the TD error is used, only one network is needed, to parametrize V^{π_θ}(B_t) = V^{π_θ}(B_t; w_t). We call this network the value network. We can use a DQN-like method to train the value network using both experience replay and a target network. For a transition B_t = b, A_t = a, R_{t+1} = r and B_{t+1} = b', the advantage function is calculated as:
$$\delta_t = r + \gamma V^{\pi_\theta}(b'; w_t^-) - V^{\pi_\theta}(b; w_t). \quad (7)$$
Because the gradient in Equation 6 is weighted by the advantage function, it may become quite large. In fact, the advantage function may act as a large learning rate. This can cause the learning process to become unstable. To avoid this issue, we add L2 regularization to the policy objective function. We call this method Deep Advantage Actor-Critic (DA2C).
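Combining Equations 6 and 7, each transition yields one actor update weighted by the TD error and one critic update toward the bootstrapped target; a minimal sketch (hypothetical network interfaces, not the paper's code):

```python
import torch

def da2c_losses(policy_net, value_net, target_value_net,
                b, a, r, b_next, gamma=0.99):
    """Actor loss from Eq. 6 with the TD error of Eq. 7 as advantage,
    plus a squared-error critic loss (terminal handling omitted)."""
    with torch.no_grad():
        td_target = r + gamma * target_value_net(b_next).squeeze(1)
        advantage = td_target - value_net(b).squeeze(1)   # delta_t in Eq. 7
    log_pi = torch.log_softmax(policy_net(b), dim=1)
    chosen = log_pi.gather(1, a.unsqueeze(1)).squeeze(1)
    actor_loss = -(advantage * chosen).mean()             # minimize -J(theta)
    critic_loss = ((value_net(b).squeeze(1) - td_target) ** 2).mean()
    return actor_loss, critic_loss

# L2 regularization on the policy objective can be added via the optimizer,
# e.g. the weight_decay argument of torch.optim.Adadelta(policy_net.parameters()).
```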
In the next section, we show how this architecture can be used to efficiently exploit a small set of handcrafted data.
# 5 Two-stage Training of the Policy Network
By definition, the policy network provides a probability distribution over the action space. As a result, and in contrast to value-based methods such as DQN, a policy network can also be trained with direct supervised learning (Silver et al., 2016). Supervised training of RL agents has been well studied in the context of Imitation Learning (IL). In IL, an agent learns to reproduce the behaviour of an expert. Supervised learning of the policy was one of the first techniques used to solve this problem (Pomerleau, 1989; Amit and Mataric, 2002). This direct type of imitation learning requires that the learning agent and the expert share the same characteristics. If this condition is not met, IL can be done at the level of the value functions rather than on the policy directly (Piot et al., 2015). In this paper, the data that we use (DSTC2) was collected with a dialogue system similar to the one we train, so in our case, the demonstrator and the learner share the same characteristics.
Similarly to Silver et al. (2016), here we initialize both the policy network and the value network on the data. The policy network is trained by minimising the categorical cross-entropy between the predicted action distribution and the demonstrated actions. The value network is trained directly through RL rather than IL to give more flexibility in the kind of data we can use. Indeed, our goal is to collect a small number of dialogues and learn from them. IL usually assumes that the data corresponds to expert policies. However, dialogues collected with a handcrafted policy or in a Wizard-of-Oz (WoZ) setting often contain both optimal and sub-optimal dialogues, and RL can be used to learn from all of these dialogues. Supervised training can also be done on these dialogues, as we show in Section 6.
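The supervised stage thus reduces to ordinary classification over the system's action set; a minimal sketch (hypothetical data interface; Adadelta is used here since it is the optimiser adopted later in the paper):

```python
import torch
import torch.nn.functional as F

def pretrain_policy(policy_net, dataset, epochs=20, lr=1.0):
    """Minimize categorical cross-entropy between the policy's action
    distribution and the actions observed in the demonstration data."""
    opt = torch.optim.Adadelta(policy_net.parameters(), lr=lr)
    for _ in range(epochs):
        for beliefs, acts in dataset:       # beliefs: (N, d), acts: (N,)
            loss = F.cross_entropy(policy_net(beliefs), acts)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy_net
```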
Supervised actor-critic architectures following this idea have been proposed in the past (Benbrahim and Franklin, 1997; Si et al., 2004); the actor works together with a human supervisor to gain competence on its task even if the critic's estimations are poor. For instance, a human can help a robot move by providing the robot with valid actions. We advocate for the same kind of methods for dialogue systems. It is easy to collect a small number of high-quality dialogues and then use supervised learning on this data to teach the system valid actions. This also eliminates the need to define restricted action sets.
In all the methods above, Adadelta will be used as the gradient-descent optimiser, which in our experiments works noticeably better than other methods such as Adagrad, Adam, and RMSProp.
# 6 Experiments
# 6.1 Comparison of DQN and GPSARSA
# 6.1.1 Experimental Protocol

In this section, as a first argument in favour of deep RL, we perform a comparison between GPSARSA and DQN on simulated dialogues. We trained an agenda-based user simulator which, at each dialogue turn, provides one or several dialogue act(s) in response to the latest machine act (Schatzmann et al., 2007; Schatzmann and Young, 2009). The dataset used for training this user simulator is the Dialogue State Tracking Challenge 2 (DSTC2) (Henderson et al., 2014) dataset. State tracking is also trained on this dataset. DSTC2 includes
(a) Comparison of GPSARSA on summary spaces and DQN on summary (DQN) and original spaces (DQN-no-summary). (b) Comparison of DA2C, DQN and DDQN on original spaces.
Figure 1: Comparison of different algorithms on simulated dialogues, without any pre-training.
dialogues with users who are searching for restaurants in Cambridge, UK.
In each dialogue, the user has a goal containing constraint slots and request slots. The constraint and request slots available in DSTC2 are listed in Appendix A. The constraints are the slots that the user has to provide to the system (for instance the user is looking for a speciï¬c type of food in a given area) and the requests are the slots that the user must receive from the system (for instance the user wants to know the address and phone number of the restaurant found by the system).
Similarly, the belief state is composed of two parts: constraints and requests. The constraint part includes the probabilities of the top two values for each constraint slot as returned by the state tracker (the value might be empty with a probability of zero if the slot has not been mentioned). The request part, on the other hand, includes the probability of each request slot. For instance, the constraint part might be [food: (Italian, 0.85) (Indian, 0.1) (Not mentioned, 0.05)] and the request part might be [area: 0.95], meaning that the user is probably looking for an Italian restaurant and that he wants to know the area of the restaurant found by the system. To compare DQN to GPSARSA, we work on a summary state space (Gašić et al., 2012, 2013). Each constraint is mapped to a one-hot vector, with 1 corresponding to the tuple in the grid vector gc = [(1, 0), (.8, .2), (.6, .2), (.6, .4), (.4, .4)] that minimizes the Euclidean distance to the top two probabilities. Similarly, each request slot is mapped to a one-hot vector according to the grid gr = [1, .8, .6, .4, 0.]. The final belief vector, known as the summary state, is defined as the concatenation of the constraint and request one-hot vectors. Each summary state is a binary vector of length 60 (12 one-hot vectors of length 5) and the total number of states is 5^12.
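The mapping from tracker probabilities to the summary representation is a nearest-neighbour lookup into these grids; a sketch of the constraint-slot case (illustrative only):

```python
import numpy as np

G_C = np.array([(1.0, 0.0), (0.8, 0.2), (0.6, 0.2), (0.6, 0.4), (0.4, 0.4)])

def constraint_one_hot(top2_probs):
    """Map the top-two value probabilities of one constraint slot to a
    one-hot vector over the grid g_c by Euclidean nearest neighbour."""
    d = np.linalg.norm(G_C - np.asarray(top2_probs), axis=1)
    one_hot = np.zeros(len(G_C))
    one_hot[d.argmin()] = 1.0
    return one_hot

print(constraint_one_hot((0.85, 0.1)))  # closest to (0.8, 0.2) -> [0,1,0,0,0]
```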
We also work on a summary action space and we use the act types listed in Table 1 in Appendix A. We add the necessary slot information as a post-processing step. For example, the request act means that the system wants to request a slot from the user, e.g. request(food). In this case, the selection of the slot is based on min-max probability, i.e., the most ambiguous slot (which is the slot we want to request) is assumed to be the one for which the value with maximum probability has the minimum probability compared to the most certain values of the other slots. Note that this heuristic approach to compute the summary state and action spaces is a requirement to make GPSARSA tractable; it is a serious limitation in general and should be avoided.
As reward, we use a normalized scheme with a reward of +1 if the dialogue ï¬nishes successfully
before 30 turns,5 a reward of -1 if the dialogue is not successful after 30 turns, and a reward of -0.03 for each turn. A reward of -1 is also distributed to the system if the user hangs up. In our settings, the user simulator hangs up every time the system proposes a restaurant which does not match at least one of his constraints.
For the deep Q-network, a Multi-Layer Perceptron (MLP) is used with two fully connected hidden layers, each having a tanh activation. The output layer has no activation and it provides the value for each of the summary machine acts. The summary machine acts are mapped to original acts using the heuristics explained previously. Both algorithms are trained with 15000 dialogues. GPSARSA is trained with ε-softmax exploration, which, with probability 1 − ε, selects an action based on the logistic distribution $P[a|b] = \frac{e^{Q(b,a)}}{\sum_{a'} e^{Q(b,a')}}$ and, with probability ε, selects an action in a uniformly random way. From our experiments, this exploration scheme works best in terms of both convergence rate and variance. For DQN, we use a simple ε-greedy exploration which, with probability ε (the same ε as above), uniformly selects an action and, with probability 1 − ε, selects an action maximizing the Q-function. For both algorithms, ε is annealed to less than 0.1 over the course of training.
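Both exploration schemes can be stated precisely in a few lines; a sketch (illustrative; ε is annealed over training as described):

```python
import numpy as np

rng = np.random.default_rng()

def eps_softmax_action(q_values, eps):
    """GPSARSA-style: uniform with probability eps, softmax(Q) otherwise."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    z = np.exp(q_values - np.max(q_values))   # numerically stable softmax
    return int(rng.choice(len(q_values), p=z / z.sum()))

def eps_greedy_action(q_values, eps):
    """DQN-style: uniform with probability eps, greedy otherwise."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```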
In a second experiment, we remove both summary state and action spaces for DQN, i.e., we do not perform the Euclidean-distance mapping as before but instead work directly on the probabilities themselves. Additionally, the state is augmented with the probability (returned by the state tracker) of each user act (see Table 2 in Appendix A), the dialogue turn, and the number of results returned by the database (0 if there was no query). Consequently, the state consists of 31 continuous values and two discrete values. The original action space is composed of 11 actions: offer,6 select-food, select-area, request-area, select-pricerange, request-pricerange, request-food, expl-conf-area, expl-conf-food, expl-conf-pricerange, repeat. There
5A dialogue is successful if the user retrieves all the request slots for a restaurant matching all the constraints of his goal.
6This act consists of proposing a restaurant to the user. In order to be consistent with the DSTC2 dataset, an offer always contains the values for all the constraints understood by the system, e.g. offer(name = Super Ramen, food = Japanese, price range = cheap).
is no post-processing via min-max selection anymore since the slot is part of the action, e.g., select-area.
The policies are evaluated after each 1000 training dialogues on 500 test dialogues without exploration.
# 6.1.2 Results

Figure 1 illustrates the performance of DQN compared to GPSARSA. In our experiments with GPSARSA, we found that it was difficult to find a good tradeoff between precision and efficiency. Indeed, for low precision, the algorithm learned rapidly but did not reach optimal behaviour, whereas higher precision made learning extremely slow but resulted in better end-performance. On summary spaces, DQN outperforms GPSARSA in terms of convergence. Indeed, GPSARSA requires twice as many dialogues to converge. It is also worth mentioning here that the wall-clock training time of GPSARSA is considerably longer than that of DQN due to kernel evaluation. The second experiment validates the fact that deep RL can be efficiently trained directly on the belief state returned by the state tracker. Indeed, DQN on the original spaces performs as well as GPSARSA on the summary spaces.
In the next section, we train and compare the deep RL networks previously described on the original state and action spaces.
# 6.2 Comparison of the Deep RL Methods
# 6.2.1 Experimental Protocol

Similarly to the previous example, we work on a restaurant domain and use the DSTC2 specifications. We use ε-greedy exploration for all four algorithms, with ε starting at 0.5 and being linearly annealed at a rate of λ = 0.99995. To speed up the learning process, the actions select-pricerange, select-area, and select-food are excluded from exploration. Note that this set does not depend on the state and is meant for exploration only. All the actions can be performed by the system at any moment.
We derived two datasets from DSTC2. The first dataset contains the 2118 dialogues of DSTC2. We had these dialogues rated by a human expert, based on the quality of dialogue management and on a scale of 0 to 3. The second dataset only contains the dialogues with a rating of 3 (706 dialogues). The underlying assumption is that these dialogues correspond to optimal policies.
(a) Comparison of DA2C, DQN and DDQN after batch initialization.
(b) Comparison of DA2C and DA2C after batch initialization (batchDA2C), and TDA2C after supervised training on expert (SupExptBatchDA2C) and non-expert data (SupFullBatchDA2C).
Figure 2: Comparison of different algorithms on simulated dialogues, with pre-training.
We compare the convergence rates of the deep RL models in different settings. First, we compare DQN, DDQN and DA2C without any pre-training (Figure 1b). Then, we compare DQN, DDQN and TDA2C with an RL initialization on the DSTC2 dataset (Figure 2a). Finally, we focus on the advantage actor-critic models and compare DA2C, TDA2C, TDA2C with batch initialization on DSTC2, and TDA2C with batch initialization on the expert dialogues (Figure 2b).
# 6.2.2 Results

As expected, DDQN converges faster than DQN in all experiments. Figure 1b shows that, without any pre-training, DA2C is the one which converges the fastest (6000 dialogues vs. 10000 dialogues for the other models). Figure 2a gives consistent results and shows that, with initial training on the 2118 dialogues of DSTC2, TDA2C converges significantly faster than the other models. Figure 2b focuses on DA2C and TDA2C. Compared to batch training, supervised training on DSTC2 speeds up convergence by 2000 dialogues (3000 dialogues vs. 5000 dialogues). Interestingly, there does not seem to be much difference between supervised training on the expert data and on DSTC2. The expert data only consists of 706 dialogues out of 2118 dialogues. Our observation is that, in the non-expert data, many of the dialogue acts chosen by the system were still appropriate, which explains why the system learns acceptable behavior from the entire dataset. This shows that supervised training, even when not performed solely on optimal dialogues, makes learning much faster and relieves the need for restricted action sets. Valid actions are learnt from the dialogues, and then RL exploits the good and bad dialogues to pursue training towards a high-performing policy.

# 7 Concluding Remarks
In this paper, we used policy networks for dialogue systems and trained them in a two-stage fashion: supervised training and batch reinforcement learning followed by online reinforcement learning. An important feature of policy networks is that they directly provide a probability distribution over the action space, which enables supervised training. We compared the results with other deep reinforcement learning algorithms, namely Deep Q Networks and Double Deep Q Networks. The combination of supervised and reinforcement learning is the main benefit of our method, which paves the way for developing trainable end-to-end dialogue systems. Supervised training on a small dataset considerably bootstraps the learning process and can be used to significantly improve the convergence rate of reinforcement learning in statistically optimised dialogue systems.
# References
R. Amit and M. Mataric. 2002. Learning movement sequences from demonstration. In Proc. Int. Conf. on Development and Learning, pages 203–208.

A. G. Barto, R. S. Sutton, and C. W. Anderson. 1990. Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems. In Artificial Neural Networks, pages 81–93.

H. Benbrahim and J. A. Franklin. 1997. Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems, 22:283–302.

D. P. Bertsekas and J. Tsitsiklis. 1996. Neuro-Dynamic Programming. Athena Scientific.

S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. 2009. Natural Actor-Critic Algorithms. Automatica, 45(11).

H. Cuayáhuitl. 2016. SimpleDS: A simple deep reinforcement learning dialogue system. arXiv:1601.04574v1 [cs.AI].

H. Cuayáhuitl, S. Keizer, and O. Lemon. 2015. Strategic dialogue management via deep reinforcement learning. arXiv:1511.08099 [cs.AI].

L. Daubigney, M. Geist, S. Chandramohan, and O. Pietquin. 2012. A Comprehensive Reinforcement Learning Framework for Dialogue Management Optimisation. IEEE Journal of Selected Topics in Signal Processing, 6(8):891–902.

Y. Engel, S. Mannor, and R. Meir. 2005. Reinforcement learning with gaussian processes. In Proc. of ICML.

M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. J. Young. 2013. On-line policy optimisation of bayesian spoken dialogue systems via human interaction. In Proc. of ICASSP, pages 8367–8371.

M. Gašić, M. Henderson, B. Thomson, P. Tsiakoulis, and S. Young. 2012. Policy optimisation of POMDP-based dialogue systems without state space compression. In Proc. of SLT.

M. Gašić, F. Jurčíček, S. Keizer, F. Mairesse, B. Thomson, K. Yu, and S. Young. 2010. Gaussian processes for fast policy optimisation of POMDP-based dialogue managers. In Proc. of SIGDIAL.

M. Henderson, B. Thomson, and J. Williams. 2014. The Second Dialog State Tracking Challenge. In Proc. of SIGDIAL.

R. Laroche, G. Putois, and P. Bretier. 2010. Optimising a handcrafted dialogue system design. In Proc. of Interspeech.

O. Lemon and O. Pietquin. 2007. Machine learning for spoken dialogue systems. In Proc. of Interspeech, pages 2685–2688.

E. Levin, R. Pieraccini, and W. Eckert. 1997. Learning dialogue strategies within the markov decision process framework. In Proc. of ASRU.

L-J Lin. 1993. Reinforcement learning for robots using neural networks. Ph.D. thesis, Carnegie Mellon University.

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop.

V. Mnih, K. Kavukcuoglu, D. Silver, A.A. Rusu, J. Veness, M.G. Bellemare, A. Graves, M. Riedmiller, A.K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533.

B. Piot, M. Geist, and O. Pietquin. 2015. Imitation Learning Applied to Embodied Conversational Agents. In Proc. of MLIS.

D. A. Pomerleau. 1989. Alvinn: An autonomous land vehicle in a neural network. In Proc. of NIPS, pages 305–313.

J. Schatzmann, B. Thomson, K. Weilhammer, H. Ye, and S. Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Proc. of NAACL HLT, pages 149–152.

J. Schatzmann and S. Young. 2009. The hidden agenda user simulation model. Proc. of TASLP, 17(4):733–747.

J. Si, A. G. Barto, W. B. Powell, and D. Wunsch. 2004. Supervised Actor-Critic Reinforcement Learning, pages 359–380.

D. Silver, A. Huang, C.J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. 2016. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489.

S. Sukhbaatar, A. Szlam, G. Synnaeve, S. Chintala, and R. Fergus. 2016. Mazebase: A sandbox for learning from games. arxiv.org/pdf/1511.07401 [cs.LG].

R. S. Sutton. 1984. Temporal credit assignment in reinforcement learning. Ph.D. thesis, University of Massachusetts at Amherst, Amherst, MA, USA.

R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Proc. of NIPS, volume 12, pages 1057–1063.

R.S. Sutton and A.G. Barto. 1998. Reinforcement Learning. MIT Press.

H. van Hasselt. 2010. Double q-learning. In Proc. of NIPS, pages 2613–2621.

H. van Hasselt, A. Guez, and D. Silver. 2015. Deep reinforcement learning with double Q-learning. arXiv:1509.06461v3 [cs.LG].

J.D. Williams and S. Young. 2007. Partially observable markov decision processes for spoken dialog systems. Proc. of CSL, 21:231–422.

R.J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256.

S. Young, M. Gasic, B. Thomson, and J. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proc. IEEE, 101(5):1160–1179.
# A Specifications of restaurant search in DSTC2
Constraint slots: area, type of food, price range.

Request slots: area, type of food, address, name, price range, postcode, signature dish, phone number.
Table 1: Summary actions.
| Action | Description |
|---|---|
| Cannot help | No restaurant in the database matches the user's constraints. |
| Confirm Domain | Confirm that the user is looking for a restaurant. |
| Explicit Confirm | Ask the user to confirm a piece of information. |
| Offer | Propose a restaurant to the user. |
| Repeat | Ask the user to repeat. |
| Request | Request a slot from the user. |
| Select | Ask the user to select a value between two propositions (e.g. select between Italian and Indian). |
Table 2: User actions.
| Action | Description |
|---|---|
| Deny | Deny a piece of information. |
| Null | Say nothing. |
| Request More | Request more options. |
| Confirm | Ask the system to confirm a piece of information. |
| Acknowledge | Acknowledge. |
| Affirm | Say yes. |
| Request | Request a slot value. |
| Inform | Inform the system of a slot value. |
| Thank you | Thank the system. |
| Repeat | Ask the system to repeat. |
| Request Alternatives | Request alternative restaurant options. |
| Negate | Say no. |
| Bye | Say goodbye to the system. |
| Hello | Say hello to the system. |
| Restart | Ask the system to restart. |

| {
"id": "1511.08099"
} |
1606.02960 | Sequence-to-Sequence Learning as Beam-Search Optimization | Sequence-to-Sequence (seq2seq) modeling has rapidly become an important
general-purpose NLP tool that has proven effective for many text-generation and
sequence-labeling tasks. Seq2seq builds on deep neural language modeling and
inherits its remarkable accuracy in estimating local, next-word distributions.
In this work, we introduce a model and beam-search training scheme, based on
the work of Daume III and Marcu (2005), that extends seq2seq to learn global
sequence scores. This structured approach avoids classical biases associated
with local training and unifies the training loss with the test-time usage,
while preserving the proven model architecture of seq2seq and its efficient
training approach. We show that our system outperforms a highly-optimized
attention-based seq2seq system and other baselines on three different sequence
to sequence tasks: word ordering, parsing, and machine translation. | http://arxiv.org/pdf/1606.02960 | Sam Wiseman, Alexander M. Rush | cs.CL, cs.LG, cs.NE, stat.ML | EMNLP 2016 camera-ready | null | cs.CL | 20160609 | 20161110 |
# Sequence-to-Sequence Learning as Beam-Search Optimization
Sam Wiseman and Alexander M. Rush School of Engineering and Applied Sciences Harvard University Cambridge, MA, USA {swiseman,srush}@seas.harvard.edu
# Abstract
Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable accuracy in estimating local, next-word distributions. In this work, we introduce a model and beam-search training scheme, based on the work of Daumé III and Marcu (2005), that extends seq2seq to learn global sequence scores. This structured approach avoids classical biases associated with local training and unifies the training loss with the test-time usage, while preserving the proven model architecture of seq2seq and its efficient training approach. We show that our system outperforms a highly-optimized attention-based seq2seq system and other baselines on three different sequence to sequence tasks: word ordering, parsing, and machine translation.
# 1 Introduction

Sequence-to-Sequence learning with deep neural networks (herein, seq2seq) (Sutskever et al., 2011; Sutskever et al., 2014) has rapidly become a very useful and surprisingly general-purpose tool for natural language processing. In addition to demonstrating impressive results for machine translation (Bahdanau et al., 2015), roughly the same model and training have also proven to be useful for sentence compression (Filippova et al., 2015), parsing (Vinyals et al., 2015), and dialogue systems (Serban et al., 2016), and they additionally underlie other text generation applications, such as image or video captioning (Venugopalan et al., 2015; Xu et al., 2015).

The dominant approach to training a seq2seq system is as a conditional language model, with training maximizing the likelihood of each successive target word conditioned on the input sequence and the gold history of target words. Thus, training uses a strictly word-level loss, usually cross-entropy over the target vocabulary. This approach has proven to be very effective and efficient for training neural language models, and seq2seq models similarly obtain impressive perplexities for word-generation tasks.

Notably, however, seq2seq models are not used as conditional language models at test-time; they must instead generate fully-formed word sequences. In practice, generation is accomplished by searching over output sequences greedily or with beam search. In this context, Ranzato et al. (2016) note that the combination of the training and generation scheme just described leads to at least two major issues:
1. Exposure Bias: the model is never exposed to its own errors during training, and so the inferred histories at test-time do not resemble the gold training histories.
2. Loss-Evaluation Mismatch: training uses a word-level loss, while at test-time we target improving sequence-level evaluation metrics, such as BLEU (Papineni et al., 2002).
We might additionally add the concern of label bias (Lafferty et al., 2001) to the list, since word-probabilities at each time-step are locally normalized, guaranteeing that successors of incorrect histories receive the same mass as do the successors of the true history.
In this work we develop a non-probabilistic variant of the seq2seq model that can assign a score to any possible target sequence, and we propose a training procedure, inspired by the learning as search optimization (LaSO) framework of Daumé III and Marcu (2005), that defines a loss function in terms of errors made during beam search. Furthermore, we provide an efficient algorithm to back-propagate through the beam-search procedure during seq2seq training.
This approach offers a possible solution to each of the three aforementioned issues, while largely maintaining the model architecture and training efficiency of standard seq2seq learning. Moreover, by scoring sequences rather than words, our approach also allows for enforcing hard constraints on sequence generation at training time. To test the effectiveness of the proposed approach, we develop a general-purpose seq2seq system with beam search optimization. We run experiments on three very different problems: word ordering, syntactic parsing, and machine translation, and compare to a highly-tuned seq2seq system with attention (Luong et al., 2015). The version with beam search optimization shows significant improvements on all three tasks, and particular improvements on tasks that require difficult search.
# 2 Related Work
The issues of exposure bias and label bias have received much attention from authors in the structured prediction community, and we briefly review some of this work here. One prominent approach to combating exposure bias is that of SEARN (Daumé III et al., 2009), a meta-training algorithm that learns a search policy in the form of a cost-sensitive classifier trained on examples generated from an interpolation of an oracle policy and the model's current (learned) policy. Thus, SEARN explicitly targets the mismatch between oracular training and non-oracular (often greedy) test-time inference by training on the output of the model's own policy. DAgger (Ross et al., 2011) is a similar approach, which differs in terms of how training examples are generated and aggregated, and there have additionally been important refinements to this style of training over the past several years (Chang et al., 2015). When it comes to training RNNs, SEARN/DAgger has been applied under the name "scheduled sampling" (Bengio et al., 2015), which involves training an RNN to generate the t + 1'st token in a target sequence after consuming either the true t'th token, or, with probability that increases throughout training, the predicted t'th token.
It is uncommon to use beam search when training with SEARN/DAgger. The early-update (Collins and Roark, 2004) and LaSO (Daumé III and Marcu, 2005) training strategies, however, explicitly account for beam search, and describe strategies for updating parameters when the gold structure becomes unreachable during search. Early update and LaSO differ primarily in that the former discards a training example after the first search error, whereas LaSO resumes searching after an error from a state that includes the gold partial structure. Early-update training has recently been explored in a feed-forward setting by Zhou et al. (2015) and Andor et al. (2016). Our work differs in that we adopt a LaSO-like paradigm (with some minor modifications) and apply it to the training of seq2seq RNNs (rather than feed-forward networks). We also note that Watanabe and Sumita (2015) apply maximum-violation training (Huang et al., 2012), which is similar to early-update, to a parsing model with recurrent components, and that Yazdani and Henderson (2015) use beam search in training a discriminative, locally normalized dependency parser with recurrent components.
Recently, authors have also proposed alleviating exposure bias using techniques from reinforcement learning. Ranzato et al. (2016) follow this approach to train RNN decoders in a seq2seq model, and they obtain consistent improvements in performance, even over models trained with scheduled sampling. As Daumé III and Marcu (2005) note, LaSO is similar to reinforcement learning, except it does not require "exploration" in the same way. Such exploration may be unnecessary in supervised text generation, since we typically know the gold partial sequences at each time-step. Shen et al. (2016) use minimum risk training (approximated by sampling) to address the issues of exposure bias and loss-evaluation mismatch for seq2seq MT, and show impressive performance gains.
Whereas exposure bias results from training in a certain way, label bias results from properties of the model itself. In particular, label bias is likely to affect structured models that make sub-structure predictions using locally-normalized scores. Because the neural and non-neural literature on this point has recently been reviewed by Andor et al. (2016), we simply note here that RNN models are typically locally normalized, and we are unaware of any specifically seq2seq work with RNNs that does not use locally-normalized scores. The model we introduce here, however, is not locally normalized, and so should not suffer from label bias. We also note that there are some (non-seq2seq) exceptions to the trend of locally normalized RNNs, such as the work of Sak et al. (2014) and Voigtlaender et al. (2015), who train LSTMs in the context of HMMs for speech recognition using sequence-level objectives; their work does not consider search, however.
# 3 Background and Notation
In the simplest seq2seq scenario, we are given a collection of source-target sequence pairs and tasked with learning to generate target sequences from source sequences. For instance, we might view machine translation in this way, where in particular we attempt to generate English sentences from (corresponding) French sentences. Seq2seq models are part of the broader class of "encoder-decoder" models (Cho et al., 2014), which first use an encoding model to transform a source object into an encoded representation x. Many different sequential (and non-sequential) encoders have proven to be effective for different source domains. In this work we are agnostic to the form of the encoding model, and simply assume an abstract source representation x.

Once the input sequence is encoded, seq2seq models generate a target sequence using a decoder. The decoder is tasked with generating a target sequence of words from a target vocabulary V. In particular, words are generated sequentially by conditioning on the input representation x and on the previously generated words or history. We use the notation w1:T to refer to an arbitrary word sequence of length T, and the notation y1:T to refer to the gold (i.e., correct) target word sequence for an input x.
Most seq2seq systems utilize a recurrent neural network (RNN) for the decoder model. Formally, a recurrent neural network is a parameterized non-linear function RNN that recursively maps a sequence of vectors to a sequence of hidden states. Let m1, . . . , mT be a sequence of T vectors, and let h0 be some initial state vector. Applying an RNN to any such sequence yields hidden states ht at each time-step t, as follows:
$$h_t = \mathrm{RNN}(m_t, h_{t-1}; \theta),$$
where θ is the set of model parameters, which are shared over time. In this work, the vectors mt will always correspond to the embeddings of a target word sequence w1:T, and so we will also write $h_t = \mathrm{RNN}(w_t, h_{t-1}; \theta)$, with wt standing in for its embedding.
RNN decoders are typically trained to act as conditional language models. That is, one attempts to model the probability of the t'th target word conditioned on x and the target history by stipulating that $p(w_t \mid w_{1:t-1}, x) = g(w_t, h_{t-1}, x)$, for some parameterized function g typically computed with an affine layer followed by a softmax. In computing these probabilities, the state $h_{t-1}$ represents the target history, and $h_0$ is typically set to be some function of x. The complete model (including encoder) is trained, analogously to a neural language model, to minimize the cross-entropy loss at each time-step while conditioning on the gold history in the training data. That is, the model is trained to minimize $-\ln \prod_{t=1}^{T} p(y_t \mid y_{1:t-1}, x)$.

At test-time, discrete sequence generation can be performed by approximately maximizing the probability of the target sequence under the conditional distribution, $\hat{y}_{1:T} = \operatorname{argbeam}_{w_{1:T}} \prod_{t=1}^{T} p(w_t \mid w_{1:t-1}, x)$, where we use the notation argbeam to emphasize that the decoding process requires heuristic search, since the RNN model is non-Markovian. In practice, a simple beam search procedure that explores K prospective histories at each time-step has proven to be an effective decoding approach. However, as noted above, decoding in this manner after conditional language-model style training potentially suffers from the issues of exposure bias and label bias, which motivates the work of this paper.
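To make this test-time procedure concrete, the following minimal sketch implements beam search over a locally normalized step distribution. The `step_log_prob` interface and the hypothesis representation are illustrative assumptions, not the authors' released code.

```python
def beam_search(step_log_prob, init_state, vocab_size, K, T):
    """Sketch: step_log_prob(prev_word, state) is assumed to return
    (log-probs over the vocabulary, next decoder state)."""
    # Each hypothesis is (cumulative log-prob, word sequence, decoder state).
    beam = [(0.0, [], init_state)]
    for _ in range(T):
        candidates = []
        for score, seq, state in beam:
            prev = seq[-1] if seq else None
            log_p, next_state = step_log_prob(prev, state)
            # Expand this hypothesis by every vocabulary item.
            for w in range(vocab_size):
                candidates.append((score + log_p[w], seq + [w], next_state))
        # Keep the K highest-scoring prefixes as the new beam.
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = candidates[:K]
    return beam[0][1]  # highest-scoring complete sequence
```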
# 4 Beam Search Optimization
We begin by making one small change to the seq2seq modeling framework. Instead of predicting the probability of the next word, we instead learn to produce (non-probabilistic) scores for ranking sequences. Define the score of a sequence consisting of history w1:t−1 followed by a single word wt as f(wt, ht−1, x), where f is a parameterized function examining the current hidden-state of the relevant RNN at time t−1 as well as the input representation x. In experiments, our f will have an identical form to g but without the final softmax transformation (which transforms unnormalized scores into probabilities), thereby allowing the model to avoid issues associated with the label bias problem.
More importantly, we also modify how this model is trained. Ideally we would train by comparing the gold sequence to the highest-scoring complete sequence. However, because finding the argmax sequence according to this model is intractable, we propose to adopt a LaSO-like (Daumé III and Marcu, 2005) scheme to train, which we will refer to as beam search optimization (BSO). In particular, we define a loss that penalizes the gold sequence falling off the beam during training.1 The proposed training approach is a simple way to expose the model to incorrect histories and to match the training procedure to test generation. Furthermore we show that it can be implemented efficiently without changing the asymptotic run-time of training, beyond a factor of the beam size K.
# 4.1 Search-Based Loss
We now formalize this notion of a search-based loss for RNN training. Assume we have a set St of K candidate sequences of length t. We can calculate a score for each sequence in St using a scoring function f parameterized with an RNN, as above, and we define the sequence $\hat{y}^{(K)}_{1:t} \in S_t$ to be the K'th ranked
1Using a non-probabilistic model further allows us to incur no loss (and thus require no update to parameters) when the gold sequence is on the beam; this contrasts with models based on a CRF loss, such as those of Andor et al. (2016) and Zhou et al. (2015), though in training those models are simply not updated when the gold sequence remains on the beam.
sequence in St according to f. That is, assuming distinct scores,
$$\bigl|\{\hat{y}^{(k)}_{1:t} \in S_t \mid f(\hat{y}^{(k)}_t, \hat{h}^{(k)}_{t-1}) > f(\hat{y}^{(K)}_t, \hat{h}^{(K)}_{t-1})\}\bigr| = K - 1,$$

where $\hat{h}^{(k)}_{t-1}$ is the RNN state corresponding to its t−1'st step, and where we have omitted the x argument to f for brevity.
We now define a loss function that gives loss each time the score of the gold prefix y1:t does not exceed that of $\hat{y}^{(K)}_{1:t}$ by a margin:

$$L(f) = \sum_{t=1}^{T} \Delta(\hat{y}^{(K)}_{1:t}) \Bigl[ 1 - f(y_t, h_{t-1}) + f(\hat{y}^{(K)}_t, \hat{h}^{(K)}_{t-1}) \Bigr].$$
Above, the $\Delta(\hat{y}^{(K)}_{1:t})$ term denotes a mistake-specific cost-function, which allows us to scale the loss depending on the severity of erroneously predicting $\hat{y}^{(K)}_{1:t}$; it is assumed to return 0 when the margin requirement is satisfied, and a positive number otherwise. It is this term that allows us to use sequence- rather than word-level costs in training (addressing the 2nd issue in the introduction). For instance, when training a seq2seq model for machine translation, it may be desirable to have $\Delta(\hat{y}^{(K)}_{1:t})$ be inversely related to the partial sentence-level BLEU score of $\hat{y}^{(K)}_{1:t}$ with y1:t; we experiment along these lines in Section 5.3.
Finally, because we want the full gold sequence to be at the top of the beam at the end of search, when t = T we modify the loss to require the score of y1:T to exceed the score of the highest ranked incorrect prediction by a margin.
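As a minimal sketch of the loss above, given hypothetical arrays holding $f(y_t, h_{t-1})$, $f(\hat{y}^{(K)}_t, \hat{h}^{(K)}_{t-1})$, and $\Delta(\hat{y}^{(K)}_{1:t})$ per time-step (the names are illustrative, not from the released code):

```python
import numpy as np

def bso_loss(gold_scores, kth_scores, deltas):
    # gold_scores[t] = f(y_t, h_{t-1}); kth_scores[t] = f(yhat_t^(K), hhat_{t-1}^(K));
    # deltas[t] = Delta(yhat_{1:t}^(K)), assumed 0 when the margin is satisfied.
    margins = 1.0 - gold_scores + kth_scores
    # max() guards against negative terms; Delta already zeroes satisfied steps.
    return float((deltas * np.maximum(margins, 0.0)).sum())
```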
We can optimize the loss L using a two-step process: (1) in a forward pass, we compute candidate sets St and record margin violations (sequences with non-zero loss); (2) in a backward pass, we back-propagate the errors through the seq2seq RNNs. Unlike standard seq2seq training, the first step requires running search (in our case beam search) to find margin violations. The second step can be done by adapting back-propagation through time (BPTT). We next discuss the details of this process.
# 4.2 Forward: Find Violations
In order to minimize this loss, we need to specify a procedure for constructing candidate sequences $\hat{y}^{(k)}_{1:t}$ at each time step t so that we find margin violations. We follow LaSO (rather than early-update2; see Section 2) and build candidates in a recursive manner. If there was no margin violation at t−1, then St is constructed using a standard beam search update. If there was a margin violation, St is constructed as the K best sequences assuming the gold history y1:t−1 through time-step t−1.
Formally, assume the function succ maps a sequence $w_{1:t-1} \in V^{t-1}$ to the set of all valid sequences of length t that can be formed by appending to it a valid word $w \in V$. In the simplest, unconstrained case, we will have

$$\operatorname{succ}(w_{1:t-1}) = \{\, w_{1:t-1}, w \mid w \in V \,\}.$$
As an important aside, note that for some problems it may be preferable to define a succ function which imposes hard constraints on successor sequences. For instance, if we would like to use seq2seq models for parsing (by emitting a constituency or dependency structure encoded into a sequence in some way), we will have hard constraints on the sequences the model can output, namely, that they represent valid parses. While hard constraints such as these would be difficult to add to standard seq2seq at training time, in our framework they can naturally be added to the succ function, allowing us to train with hard constraints; we experiment along these lines in Section 5.3, where we refer to a model trained with constrained beam search as ConBSO.
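The following sketch shows one way such a succ function could be written; the `constraint` predicate and the word-ordering example are hypothetical illustrations, not the paper's code.

```python
from collections import Counter

def make_succ(vocab, constraint=None):
    """Build a succ function; constraint(prefix, w) is an assumed predicate
    returning True iff appending w keeps the sequence valid."""
    def succ(prefix):
        allowed = vocab if constraint is None else \
            [w for w in vocab if constraint(prefix, w)]
        return [prefix + [w] for w in allowed]
    return succ

def permutation_constraint(source_words):
    """Hard constraint for word ordering: only emit words still unused
    in the multiset of source words (cf. Section 5.3)."""
    src = Counter(source_words)
    def ok(prefix, w):
        return Counter(prefix)[w] < src[w]
    return ok
```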
Having deï¬ned an appropriate succ function, we specify the candidate set as:
$$S_t = \begin{cases} \operatorname{topK}\bigl(\operatorname{succ}(y_{1:t-1})\bigr) & \text{violation at } t-1 \\ \operatorname{topK}\bigl(\bigcup_{k=1}^{K} \operatorname{succ}(\hat{y}^{(k)}_{1:t-1})\bigr) & \text{otherwise,} \end{cases}$$
where we have a margin violation at t−1 iff $f(y_{t-1}, h_{t-2}) < f(\hat{y}^{(K)}_{t-1}, \hat{h}^{(K)}_{t-2}) + 1$, and where topK considers the scores given by f. This search procedure is illustrated in the top portion of Figure 1. In the forward pass of our training algorithm, shown as the first part of Algorithm 1, we run this version of beam search and collect all sequences and their hidden states that lead to losses.
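A compact sketch of this violation-aware forward pass follows; the `score` interface (which hides the underlying RNN states) is an assumed simplification.

```python
def bso_forward(succ, score, gold, K):
    """LaSO-style forward pass sketch: detect margin violations and reset
    the beam to successors of the gold prefix after each one."""
    violations = []
    beam = succ([])                        # candidate prefixes of length 1
    for t in range(1, len(gold) + 1):
        topk = sorted(beam, key=score, reverse=True)[:K]
        kth = topk[-1]
        if score(gold[:t]) < score(kth) + 1:   # gold fell below the K'th prefix
            violations.append((t, kth))
            next_prefixes = [gold[:t]]         # resume from the gold history
        else:
            next_prefixes = topk
        beam = [s for p in next_prefixes for s in succ(p)]
    return violations
```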
2We found that training with early-update rather than (delayed) LaSO did not work well, even after pre-training. Given the success of early-update in many NLP tasks this was somewhat surprising. We leave this question to future work.
Figure 1: Top: possible $\hat{y}^{(k)}_{1:t}$ formed in training with a beam of size K = 3 and with gold sequence y1:6 = "a red dog runs quickly today". The gold sequence is highlighted in yellow, and the predicted prefixes involved in margin violations (at t = 4 and t = 6) are in gray. Note that time-step T = 6 uses a different loss criterion. Bottom: prefixes that actually participate in the loss, arranged to illustrate the back-propagation process.
# 4.3 Backward: Merge Sequences
Once we have collected margin violations we can run backpropagation to compute parameter updates. Assume a margin violation occurs at time-step t between the predicted history $\hat{y}^{(K)}_{1:t}$ and the gold history y1:t. As in standard seq2seq training we must back-propagate this error through the gold history; however, unlike seq2seq we also have a gradient for the wrongly predicted history.
Recall that to back-propagate errors through an RNN we run a recursive backward procedure (denoted below by BRNN) at each time-step t, which accumulates the gradients of next-step and future losses with respect to ht. We have:
$$\nabla_{h_t} L \leftarrow \operatorname{BRNN}(\nabla_{h_t} L_{t+1}, \nabla_{h_{t+1}} L),$$
where Lt+1 is the loss at step t + 1, deriving, for instance, from the score f (yt+1, ht). Running this BRNN procedure from t = T â 1 to t = 0 is known as back-propagation through time (BPTT).
In determining the total computational cost of back-propagation here, first note that in the worst case there is one violation at each time-step, which leads to T independent, incorrect sequences. Since we need to call BRNN O(T) times for each sequence, a naive strategy of running BPTT for each incorrect sequence would lead to an O(T^2) backward pass, rather than the O(T) time required for the standard seq2seq approach.
Fortunately, our combination of search-strategy and loss make it possible to efï¬ciently share BRNN operations. This shared structure comes
naturally from the LaSO update, which resets the beam in a convenient way.
We informally illustrate the process in Figure 1. The top of the diagram shows a possible sequence of $\hat{y}^{(k)}_{1:t}$ formed during search with a beam of size 3 for the target sequence y = "a red dog runs quickly today." When the gold sequence falls off the beam at t = 4, search resumes with S5 = succ(y1:4), and so all subsequent predicted sequences have y1:4 as a prefix and are thus functions of h4. Moreover, because our loss function only involves the scores of the gold prefix and the violating prefix, we end up with the relatively simple computation tree shown at the bottom of Figure 1. It is evident that we can backpropagate in a single pass, accumulating gradients from sequences that diverge from the gold at the time-step that precedes their divergence. The second half of Algorithm 1 shows this explicitly for a single sequence, though it is straightforward to extend the algorithm to operate in batch.3
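The merged backward pass can be sketched as follows; `brnn`, `dL_gold`, and `dL_pred` are hypothetical stand-ins for one BPTT step and the local loss gradients, not the released implementation.

```python
def bso_backward(brnn, dL_gold, dL_pred, violations, T):
    """Single-pass backward sketch: carry separate gradient accumulators
    for the gold chain and the violating chain, merging the latter into
    the former at each violation point (cf. Algorithm 1)."""
    grad_h, grad_hhat = 0.0, 0.0
    viol = set(violations)
    for t in range(T - 1, -1, -1):
        grad_h = brnn(dL_gold[t + 1], grad_h)        # gold-prefix chain
        grad_hhat = brnn(dL_pred[t + 1], grad_hhat)  # violating-prefix chain
        if t in viol:
            # The violating sequence diverged from the gold prefix here,
            # so its gradient flows into the shared (gold) state.
            grad_h = grad_h + grad_hhat
            grad_hhat = 0.0
    return grad_h
```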
# 5 Data and Methods
We run experiments on three different tasks, comparing our approach to the seq2seq baseline, and to other relevant baselines.
# 5.1 Model
While the method we describe applies to seq2seq RNNs in general, for all experiments we use the global attention model of Luong et al. (2015), which consists of an LSTM (Hochreiter and Schmidhuber, 1997) encoder and an LSTM decoder with a global attention model, as both the baseline seq2seq model (i.e., as the model that computes the g in Section 3) and as the model that computes our sequence-scores f(wt, ht−1, x). As in Luong et al. (2015), we also use "input feeding," which involves feeding the attention distribution from the previous time-step into the decoder at the current step. This model architecture has been found to be highly performant for neural machine translation and other seq2seq tasks.
3We also note that because we do not update the parameters until after the T'th search step, our training procedure differs slightly from LaSO (which is online), and in this aspect is essentially equivalent to the "delayed LaSO update" of Björkelund and Kuhn (2014).
Algorithm 1 Seq2seq Beam-Search Optimization

1: procedure BSO(x, Ktr, succ)
2:   /*FORWARD*/
3:   Init empty storage ŷ1:T and ĥ1:T; init S1
4:   r ← 0; violations ← {0}
5:   for t = 1, . . . , T do
6:     K = Ktr if t ≠ T else argmax_k f(ŷ(k)_t, ĥ(k)_{t−1})
7:     if f(y_t, h_{t−1}) < f(ŷ(K)_t, ĥ(K)_{t−1}) + 1 then
8:       ŷ_{r+1:t} ← ŷ(K)_{r+1:t}
9:       ĥ_{r+1:t} ← ĥ(K)_{r+1:t}
10:      Add t to violations
11:      r ← t
12:      S_{t+1} ← topK(succ(y1:t))
13:    else
14:      S_{t+1} ← topK(∪_{k=1}^{K} succ(ŷ(k)_{1:t}))
15:   /*BACKWARD*/
16:   grad_hT ← grad_ĥT ← 0
17:   for t = T − 1, . . . , 1 do
18:     grad_h_t ← BRNN(∇_{h_t} L_{t+1}, grad_h_{t+1})
19:     grad_ĥ_t ← BRNN(∇_{ĥ_t} L_{t+1}, grad_ĥ_{t+1})
20:     if t − 1 ∈ violations then
21:       grad_h_t ← grad_h_t + grad_ĥ_t
22:       grad_ĥ_t ← 0
To distinguish the models we refer to our system as BSO (beam search optimization) and to the baseline as seq2seq. When we apply constrained training (as discussed in Section 4.2), we refer to the model as ConBSO. In providing results we also distinguish between the beam size Ktr with which the model is trained, and the beam size Kte which is used at test-time. In general, if we plan on evaluating with a beam of size Kte it makes sense to train with a beam of size Ktr = Kte + 1, since our objective requires the gold sequence to be scored higher than the last sequence on the beam.
# 5.2 Methodology
Here we detail additional techniques we found necessary to ensure the model learned effectively. First, we found that the model failed to learn when trained from a random initialization.4 We therefore found it necessary to pre-train the model using a standard, word-level cross-entropy loss as described in Section 3.
4This may be because there is relatively little signal in the sparse, sequence-level gradient, but this point requires further investigation.
The necessity of pre-training in this instance is consistent with the findings of other authors who train non-local neural models (Kingsbury, 2009; Sak et al., 2014; Andor et al., 2016; Ranzato et al., 2016).5
Similarly, it is clear that the smaller the beam used in training is, the less room the model has to make erroneous predictions without running afoul of the margin loss. Accordingly, we also found it useful to use a "curriculum beam" strategy in training, whereby the size of the beam is increased gradually during training. In particular, given a desired training beam size Ktr, we began training with a beam of size 2, and increased it by 1 every 2 epochs until reaching Ktr.
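The curriculum schedule just described amounts to the following one-liner (a direct reading of the text above, not the released code):

```python
def curriculum_beam_size(epoch, target_K):
    # Start with a beam of 2 and grow by 1 every 2 epochs until K_tr.
    return min(2 + epoch // 2, target_K)
```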
Finally, it has been established that dropout (Srivastava et al., 2014) regularization improves the performance of LSTMs (Pham et al., 2014; Zaremba et al., 2014), and in our experiments we run beam search under dropout.6
For all experiments, we trained both seq2seq and BSO models with mini-batch Adagrad (Duchi et al., 2011) (using batches of size 64), and we renormalized all gradients so they did not exceed 5 before updating parameters. We did not extensively tune learning-rates, but we found initial rates of 0.02 for the encoder and decoder LSTMs, and a rate of 0.1 or 0.2 for the final linear layer (i.e., the layer tasked with making word-predictions at each time-step) to work well across all the tasks we considered. Code implementing the experiments described below can be found at https://github.com/harvardnlp/BSO.7
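A sketch of the gradient renormalization step, assuming a global-norm interpretation of "did not exceed 5" (the text does not specify global versus per-tensor norms):

```python
import numpy as np

def renormalize(grads, max_norm=5.0):
    # Rescale so the combined gradient norm stays below max_norm before
    # the Adagrad update; grads is a list of numpy arrays.
    total = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads
```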
# 5.3 Tasks and Results
Our experiments are primarily intended to evaluate the effectiveness of beam search optimization over standard seq2seq training. As such, we run experiments with the same model across three very different
5Andor et al. (2016) found, however, that pre-training only increased convergence-speed, but was not necessary for obtaining good results.
6However, it is important to ensure that the same mask applied at each time-step of the forward search is also applied at the corresponding step of the backward pass. We accomplish this by pre-computing masks for each time-step, and sharing them between the partial sequence LSTMs.
7Our code is based on Yoon Kim's seq2seq code, https://github.com/harvardnlp/seq2seq-attn.
problems: word ordering, dependency parsing, and machine translation. While we do not include all the features and extensions necessary to reach state-of-the-art performance, even the baseline seq2seq model is generally quite performant.
Word Ordering The task of correctly ordering the words in a shuffled sentence has recently gained some attention as a way to test the (syntactic) capabilities of text-generation systems (Zhang and Clark, 2011; Zhang and Clark, 2015; Liu et al., 2015; Schmaltz et al., 2016). We cast this task as a seq2seq problem by viewing a shuffled sentence as a source sentence, and the correctly ordered sentence as the target. While word ordering is a somewhat synthetic task, it has two interesting properties for our purposes. First, it is a task which plausibly requires search (due to the exponentially many possible orderings), and, second, there is a clear hard constraint on output sequences, namely, that they be a permutation of the source sequence. For both the baseline and BSO models we enforce this constraint at test-time. However, we also experiment with constraining the BSO model during training, as described in Section 4.2, by defining the succ function to only allow successor sequences containing unused words in the source sentence.
For experiments, we use the same PTB dataset (with the standard training, development, and test splits) and evaluation procedure as in Zhang and Clark (2015) and later work, with performance reported in terms of BLEU score with the correctly ordered sentences. For all word-ordering experiments we use 2-layer encoder and decoder LSTMs, each with 256 hidden units, and dropout with a rate of 0.2 between LSTM layers. We use simple 0/1 costs in defining the Δ function.
We show our test-set results in Table 1. We see that on this task there is a large improvement at each beam size from switching to BSO, and a further improvement from using the constrained model.
Inspired by a similar analysis in Daumé III and Marcu (2005), we further examine the relationship between Ktr and Kte when training with ConBSO in Table 2. We see that larger Ktr hurt greedy inference, but that results continue to improve, at least initially, when using a Kte that is (somewhat) bigger than Ktr − 1.
Word Ordering (BLEU)

           Kte = 1   Kte = 5   Kte = 10
seq2seq      25.2      29.8      31.0
BSO          28.0      33.2      34.3
ConBSO       28.6      34.3      34.5
LSTM-LM      15.4       -        26.8

Table 1: Word ordering. BLEU scores of seq2seq, BSO, constrained BSO, and a vanilla LSTM language model (from Schmaltz et al., 2016). All experiments above have Ktr = 6.
Word Ordering Beam Size (BLEU)

           Kte = 1   Kte = 5   Kte = 10
Ktr = 2     30.59     31.23     30.26
Ktr = 6     28.20     34.22     34.67
Ktr = 11    26.88     34.42     34.88
seq2seq     26.11     30.20     31.04

Table 2: Beam-size experiments on word ordering development set. All numbers reflect training with constraints (ConBSO).
Dependency Parsing We next apply our model to dependency parsing, which also has hard constraints and plausibly benefits from search. We treat dependency parsing with arc-standard transitions as a seq2seq task by attempting to map from a source sentence to a target sequence of source-sentence words interleaved with the arc-standard, reduce-actions in its parse. For example, we attempt to map the source sentence
But it was the Quotron problems that ...
to the target sequence
But it was @L_SBJ @L_DEP the Quotron problems @L_NMOD @L_NMOD that ...
We use the standard Penn Treebank dataset splits with Stanford dependency labels, and the standard UAS/LAS evaluation metric (excluding punctuation) following Chen and Manning (2014). All models thus see only the words in the source and, when decoding, the actions it has emitted so far; no other features are used. We use 2-layer encoder and decoder LSTMs with 300 hidden units per layer
Dependency Parsing (UAS/LAS)

           Kte = 1       Kte = 5       Kte = 10
seq2seq    85.11/79.32   86.91/82.11   87.33/82.26
BSO        88.53/84.16   91.00/87.18   91.17/87.41
ConBSO     88.66/84.33   91.25/86.92   91.57/87.26
Andor      93.17/91.18       -             -

Table 3: Dependency parsing. UAS/LAS of seq2seq, BSO, ConBSO and baselines on PTB test set. Andor is the current state-of-the-art model for this data set (Andor et al., 2016), and we note that with a beam of size 32 they obtain 94.41/92.55. All experiments above have Ktr = 6.
and dropout with a rate of 0.3 between LSTM layers. We replace singleton words in the training set with an UNK token, normalize digits to a single symbol, and initialize word embeddings for both source and target words from the publicly available word2vec (Mikolov et al., 2013) embeddings. We use simple 0/1 costs in defining the Δ function.
As in the word-ordering case, we also experiment with modifying the succ function in order to train under hard constraints, namely, that the emitted target sequence be a valid parse. In particular, we constrain the output at each time-step to obey the stack constraint, and we ensure words in the source are emitted in order.
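One possible constraint predicate for this setup is sketched below; treating any token beginning with "@" as a reduce action is an assumption about the target encoding illustrated earlier, not a detail taken from the released code.

```python
def parse_constraint(source_words):
    """Hypothetical hard constraint for the seq2seq parsing encoding:
    shift source words in order; only allow a reduce action when the
    stack holds at least two items."""
    def ok(prefix, w):
        stack, shifted = 0, 0
        for tok in prefix:
            if tok.startswith('@'):
                stack -= 1          # an arc-standard reduce pops one item net
            else:
                stack += 1          # shifting a word pushes one item
                shifted += 1
        if w.startswith('@'):
            return stack >= 2       # need two items to attach an arc
        return shifted < len(source_words) and w == source_words[shifted]
    return ok
```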
We show results on the test set in Table 3. BSO and ConBSO both show significant improvements over seq2seq, with ConBSO improving most on UAS, and BSO improving most on LAS. We achieve a reasonable final score of 91.57 UAS, which lags behind the state-of-the-art, but is promising for a general-purpose, word-only model.
Translation We finally evaluate our model on a small machine translation dataset, which allows us to experiment with a cost function that is not 0/1, and to consider other baselines that attempt to mitigate exposure bias in the seq2seq setting. We use the dataset from the work of Ranzato et al. (2016), which uses data from the German-to-English portion of the IWSLT 2014 machine translation evaluation campaign (Cettolo et al., 2014). The data comes from translated TED talks, and the dataset contains roughly 153K training sentences, 7K development sentences, and 7K test sentences. We use the same preprocessing and dataset splits as Ranzato et al. (2016),
Machine Translation (BLEU)

            Kte = 1   Kte = 5   Kte = 10
seq2seq      22.53     24.03     23.87
BSO, SB-Δ    23.83     26.36     25.48
XENT         17.74     20.10     20.28
DAD          20.12     22.25     22.40
MIXER        20.73     21.81     21.83

Table 4: Machine translation experiments on test set; results below the middle line are from the MIXER model of Ranzato et al. (2016). SB-Δ indicates sentence BLEU costs are used in defining Δ. XENT is similar to our seq2seq model but with a convolutional encoder and simpler attention. DAD trains seq2seq with scheduled sampling (Bengio et al., 2015). BSO, SB-Δ experiments above have Ktr = 6.
and like them we also use a single-layer LSTM decoder with 256 units. We also use dropout with a rate of 0.2 between each LSTM layer. We emphasize, however, that while our decoder LSTM is of the same size as that of Ranzato et al. (2016), our results are not directly comparable, because we use an LSTM encoder (rather than a convolutional encoder as they do), a slightly different attention mechanism, and input feeding (Luong et al., 2015).
For BSO training on this task, we set $\Delta(\hat{y}^{(K)}_{1:t})$ to $1 - \mathrm{SB}(\hat{y}^{(K)}_{r+1:t}, y_{r+1:t})$, where r is the last margin violation and SB denotes smoothed, sentence-level BLEU (Chen and Cherry, 2014). This setting of Δ should act to penalize erroneous predictions with a relatively low sentence-level BLEU score more than those with a relatively high sentence-level BLEU score. In Table 4 we show our final results and those from Ranzato et al. (2016).8 While we start with an improved baseline, we see similarly large increases in accuracy as those obtained by DAD and MIXER, in particular when Kte > 1.
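A sketch of such a cost, with simple add-one smoothing standing in for whichever smoothing variant of Chen and Cherry (2014) is actually used:

```python
from collections import Counter
import math

def delta_sentence_bleu(pred, gold, max_n=4):
    """1 minus a smoothed sentence-level BLEU of the violating suffix
    against the gold suffix; an illustrative sketch."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        pred_ng = Counter(tuple(pred[i:i + n]) for i in range(len(pred) - n + 1))
        gold_ng = Counter(tuple(gold[i:i + n]) for i in range(len(gold) - n + 1))
        match = sum(min(c, gold_ng[g]) for g, c in pred_ng.items())  # clipped
        total = max(sum(pred_ng.values()), 1)
        log_prec += math.log((match + 1.0) / (total + 1.0)) / max_n  # add-one
    bp = min(1.0, math.exp(1.0 - len(gold) / max(len(pred), 1)))     # brevity
    return 1.0 - bp * math.exp(log_prec)
```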
We further examine the effect of these sequence-level costs in Table 5, which compares using sentence-level BLEU costs in defining Δ with using 0/1 costs. We see that the more sophisticated sequence-level costs have a moderate effect on BLEU score.
8Some results from personal communication.
Machine Translation (BLEU)

         Kte = 1   Kte = 5   Kte = 10
0/1-Δ     25.73     28.21     27.43
SB-Δ      25.99     28.45     27.58

Table 5: BLEU scores obtained on the machine translation development data when training with $\Delta(\hat{y}^{(k)}_{1:t}) = 1$ (top) and $\Delta(\hat{y}^{(k)}_{1:t}) = 1 - \mathrm{SB}(\hat{y}^{(k)}_{r+1:t}, y_{r+1:t})$ (bottom), and Ktr = 6.
Timing Given Algorithm 1, we would expect training time to increase linearly with the size of the beam. On the above MT task, our highly tuned seq2seq baseline processes an average of 13,038 tokens/second (including both source and target tokens) on a GTX 970 GPU. For beams of size Ktr = 2, 3, 4, 5, and 6, our implementation processes on average 1,985, 1,768, 1,709, 1,521, and 1,458 tokens/second, respectively. Thus, we appear to pay an initial constant factor of approximately 3.3 due to the more complicated forward and backward passes, and then training scales with the size of the beam. Because we batch beam predictions on a GPU, however, we find that in practice training time scales sub-linearly with the beam-size.
# 6 Conclusion
We have introduced a variant of seq2seq and an associated beam search training scheme, which addresses exposure bias as well as label bias, and moreover allows for both training with sequence-level cost functions as well as with hard constraints. Future work will examine scaling this approach to much larger datasets.
# Acknowledgments
We thank Yoon Kim for helpful discussions and for providing the initial seq2seq code on which our implementations are based. We thank Allen Schmaltz for help with the word ordering experiments. We also gratefully acknowledge the support of a Google Research Award.
# References
[Andor et al.2016] Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman
Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. ACL.
[Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
[Bahdanau et al.2016] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An Actor-Critic Algorithm for Sequence Prediction. CoRR, abs/1607.07086.
[Bengio et al.2015] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179.
[Björkelund and Kuhn2014] Anders Björkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. In ACL, Baltimore, MD, USA, June.
[Cettolo et al.2014] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proceedings of IWSLT, 2014.
[Chang et al.2015] Kai-Wei Chang, Hal Daum´e III, John Langford, and Stephane Ross. 2015. Efï¬cient pro- grammable learning to search. In Arxiv.
[Chen and Cherry2014] Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing tech- niques for sentence-level bleu. ACL 2014, page 362. [Chen and Manning2014] Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740â 750.
[Cho et al.2014] KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder- decoder approaches. Eighth Workshop on Syntax, Se- mantics and Structure in Statistical Translation.
[Collins and Roark2004] Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 111. Association for Computational Linguistics.

[Daumé III and Marcu2005] Hal Daumé III and Daniel Marcu. 2005. Learning as search optimization: approximate large margin methods for structured prediction. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML 2005), pages 169–176.
[Daumé III et al.2009] Hal Daumé III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning, 75(3):297–325.
[Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for On- line Learning and Stochastic Optimization. The Jour- nal of Machine Learning Research, 12:2121â2159. [Filippova et al.2015] Katja Filippova, Enrique Alfon- seca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360â368.
[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9:1735–1780.
[Huang et al.2012] Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 142â151. Association for Computational Linguistics.
[Kingsbury2009] Brian Kingsbury. 2009. Lattice-based optimization of sequence classiï¬cation criteria for In Acoustics, neural-network acoustic modeling. Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on, pages 3761â3764. IEEE.
[Lafferty et al.2001] John D. Lafferty, Andrew McCal- lum, and Fernando C. N. Pereira. 2001. Condi- tional random ï¬elds: Probabilistic models for seg- menting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), pages 282â289.
[Liu et al.2015] Yijia Liu, Yue Zhang, Wanxiang Che, and Bing Qin. 2015. Transition-based syntactic lineariza- tion. In Proceedings of NAACL.
[Luong et al.2015] Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 1412–1421.
[Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Dis- tributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111â3119.
[Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association
for computational linguistics, pages 311â318. Associ- ation for Computational Linguistics.
[Pham et al.2014] Vu Pham, Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. 2014. Dropout improves recurrent neural networks for handwriting recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pages 285–290. IEEE.
[Ranzato et al.2016] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. ICLR.
[Ross et al.2011] Stéphane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627–635.
[Sak et al.2014] Hasim Sak, Oriol Vinyals, Georg Heigold, Andrew W. Senior, Erik McDermott, Rajat Monga, and Mark Z. Mao. 2014. Sequence discrimi- native distributed training of long short-term memory In INTERSPEECH 2014, recurrent neural networks. pages 1209â1213.
[Schmaltz et al.2016] Allen Schmaltz, Alexander M Rush, and Stuart M Shieber. 2016. Word ordering without syntax. arXiv preprint arXiv:1604.08633. [Serban et al.2016] Iulian Vlad Serban, Alessandro Sor- doni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artiï¬cial Intelligence, pages 3776â3784.
[Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2016. [Srivastava et al.2014] Nitish Srivastava, Geoffrey Hin- ton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to pre- vent neural networks from overï¬tting. The Journal of Machine Learning Research, 15(1):1929â1958.
[Sutskever et al.2011] Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recur- rent neural networks. In Proceedings of the 28th In- ternational Conference on Machine Learning (ICML), pages 1017â1024.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Informa- tion Processing Systems (NIPS), pages 3104â3112. [Venugopalan et al.2015] Subhashini Venugopalan, Mar- cus Rohrbach, Jeffrey Donahue, Raymond J. Mooney,
Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence - video to text. In ICCV, pages 4534â4542. [Vinyals et al.2015] Oriol Vinyals, Åukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755â 2763.
[Voigtlaender et al.2015] Paul Voigtlaender, Patrick Doetsch, Simon Wiesler, Ralf Schlüter, and Hermann Ney. 2015. Sequence-discriminative training of recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 2100–2104. IEEE.
[Watanabe and Sumita2015] Taro Watanabe and Eiichiro Sumita. 2015. Transition-based neural constituent parsing. Proceedings of ACL-IJCNLP.
[Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML, pages 2048–2057.
[Yazdani and Henderson2015] Majid Yazdani and James Henderson. 2015. Incremental recurrent neural net- work dependency parser with search-based discrimi- In Proceedings of the 19th Confer- native training. ence on Computational Natural Language Learning, (CoNLL 2015), pages 142â152.
[Zaremba et al.2014] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. CoRR, abs/1409.2329.
[Zhang and Clark2011] Yue Zhang and Stephen Clark. 2011. Syntax-based grammaticality improvement us- ing ccg and guided search. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 1147â1157. Association for Com- putational Linguistics.
[Zhang and Clark2015] Yue Zhang and Stephen Clark. 2015. Discriminative syntax-based word order- ing for text generation. Computational Linguistics, 41(3):503â538.
[Zhou et al.2015] Hao Zhou, Yue Zhang, and Jiajun Chen. 2015. A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics, pages 1213â 1222. | {
"id": "1604.08633"
} |
# Incorporating Discrete Translation Lexicons into Neural Machine Translation
Philip Arthur†, Graham Neubig†‡, Satoshi Nakamura†
†Graduate School of Information Science, Nara Institute of Science and Technology
‡Language Technologies Institute, Carnegie Mellon University
philip.arthur.om0@is.naist.jp gneubig@cs.cmu.edu s-nakamura@is.naist.jp
# Abstract
Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.1
# 1 Introduction
Neural machine translation (NMT, §2; Kalchbrenner and Blunsom (2013), Sutskever et al. (2014)) is a variant of statistical machine translation (SMT; Brown et al. (1993)), using neural networks. NMT has recently gained popularity due to its ability to model the translation process end-to-end using a single probabilistic model, and for its state-of-the-art performance on several language pairs (Luong et al., 2015a; Sennrich et al., 2016).
Input: I come from Tunisia.
Reference: チュニジア の 出身です。 Chunisia no shusshindesu. ("I'm from Tunisia.")
System: ノルウェー の 出身です。 Noruuē no shusshindesu. ("I'm from Norway.")
Figure 1: An example of a mistake made by NMT on low-frequency content words.
continuous-valued numbers. This is in contrast to more traditional SMT methods such as phrase-based machine translation (PBMT; Koehn et al. (2003)), which represent translations as discrete pairs of word strings in the source and target languages. The use of continuous representations is a major advan- tage, allowing NMT to share statistical power be- tween similar words (e.g. âdogâ and âcatâ) or con- texts (e.g. âthis isâ and âthat isâ). However, this property also has a drawback in that NMT systems often mistranslate into words that seem natural in the context, but do not reï¬ect the content of the source sentence. For example, Figure 1 is a sentence from our data where the NMT system mistakenly trans- lated âTunisiaâ into the word for âNorway.â This variety of error is particularly serious because the content words that are often mistranslated by NMT are also the words that play a key role in determining the whole meaning of the sentence.
One feature of NMT systems is that they treat each word in the vocabulary as a vector of
1Tools to replicate our experiments can be found at http://isw3.naist.jp/~philip-a/emnlp2016/index.html
In contrast, PBMT and other traditional SMT methods tend to rarely make this kind of mistake. This is because they base their translations on discrete phrase mappings, which ensure that source words will be translated into a target word that has
been observed as a translation at least once in the training data. In addition, because the discrete mappings are memorized explicitly, they can be learned efficiently from as little as a single instance (barring errors in word alignments). Thus we hypothesize that if we can incorporate a similar variety of information into NMT, this has the potential to alleviate problems with the previously mentioned fatal errors on low-frequency words.
In this paper, we propose a simple, yet effective method to incorporate discrete, probabilistic lexicons as an additional information source in NMT (§3). First we demonstrate how to transform lexical translation probabilities (§3.1) into a predictive probability for the next word by utilizing attention vectors from attentional NMT models (Bahdanau et al., 2015). We then describe methods to incorporate this probability into NMT, either through linear interpolation with the NMT probabilities (§3.2.2) or as the bias to the NMT predictive distribution (§3.2.1). We construct these lexicon probabilities by using traditional word alignment methods on the training data (§4.1), other external parallel data resources such as a handmade dictionary (§4.2), or using a hybrid between the two (§4.3).
We perform experiments (§5) on two English-Japanese translation corpora, which demonstrate the method's utility in improving translation accuracy and reducing the time required for training.
# 2 Neural Machine Translation
The goal of machine translation is to translate a sequence of source words $F = f_1^{|F|}$ into a sequence of target words $E = e_1^{|E|}$. These words belong to the source vocabulary Vf and the target vocabulary Ve respectively. NMT performs this translation by calculating the conditional probability $p_m(e_i \mid F, e_1^{i-1})$ of the ith target word ei based on the source F and the preceding target words $e_1^{i-1}$. This is done by encoding the context $\langle F, e_1^{i-1} \rangle$ into a fixed-width vector ηi, and calculating the probability as follows:
$$p_m(e_i \mid F, e_1^{i-1}) = \operatorname{softmax}(W_s \eta_i + b_s), \quad (1)$$

where Ws and bs are respectively weight matrix and bias vector parameters.
The exact variety of the NMT model depends on how we calculate ηi used as input. While there
are many methods to perform this modeling, we opt to use attentional models (Bahdanau et al., 2015), which focus on particular words in the source sentence when calculating the probability of ei. These models represent the current state of the art in NMT, and are also convenient for use in our proposed method. Specifically, we use the method of Luong et al. (2015a), which we describe briefly here and refer readers to the original paper for details.
First, an encoder converts the source sentence F into a matrix R where each column represents a single word in the input sentence as a continuous vector. This representation is generated using a bidirectional encoder
$$\overrightarrow{r}_j = \operatorname{enc}(\operatorname{embed}(f_j), \overrightarrow{r}_{j-1}) \qquad \overleftarrow{r}_j = \operatorname{enc}(\operatorname{embed}(f_j), \overleftarrow{r}_{j+1}) \qquad r_j = [\overleftarrow{r}_j; \overrightarrow{r}_j].$$
Here the embed(·) function maps the words into a representation (Bengio et al., 2003), and enc(·) is a stacking long short term memory (LSTM) neural network (Hochreiter and Schmidhuber, 1997; Gers et al., 2000; Sutskever et al., 2014). Finally we concatenate the two vectors $\overrightarrow{r}_j$ and $\overleftarrow{r}_j$ into a bidirectional representation rj. These vectors are further concatenated into the matrix R where the jth column corresponds to rj.
Next, we generate the output one word at a time while referencing this encoded input sentence and tracking progress with a decoder LSTM. The decoder's hidden state hi is a fixed-length continuous vector representing the previous target words $e_1^{i-1}$, initialized as h0 = 0. Based on this hi, we calculate a similarity vector αi, with each element equal to
αi,j = sim(hi, rj). (2)
sim(·) can be an arbitrary similarity function, which we set to the dot product, following Luong et al. (2015a). We then normalize this into an attention vector, which weights the amount of focus that we put on each word in the source sentence
ai = softmax(αi). (3)
This attention vector is then used to weight the encoded representation R to create a context vector ci for the current time step
$$c_i = R a_i.$$
Finally, we create ηi by concatenating the previous hidden state hi−1 with the context vector, and performing an affine transform
$$\eta_i = W_\eta [h_{i-1}; c_i] + b_\eta.$$
Once we have this representation of the current state, we can calculate $p_m(e_i \mid F, e_1^{i-1})$ according to Equation (1). The next word ei is chosen according to this probability, and we update the hidden state by inputting the chosen word into the decoder LSTM
$$h_i = \operatorname{enc}(\operatorname{embed}(e_i), h_{i-1}). \quad (4)$$
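Putting Equations (2)-(3) and the context computation together, a numpy sketch of one attention step (assuming dot-product similarity, as in the text) looks as follows; names are illustrative.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attention_step(h_i, R):
    """One attention step: R has shape (d, |F|) with column j = r_j."""
    alpha = R.T @ h_i    # alpha_{i,j} = sim(h_i, r_j), dot product (Eq. 2)
    a_i = softmax(alpha) # attention vector (Eq. 3)
    c_i = R @ a_i        # context vector: attention-weighted source summary
    return a_i, c_i
```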
If we deï¬ne all the parameters in this model as θ, we can then train the model by minimizing the negative log-likelihood of the training data
$$\hat{\theta} = \operatorname*{argmin}_{\theta} \sum_{\langle F, E \rangle} \sum_{i} -\log p_m(e_i \mid F, e_1^{i-1}; \theta).$$

# 3 Integrating Lexicons into NMT
In §2 we described how traditional NMT models calculate the probability of the next target word $p_m(e_i \mid e_1^{i-1}, F)$. Our goal in this paper is to improve the accuracy of this probability estimate by incorporating information from discrete probabilistic lexicons. We assume that we have a lexicon that, given a source word f, assigns a probability pl(e|f) to target word e. For a source word f, this probability will generally be non-zero for a small number of translation candidates, and zero for the majority of words in Ve. In this section, we first describe how we incorporate these probabilities into NMT, and explain how we actually obtain the pl(e|f) probabilities in §4.
# 3.1 Converting Lexicon Probabilities into Conditioned Predictive Probabilities
First, we need to convert lexical probabilities pl(e|f) for the individual words in the source sentence F to a form that can be used together with $p_m(e_i \mid e_1^{i-1}, F)$. Given input sentence F, we can construct a matrix in which each column corresponds to a word in the input sentence, each row corresponds to a word in Ve, and each entry corresponds to the appropriate lexical probability:
$$L_F = \begin{pmatrix} p_l(e=1 \mid f_1) & \cdots & p_l(e=1 \mid f_{|F|}) \\ \vdots & \ddots & \vdots \\ p_l(e=|V_e| \mid f_1) & \cdots & p_l(e=|V_e| \mid f_{|F|}) \end{pmatrix}.$$
This matrix can be precomputed during the encoding stage because it only requires information about the source sentence F .
Next we convert this matrix into a predictive probability over the next word: $p_l(e_i \mid F, e_1^{i-1})$. To do so we use the alignment probability a from Equation (3) to weight each column of the LF matrix:

$$p_l(e_i \mid F, e_1^{i-1}) = L_F a_i.$$
This calculation is similar to the way attentional models calculate the context vector ci, but over a vector representing the probabilities of the target vocabulary, instead of the distributed representations of the source words. The process of involving ai is important because at every time step i, the lexical probability $p_l(e_i \mid e_1^{i-1}, F)$ will be influenced by different source words.
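A sketch of precomputing L_F and applying the equation above; the dictionary-based `lexicon` interface is an illustrative assumption, not the released implementation.

```python
import numpy as np

def build_lexicon_matrix(source_ids, lexicon, target_vocab_size):
    """Column j of L_F holds p_l(. | f_j) over the target vocabulary;
    lexicon is assumed to map a source-word id to {target id: prob}."""
    L = np.zeros((target_vocab_size, len(source_ids)))
    for j, f in enumerate(source_ids):
        for e, p in lexicon.get(f, {}).items():
            L[e, j] = p
    return L

def lexicon_predictive_prob(L_F, a_i):
    # p_l(e_i | F, e_1^{i-1}) = L_F a_i: each source word's translation
    # distribution weighted by its attention weight.
    return L_F @ a_i
```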
# 3.2 Combining Predictive Probabilities
After calculating the lexicon predictive probability $p_l(e_i \mid e_1^{i-1}, F)$, next we need to integrate this probability with the NMT model probability $p_m(e_i \mid e_1^{i-1}, F)$. To do so, we examine two methods: (1) adding it as a bias, and (2) linear interpolation.
# 3.2.1 Model Bias
In our first bias method, we use pl(·) to bias the probability distribution calculated by the vanilla NMT model. Specifically, we add a small constant ε to pl(·), take the logarithm, and add this adjusted log probability to the input of the softmax as follows:
$$p_b(e_i \mid F, e_1^{i-1}) = \operatorname{softmax}\bigl(W_s \eta_i + b_s + \log(p_l(e_i \mid F, e_1^{i-1}) + \epsilon)\bigr).$$
We take the logarithm of pl(·) so that the values will still be in the probability domain after the softmax is calculated, and add the hyper-parameter ε to prevent zero probabilities from becoming −∞ after taking the log. When ε is small, the model will be more heavily biased towards using the lexicon, and when ε is larger the lexicon probabilities will be given less weight. We use ε = 0.001 for this paper.
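A sketch of the bias combination, with ε = 0.001 as in the text; the parameter names are illustrative, not from the released code.

```python
import numpy as np

def bias_combination(W_s, b_s, eta_i, p_lex, eps=1e-3):
    # Add log(p_l + eps) to the softmax input; eps keeps zero lexicon
    # probabilities from becoming -inf.
    logits = W_s @ eta_i + b_s + np.log(p_lex + eps)
    e = np.exp(logits - logits.max())
    return e / e.sum()
```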
# 3.2.2 Linear Interpolation
We also attempt to incorporate the two probabilities through linear interpolation between the standard NMT model probability pm(·) and the lexicon probability pl(·). We will call this the linear method, and define it as follows:
$$p_o(e_i \mid F, e_1^{i-1}) = \lambda\, p_l(e_i \mid F, e_1^{i-1}) + (1-\lambda)\, p_m(e_i \mid F, e_1^{i-1}),$$

where λ is an interpolation coefficient that is the result of the sigmoid function $\lambda = \operatorname{sig}(x) = \frac{1}{1 + e^{-x}}$. x is a learnable parameter, and the sigmoid function ensures that the final interpolation level falls between 0 and 1. We choose x = 0 (λ = 0.5) at the beginning of training.
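The linear method reduces to the following sketch, with λ parameterized through the sigmoid of a learnable scalar x (initialized to 0, giving λ = 0.5):

```python
import numpy as np

def linear_combination(p_model, p_lex, x=0.0):
    # lambda = sigmoid(x); the sigmoid keeps the mixture weight in (0, 1).
    lam = 1.0 / (1.0 + np.exp(-x))
    return lam * p_lex + (1.0 - lam) * p_model
```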
This method is partly inspired by Allamanis et al. (2016) and Gu et al. (2016), who use linear interpolation to merge a standard attentional model with a "copy" operator that copies a source word as-is into the target sentence. The main difference is that they use this to copy words into the output while our method uses it to influence the probabilities of all target words.
# 4 Constructing Lexicon Probabilities
In the previous section, we have defined some ways to use predictive probabilities $p_l(e_i \mid F, e_1^{i-1})$ based on word-to-word lexical probabilities pl(e|f). Next, we define three ways to construct these lexical probabilities using automatically learned lexicons, handmade lexicons, or a combination of both.
# 4.1 Automatically Learned Lexicons
In traditional SMT systems, lexical translation prob- abilities are generally learned directly from parallel data in an unsupervised fashion using a model such as the IBM models (Brown et al., 1993; Och and Ney, 2003). These models can be used to estimate the alignments and lexical translation probabilities pl(e|f ) between the tokens of the two languages us- ing the expectation maximization (EM) algorithm.
First, in the expectation step, the algorithm estimates the expected count c(e|f). In the maximization step, lexical probabilities are calculated by dividing the expected count by all possible counts:
$$p_{l,a}(e \mid f) = \frac{c(f, e)}{\sum_{\tilde{e}} c(f, \tilde{e})}.$$
The IBM models vary in level of refinement, with Model 1 relying solely on these lexical probabilities, and later IBM models (Models 2, 3, 4, 5) introducing more sophisticated models of fertility and relative alignment. Even though IBM models also occasionally have problems, for example "garbage collecting" effects on rare words (e.g. (Liang et al., 2006)), traditional SMT systems generally achieve better translation accuracies of low-frequency words than NMT systems (Sutskever et al., 2014), indicating that these problems are less prominent than they are in NMT.
Note that in many cases, NMT limits the target vocabulary (Jean et al., 2015) for training speed or memory constraints, resulting in rare words not being covered by the NMT vocabulary Ve. Accordingly, we allocate the remaining probability assigned by the lexicon to the unknown word symbol ⟨unk⟩:
$$p_{l,a}(e = \langle \text{unk} \rangle \mid f) = 1 - \sum_{i \in V_e} p_{l,a}(e = i \mid f). \quad (5)$$
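A sketch combining the M-step normalization with Equation (5); the `expected_counts` structure is an assumed representation of c(f, e), not the actual GIZA++ output format.

```python
from collections import defaultdict

def lexical_probs(expected_counts, target_vocab):
    """expected_counts[f][e] stands in for c(f, e) from the E-step."""
    probs = defaultdict(dict)
    for f, counts in expected_counts.items():
        total = sum(counts.values())
        covered = 0.0
        for e, c in counts.items():
            if e in target_vocab:          # words kept in the NMT vocabulary
                probs[f][e] = c / total
                covered += c / total
        probs[f]['<unk>'] = 1.0 - covered  # Equation (5)
    return probs
```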
# 4.2 Manual Lexicons
In addition, for many language pairs, broad-coverage handmade dictionaries exist, and it is desirable that we be able to use the information included in them as well. Unlike automatically learned lexicons, however, handmade dictionaries generally do not contain translation probabilities. To construct the probability pl(e|f), we define the set of translations Kf existing in the dictionary for a particular source word f, and assume a uniform distribution over these words:
$$p_{l,m}(e \mid f) = \begin{cases} \frac{1}{|K_f|} & \text{if } e \in K_f \\ 0 & \text{otherwise.} \end{cases}$$
Following Equation (5), unknown source words will assign their probability mass to the ⟨unk⟩ tag.
# 4.3 Hybrid Lexicons
Handmade lexicons have broad coverage of words but their probabilities might not be as accurate as the
Data    Corpus   Sentences   Ja Tokens   En Tokens
Train   BTEC     464K        3.60M       4.97M
        KFTT     377K        7.77M       8.04M
Dev     BTEC     510         3.8K        5.3K
        KFTT     1160        24.3K       26.8K
Test    BTEC     508         3.8K        5.5K
        KFTT     1169        26.0K       28.4K

Table 1: Corpus details.
learned ones, particularly if the automatic lexicon is constructed on in-domain data. Thus, we also test a hybrid method where we use the handmade lexicons to complement the automatically learned lexicon.2 3 Specifically, inspired by phrase table fill-up used in PBMT systems (Bisazza et al., 2011), we use the probability of the automatically learned lexicons pl,a by default, and fall back to the handmade lexicons pl,m only for uncovered words:
$$p_{l,h}(e \mid f) = \begin{cases} p_{l,a}(e \mid f) & \text{if } f \text{ is covered} \\ p_{l,m}(e \mid f) & \text{otherwise.} \end{cases} \quad (6)$$
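The fill-up rule in Equation (6) can be sketched as follows, with the manual lexicon represented as lists of dictionary translations; the data structures are illustrative assumptions.

```python
def hybrid_lexicon(auto_lex, manual_lex):
    """auto_lex[f] is a learned {target: prob} distribution; manual_lex[f]
    is a list of dictionary translations given uniform probability."""
    def p_hyb(f):
        if f in auto_lex:                 # covered by the learned lexicon
            return auto_lex[f]
        entries = manual_lex.get(f, [])
        if not entries:                   # uncovered everywhere: all mass to <unk>
            return {'<unk>': 1.0}
        u = 1.0 / len(entries)
        return {e: u for e in entries}    # uniform over handmade translations
    return p_hyb
```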
# 5 Experiment & Result
In this section, we describe experiments we use to evaluate our proposed methods.
# 5.1 Settings
Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT (Neubig, 2011) and BTEC (Kikui et al., 2003). KFTT is a collection of Wikipedia articles about the city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are given in Table 1.
We tokenize English according to the Penn Tree- bank standard (Marcus et al., 1993) and lowercase,
2Alternatively, we could imagine a method where we combined the training data and dictionary before training the word alignments to create the lexicon. We attempted this, and results were comparable to or worse than the fill-up method, so we use the fill-up method for the remainder of the paper.
3While most words in Vf will be covered by the learned lexicon, many words (13% in experiments) are still left uncovered due to alignment failures or other factors.
and tokenize Japanese using KyTea (Neubig et al., 2011). We limit training sentence length up to 50 in both experiments and keep the test data at the original length. We replace words of frequency less than a threshold u in both languages with the ⟨unk⟩ symbol and exclude them from our vocabulary. We choose u = 1 for BTEC and u = 3 for KFTT, resulting in |Vf| = 17.8k, |Ve| = 21.8k for BTEC and |Vf| = 48.2k, |Ve| = 49.1k for KFTT.

NMT Systems: We build the described models using the Chainer4 toolkit. The depth of the stacking LSTM is d = 4 and hidden node size h = 800. We concatenate the forward and backward encodings (resulting in a 1600-dimension vector) and then perform a linear transformation to 800 dimensions. We train the system using the Adam (Kingma and Ba, 2014) optimization method with the default settings: α = 1e−3, β1 = 0.9, β2 = 0.999, ε = 1e−8. Additionally, we add dropout (Srivastava et al., 2014) with drop rate r = 0.2 at the last layer of each stacking LSTM unit to prevent overfitting. We use a batch size of B = 64 and we run a total of N = 14 iterations for all data sets. All of the experiments are conducted on a single GeForce GTX TITAN X GPU with a 12 GB memory cache.
At test time, we use beam search with beam size b = 5. We follow Luong et al. (2015b) in replacing every unknown token at position i with the target token that maximizes the probability pl,a(ei|fj). We choose source word fj according to the highest alignment score in Equation (3). This unknown word replacement is applied to both baseline and proposed systems. Finally, because NMT models tend to give higher probabilities to shorter sentences (Cho et al., 2014), we discount the probability of the ⟨EOS⟩ token by 10% to correct for this bias.

Traditional SMT Systems: We also prepare two traditional SMT systems for comparison: a PBMT system (Koehn et al., 2003) using Moses5 (Koehn et al., 2007), and a hierarchical phrase-based MT system (Chiang, 2007) using Travatar6 (Neubig, 2013). Systems are built using the default settings, with models trained on the training data, and weights tuned on the development data.

Lexicons: We use a total of 3 lexicons for the
4http://chainer.org/index.html 5http://www.statmt.org/moses/ 6http://www.phontron.com/travatar/
              BTEC                        KFTT
          BLEU     NIST    Recall    BLEU     NIST    Recall
pbmt      48.18
hiero     52.27
attn      48.31    5.98              20.86    5.15
auto-bias 49.74†   6.11              23.20†   5.59†
hyb-bias  50.34†   6.10              22.80†   5.55†

Table 2: Accuracies for the baseline attentional NMT (attn) and the proposed bias-based method using the automatic (auto-bias) or hybrid (hyb-bias) dictionaries. Bold indicates a gain over the attn baseline, † indicates a significant increase at p < 0.05, and ‡ indicates p < 0.10. Traditional phrase-based (pbmt) and hierarchical phrase-based (hiero) systems are shown for reference.
Lexicons: We use a total of 3 lexicons for the proposed method, and apply the bias and linear methods to all of them, totaling 6 experiments. The first lexicon (auto) is built on the training data using the automatically learned lexicon method of §4.1, separately for both the BTEC and KFTT experiments. Automatic alignment is performed using GIZA++ (Och and Ney, 2003). The second lexicon (man) is built using the popular English-Japanese dictionary Eijiro7 with the manual lexicon method of §4.2. Eijiro contains 104K distinct word-to-word translation entries. The third lexicon (hyb) is built by combining the first and second lexicon with the hybrid method of §4.3.

Evaluation: We use standard single-reference BLEU-4 (Papineni et al., 2002) to evaluate translation performance. Additionally, we also use NIST (Doddington, 2002), which is a measure that puts a particular focus on low-frequency word strings, and thus is sensitive to the low-frequency words we are focusing on in this paper. We measure statistically significant differences between systems using paired bootstrap resampling (Koehn, 2004) with 10,000 iterations, and measure statistical significance at the p < 0.05 and p < 0.10 levels.
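To make the unknown-word replacement from the test-time decoding described above concrete, here is one possible sketch; the data layout, NumPy usage, and function name are assumptions for illustration:

```python
import numpy as np

def replace_unknowns(output, source, attention, p_lex):
    """Post-edit <unk> tokens in an NMT output.

    For an <unk> at target position i, pick the source word f_j with the
    highest attention weight attention[i, j], then substitute the target
    word maximizing the lexicon probability p_lex[f_j]. attention is an
    (output_len, source_len) array; p_lex maps a source word to a dict
    {target word: probability}.
    """
    fixed = []
    for i, tok in enumerate(output):
        if tok == "<unk>":
            j = int(np.argmax(attention[i]))
            candidates = p_lex.get(source[j], {})
            if candidates:
                tok = max(candidates, key=candidates.get)
        fixed.append(tok)
    return fixed
```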
Additionally, we also calculate the recall of rare words from the references. We define "rare words" as words that appear less than eight times in the target training corpus or references, and measure the percentage of the time they are recovered by each translation system.
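One plausible implementation of this recall measure is sketched below; the exact matching scheme in the paper (e.g. how repeated rare words are counted) may differ from this illustration:

```python
from collections import Counter

def rare_word_recall(references, hypotheses, train_counts, cutoff=8):
    """Fraction of rare reference words recovered by a system.

    A word is 'rare' if it appears fewer than `cutoff` times in the
    target training corpus; references and hypotheses are lists of
    token lists, and train_counts maps words to training frequencies.
    """
    recovered, total = 0, 0
    for ref, hyp in zip(references, hypotheses):
        hyp_counts = Counter(hyp)
        for tok in ref:
            if train_counts.get(tok, 0) < cutoff:
                total += 1
                if hyp_counts[tok] > 0:
                    recovered += 1
    return recovered / total if total else 0.0
```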
# 5.2 Effect of Integrating Lexicons
In this section, we first perform a detailed examination of the utility of the proposed bias method when used
7http://eijiro.jp
[Plot of BLEU (0–20) against training time in minutes (0–4000) for attn, auto-bias, and hyb-bias.]
Figure 2: Training curves for the baseline attn and the proposed bias method.
with the auto or hyb lexicons, which empirically gave the best results, and then perform a comparison among the other lexicon integration methods in the following section. Table 2 shows the results of these methods, along with the corresponding baselines.
First, compared to the baseline attn, our bias method achieved consistently higher scores on both test sets. In particular, the gains on the more difficult KFTT set are large, up to 2.3 BLEU, 0.44 NIST, and 30% Recall, demonstrating the utility of the proposed method in the face of more diverse content and fewer high-frequency words.
Compared to the traditional SMT systems pbmt and hiero, particularly on KFTT, we can see that the proposed method allows the NMT system to exceed the traditional SMT methods in BLEU. This is despite the fact that we are not performing ensembling, which has proven to be essential to exceed traditional systems in several previous works (Sutskever
Input      Do you have an opinion regarding extramarital affairs?
Reference  不倫 に 関して 意見 が あります か 。
           Furin ni kanshite iken ga arimasu ka.
attn       サッカー に 関する 意見 は あります か 。
           Sakkā ni kansuru iken wa arimasu ka.
           (Do you have an opinion about soccer?)
auto-bias  不倫 に 関して 意見 が あります か 。
           Furin ni kanshite iken ga arimasu ka.
           (Do you have an opinion about affairs?)

Input      Could you put these fragile things in a safe place?
Reference  この 壊れ物 を 安全な 場所 に 置いて もらえません か 。
           Kono kowaremono o anzen'na basho ni oite moraemasen ka.
attn       貴重品 を 安全 に 出したい の ですが 。
           Kichō-hin o anzen ni dashitai nodesuga.
           (I'd like to safely put out these valuables.)
auto-bias  この 壊れ物 を 安全な 場所 に 置いて もらえません か 。
           Kono kowaremono o anzen'na basho ni oite moraemasen ka.
           (Could you put these fragile things in a safe place?)

Table 3: Examples where the proposed auto-bias improved over the baseline system attn. Underlines indicate words that were mistaken in the baseline output but correct in the proposed model's output.
et al., 2014; Luong et al., 2015a; Sennrich et al., 2016). Interestingly, despite gains in BLEU, the NMT methods still fall behind in NIST score on the KFTT data set, demonstrating that traditional SMT systems still tend to have a small advantage in translating lower-frequency words, despite the gains made by the proposed method.
In Table 3, we show some illustrative examples where the proposed method (auto-bias) was able to obtain a correct translation while the normal attentional model was not. The first example is a mistake in translating "extramarital affairs" into the Japanese equivalent of "soccer," entirely changing the main topic of the sentence. This is typical of the errors that we have observed NMT systems make (the mistake from Figure 1 is also from attn, and was fixed by our proposed method). The second example demonstrates how these mistakes can then affect the process of choosing the remaining words, propagating the error through the whole sentence.
Next, we examine the effect of the proposed method on the training time for each neural MT method, drawing training curves for the KFTT data in Figure 2. Here we can see that the proposed bias training methods achieve reasonable BLEU scores in the upper 10s even after the first iteration. In contrast, the baseline attn method has a BLEU score of around 5 after the first iteration, and takes significantly longer to approach values close to its maximal accuracy.
Figure 3: Attention matrices for the baseline attn and proposed bias methods. Lighter colors indicate stronger attention between the words, and boxes surrounding words indicate the correct alignments.
This shows that by incorporating lexical probabilities, we can effectively bootstrap the learning of the NMT system, allowing it to approach an appropriate answer in a more timely fashion.8
8Note that these gains come despite the fact that one iteration of the proposed method takes longer (167 minutes for attn vs. 275 minutes for auto-bias) due to the necessity of calculating and using the lexical probability matrix for each sentence. It also takes an additional 297 minutes to train the lexicon with GIZA++, but this can be greatly reduced with more efficient training methods (Dyer et al., 2013).
(a) BTEC

Lexicon   BLEU              NIST
          bias     linear   bias    linear
-         48.31    -        5.98    -
auto      49.74†   47.97    6.11    5.90
man       49.08    51.04†   6.03†   6.14†
hyb       50.34†   49.27    6.10†   5.94

(b) KFTT

Lexicon   BLEU              NIST
          bias     linear   bias    linear
-         20.86    -        5.15    -
auto      23.20†   18.19    5.59†   4.61
man       20.78    20.88    5.12    5.11
hyb       22.80†   20.33    5.55†   5.03

Table 4: A comparison of the bias and linear lexicon integration methods on the automatic, manual, and hybrid lexicons. The first line, without a lexicon, is the traditional attentional NMT.
It is also interesting to examine the alignment vectors produced by the baseline and proposed methods, a visualization of which we show in Figure 3. For this sentence, the outputs of both methods were identical and correct, but we can see that the proposed method (right) placed sharper attention on the actual source word corresponding to content words in the target sentence. This trend of peakier attention distributions in the proposed method held throughout the corpus, with the per-word entropy of the attention vectors being 3.23 bits for auto-bias, compared with 3.81 bits for attn, indicating that the auto-bias method places more certainty in its attention decisions.
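The per-word attention entropy reported above can be computed as follows; this sketch assumes one attention matrix per sentence, with a row per target word, and is our illustration rather than the paper's code:

```python
import numpy as np

def mean_attention_entropy(attention_matrices, eps=1e-12):
    """Average per-word entropy (in bits) of attention distributions;
    lower entropy corresponds to peakier, more certain attention."""
    entropies = []
    for a in attention_matrices:
        p = np.clip(a, eps, 1.0)                  # avoid log(0)
        entropies.extend((-(p * np.log2(p)).sum(axis=1)).tolist())
    return float(np.mean(entropies))
```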
# 5.3 Comparison of Integration Methods
Finally, we perform a full comparison between the various methods for integrating lexicons into the translation process, with results shown in Table 4. In general, the bias method improves accuracy for the auto and hyb lexicons, but is less effective for the man lexicon. This is likely due to the fact that the manual lexicon, despite having broad coverage, did not sufficiently cover target-domain words (coverage of unique words in the source vocabulary was 35.3% and 9.7% for BTEC and KFTT respectively). The trend is reversed for the linear method, which improves the man systems,
but causes decreases when using the auto and hyb lexicons. This indicates that the linear method is more suited for cases where the lexicon does not closely match the target domain and plays a more complementary role. Compared to the log-linear modeling of bias, which strictly enforces the constraints imposed by the lexicon distribution (Klakow, 1998), linear interpolation is intuitively more appropriate for integrating this type of complementary information.
On the other hand, the performance of linear interpolation was generally lower than that of the bias method. One potential reason for this is the fact that we use a constant interpolation coefficient that is fixed in every context. Gu et al. (2016) have recently developed methods that use context information from the decoder to calculate a different interpolation coefficient for every decoding step, and it is possible that introducing these methods would improve our results.
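To make the contrast between the two integration strategies concrete, the following schematic sketch combines a lexicon distribution with the model's predictive distribution over the target vocabulary; it uses the NMT probabilities as a stand-in for the pre-softmax scores that the bias method actually modifies, so it illustrates the idea rather than the exact model:

```python
import numpy as np

def combine_bias(p_nmt, p_lex, eps=1e-6):
    """Schematic log-linear 'bias' combination: the logged lexicon
    probability is added to the model's scores before re-normalizing,
    so a near-zero lexicon probability acts as a strong penalty."""
    scores = np.log(p_nmt + eps) + np.log(p_lex + eps)
    z = np.exp(scores - scores.max())
    return z / z.sum()

def combine_linear(p_nmt, p_lex, lam=0.5):
    """Linear interpolation with a fixed coefficient lambda; a
    context-dependent lambda (Gu et al., 2016) would replace `lam`."""
    return lam * p_lex + (1.0 - lam) * p_nmt
```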
# 6 Additional Experiments
To test whether the proposed method is useful on larger data sets, we also performed follow-up experiments on the larger Japanese-English ASPEC dataset (Nakazawa et al., 2016), which consists of 2 million training examples, 63 million tokens, and an 81,000-word vocabulary. We gained an improvement in BLEU score from 20.82 using the attn baseline to 22.66 using the proposed auto-bias method. This experiment shows that our method scales to larger datasets.
# 7 Related Work
From the beginning of work on NMT, unknown words that do not exist in the system vocabulary have been focused on as a weakness of these systems. Early methods to handle these unknown words replaced them with appropriate words in the target vocabulary (Jean et al., 2015; Luong et al., 2015b) according to a lexicon similar to the one used in this work. In contrast to our work, these only handle unknown words and do not incorporate information from the lexicon in the learning procedure.
There have also been other approaches that incorporate models that learn when to copy words as-is into the target language (Allamanis et al., 2016; Gu
et al., 2016; Gülçehre et al., 2016). These models are similar to the linear approach of §3.2.2, but are only applicable to words that can be copied as-is into the target language. In fact, these models can be thought of as a subclass of the proposed approach that uses a lexicon assigning all of its probability to target words that are the same as the source. On the other hand, while we simply use a static interpolation coefficient λ, these works generally have a more sophisticated method for choosing the interpolation between the standard and "copy" models. Incorporating these into our linear method is a promising avenue for future work.
In addition, Mi et al. (2016) have also recently proposed a similar approach that limits the vocabulary predicted for each batch or sentence. This vocabulary is constructed from the original HMM alignments gathered from the training corpus. Essentially, this method is a specific version of our bias method that gives some of the vocabulary a bias of negative infinity and all other vocabulary a uniform distribution. Our method improves over this by considering actual translation probabilities, and also by considering the attention vector when deciding how to combine these probabilities.
Finally, there have been a number of recent works that improve the accuracy of low-frequency words using character-based translation models (Ling et al., 2015; Costa-Jussà and Fonollosa, 2016; Chung et al., 2016). However, Luong and Manning (2016) have found that even when using character-based models, incorporating information about words allows for gains in translation accuracy, and it is likely that our lexicon-based method could result in improvements in these hybrid systems as well.
# 8 Conclusion & Future Work
In this paper, we have proposed a method to incorporate discrete probabilistic lexicons into NMT systems to solve the difficulties that NMT systems have demonstrated with low-frequency words. As a result, we achieved substantial increases in BLEU (2.0-2.3) and NIST (0.13-0.44) scores, and observed qualitative improvements in the translations of content words.
For future work, we are interested in conducting experiments on larger-scale translation tasks. We also plan to perform subjective evaluation, as we expect that improvements in content word translation are critical to subjective impressions of translation results. Finally, we are also interested in improvements to the linear method where λ is calculated based on the context, instead of using a fixed value.
# Acknowledgment
We thank Makoto Morishita and Yusuke Oda for their help with this project. We also thank the faculty members of the AHC lab for their support and suggestions.
This work was supported by grants from the Ministry of Education, Culture, Sports, Science, and Technology of Japan and in part by JSPS KAKENHI Grant Number 16H05873.
# References
Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A convolutional attention network for extreme summarization of source code. In Proceedings of the 33rd International Conference on Machine Learning (ICML).

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR).

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, pages 1137–1155.

Arianna Bisazza, Nick Ruiz, and Marcello Federico. 2011. Fill-up versus interpolation methods for phrase-based SMT adaptation. In Proceedings of the 2011 International Workshop on Spoken Language Translation (IWSLT), pages 136–143.

Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, pages 263–311.

David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, pages 201–228.

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of the Workshop on Syntax and Structure in Statistical Translation (SSST), pages 103–111.

Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1693–1703.

Marta R. Costa-Jussà and José A. R. Fonollosa. 2016. Character-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 357–361.

George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, pages 138–145.

Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648.

Felix A. Gers, Jürgen A. Schmidhuber, and Fred A. Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural Computation, pages 2451–2471.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1631–1640.

Çağlar Gülçehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 140–149.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, pages 1735–1780.
Sébastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL) and the 7th International Joint Conference on Natural Language Processing, pages 1–10.

Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1700–1709.

Gen-ichiro Kikui, Eiichiro Sumita, Toshiyuki Takezawa, and Seiichi Yamamoto. 2003. Creating corpora for speech-to-speech translation. In 8th European Conference on Speech Communication and Technology, EUROSPEECH 2003 - INTERSPEECH 2003, pages 381–384.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR.

Dietrich Klakow. 1998. Log-linear interpolation of language models. In Proceedings of the 5th International Conference on Speech and Language Processing (ICSLP).

Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 48–54.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 177–180.

Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the 2006 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 104–111.

Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015. Character-based neural machine translation. CoRR.

Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1054–1063.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL) and the 7th International Joint Conference on Natural Language Processing, pages 11–19.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, pages 313–330.
Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 124–129.

Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. ASPEC: Asian scientific paper excerpt corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016), pages 2204–2208.

Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable Japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 529–533.

Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt.

Graham Neubig. 2013. Travatar: A forest-to-string machine translation engine based on tree transducers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 91–96.

Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, pages 19–51.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 86–96.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, pages 1929–1958.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS), pages 3104–3112. | {
"id": "1606.02006"
} |
1606.01781 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 |
# Very Deep Convolutional Networks for Text Classification
Alexis Conneau Facebook AI Research aconneau@fb.com
Holger Schwenk Facebook AI Research schwenk@fb.com
Yann Le Cun Facebook AI Research yann@fb.com
# Loïc Barrault LIUM, University of Le Mans, France loic.barrault@univ-lemans.fr
# Abstract
The dominant approach for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state-of-the-art in computer vision. We present a new architecture (VDCNN) for text processing which operates directly at the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with the depth: using up to 29 convolutional layers, we report improvements over the state-of-the-art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to text processing.
# 1 Introduction
The goal of natural language processing (NLP) is to process text with computers in order to analyze it, to extract information and eventually to represent the same information differently. We may want to associate categories to parts of the text (e.g. POS tagging or sentiment analysis), structure text differently (e.g. parsing), or convert it to some other form which preserves all or part of the content (e.g. machine translation, summarization). The level of granularity of this processing can range from individual characters to subword units (Sennrich et al., 2016) or words up to whole sentences or even paragraphs.
After a couple of pioneering works (Bengio et al. (2001), Collobert and Weston (2008), Collobert et al. (2011) among others), the use of neural networks for NLP applications is attracting huge interest in the research community and they are systematically applied to all NLP tasks. However, while the use of (deep) neural networks in NLP has shown very good results for many tasks, it seems that they have not yet reached the level needed to outperform the state-of-the-art by a large margin, as was observed in computer vision and speech recognition.

Convolutional neural networks, in short ConvNets, are very successful in computer vision. In early approaches to computer vision, handcrafted features were used, for instance "scale-invariant feature transform (SIFT)" (Lowe, 2004), followed by some classifier. The fundamental idea of ConvNets (LeCun et al., 1998) is to consider feature extraction and classification as one jointly trained task. This idea has been improved over the years, in particular by using many layers of convolutions and pooling to sequentially extract a hierarchical representation (Zeiler and Fergus, 2014) of the input. The best networks use more than 150 layers, as in (He et al., 2016a; He et al., 2016b).
Many NLP approaches consider words as basic units. An important step was the introduction of continuous representations of words (Bengio et al., 2003). These word embeddings are now the state-of-the-art in NLP. However, it is less clear how we should best represent a sequence of words, e.g. a whole sentence, which has complicated syntactic and semantic relations. In general, in the same sentence, we may be faced with local and long-range dependencies. Currently, the mainstream approach is to consider a sentence as a sequence of tokens (characters or words) and to process them with a recurrent neural network (RNN). Tokens are usually processed in sequential order, from left to right, and the RNN is expected to "memorize" the whole sequence in its internal states. The most popular and successful RNN variants are certainly LSTMs (Hochreiter and Schmidhuber, 1997).
Dataset   Label                  Sample
Yelp P.   +1                     Been going to Dr. Goldberg for over 10 years. I think I was one of his 1st patients when he started at MHMG. Hes been great over the years and is really all about the big picture. [...]
Amz P.    3(/5)                  I love this show, however, there are 14 episodes in the first season and this DVD only shows the first eight. [...]. I hope the BBC will release another DVD that contains all the episodes, but for now this one is still somewhat enjoyable.
Sogou     "Sports"               ju4 xi1n hua2 she4 5 yue4 3 ri4 , be3i ji1ng 2008 a4o yu4n hui4 huo3 ju4 jie1 li4 ji1ng guo4 shi4 jie4 wu3 da4 zho1u 21 ge4 che2ng shi4
Yah. A.   "Computer, Internet"   "What should I look for when buying a laptop? What is the best brand and what's reliable?","Weight and dimensions are important if you're planning to travel with the laptop. Get something with at least 512 mb of RAM. [..] is a good brand, and has an easy to use site where you can build a custom laptop."

Table 1: Examples of text samples and their labels.
There are many works which have shown the ability of LSTMs to model long-range dependencies in NLP applications, e.g. (Sundermeyer et al., 2012; Sutskever et al., 2014) to name just a few. However, we argue that LSTMs are generic learning machines for sequence processing which lack task-specific structure.
several sentence classification tasks, initially proposed by (Zhang et al., 2015). These tasks and our experimental results are detailed in section 4. The proposed deep convolutional network shows significantly better results than previous ConvNets approaches. The paper concludes with a discussion of future research directions for very deep approaches in NLP.
It is well known that a fully connected one-hidden-layer neural network can in principle learn any real-valued function, but much better results can be obtained with a deep problem-specific architecture which develops hierarchical representations. By these means, the search space is heavily constrained and efficient solutions can be learned with gradient descent. ConvNets are well adapted to computer vision because of the compositional structure of an image. Texts have similar properties: characters combine to form n-grams, stems, words, phrases, sentences etc.
We believe that a challenge in NLP is to develop deep architectures which are able to learn hierarchical representations of whole sentences, jointly with the task. In this paper, we propose to use deep architectures of many convolutional layers to approach this goal, using up to 29 layers. The design of our architecture is inspired by recent progress in computer vision, in particular (Simonyan and Zisserman, 2015; He et al., 2016a).
This paper is structured as follows. There have been previous attempts to use ConvNets for text processing. We summarize the previous work in the next section and discuss the relations and differences. Our architecture is described in detail in section 3. We have evaluated our approach on
# 2 Related work
There is a large body of research on sentiment analysis, or more generally on sentence classification tasks. Initial approaches followed the classical two-stage scheme of extraction of (handcrafted) features, followed by a classification stage. Typical features include bag-of-words or n-grams, and their TF-IDF. These techniques have been compared with ConvNets by (Zhang et al., 2015; Zhang and LeCun, 2015). We use the same corpora for our experiments. More recently, words or characters have been projected into a low-dimensional space, and these embeddings are combined to obtain a fixed-size representation of the input sentence, which then serves as input for the classifier. The simplest combination is the element-wise mean. This usually performs badly since all notion of token order is disregarded.
Another class of approaches are recursive neural networks. The main idea is to use an external tool, namely a parser, which specifies the order in which the word embeddings are combined. At each node, the left and right context are combined using weights which are shared for all nodes (Socher et al., 2011). The state of the top node is fed to the classifier.
A recurrent neural network (RNN) could be considered as a special case of a recursive NN: the combination is performed sequentially, usually from left to right. The last state of the RNN is used as the fixed-size representation of the sentence, or eventually a combination of all the hidden states.
The first works using convolutional neural networks for NLP appeared in (Collobert and Weston, 2008; Collobert et al., 2011). They have been subsequently applied to sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Zhang et al., 2015). We will discuss these techniques in more detail below. If not otherwise stated, all approaches operate on words which are projected into a high-dimensional space.
A rather shallow neural net was proposed in (Kim, 2014): one convolutional layer (using multiple widths and filters) followed by a max pooling layer over time. The final classifier uses one fully connected layer with drop-out. Results are reported on six data sets, in particular the Stanford Sentiment Treebank (SST). A similar system was proposed in (Kalchbrenner et al., 2014), but using five convolutional layers. An important difference is also the introduction of multiple temporal k-max pooling layers. These allow the detection of the k most important features in a sentence, independent of their specific position, preserving their relative order. The value of k depends on the length of the sentence and the position of this layer in the network. (Zhang et al., 2015) were the first to perform sentiment analysis entirely at the character level. Their systems use up to six convolutional layers, followed by three fully connected classification layers. Convolutional kernels of size 3 and 7 are used, as well as simple max-pooling layers. Another interesting aspect of this paper is the introduction of several large-scale data sets for text classification. We use the same experimental setting (see section 4.1). The use of character-level information was also proposed by (Dos Santos and Gatti, 2014): all the character embeddings of one word are combined by a max operation and they are then jointly used with the word embedding information in a shallow architecture. In parallel to our work, (Yang et al., 2016) proposed a hierarchical attention network for document classification that performs attention first on the sentences in the document, and then on the words in the sentence. Their architecture performs very well on datasets whose samples contain multiple sentences.
In the computer vision community, the combination of recurrent and convolutional networks in one architecture has also been investigated, with the goal to "get the best of both worlds", e.g. (Pinheiro and Collobert, 2014). The same idea was recently applied to sentence classification (Xiao and Cho, 2016). A convolutional network with up to five layers is used to learn high-level features which serve as input for an LSTM. The initial motivation of the authors was to obtain the same performance as (Zhang et al., 2015) with networks which have significantly fewer parameters. They report results very close to those of (Zhang et al., 2015) or even outperform ConvNets for some data sets.
In summary, we are not aware of any work that uses VGG-like or ResNet-like architectures to go deeper than six convolutional layers (Zhang et al., 2015) for sentence classification. Deeper networks were not tried or they were reported to not improve performance. This is in sharp contrast to the current trend in computer vision where significant improvements have been reported using much deeper networks (Krizhevsky et al., 2012), namely 19 layers (Simonyan and Zisserman, 2015), or even up to 152 layers (He et al., 2016a). In the remainder of this paper, we describe our very deep convolutional architecture and report results on the same corpora as (Zhang et al., 2015). We were able to show that performance improves with increased depth, using up to 29 convolutional layers.
# 3 VDCNN Architecture
The overall architecture of our network is shown in Figure 1. Our model begins with a look-up table that generates a 2D tensor of size (f0, s) that contains the embeddings of the s characters. s is fixed to 1024, and f0 can be seen as the "RGB" dimension of the input text.

We first apply one layer of 64 convolutions of size 3, followed by a stack of temporal "convolutional blocks". Inspired by the philosophy of VGG and ResNets we apply these two design rules: (i) for the same output temporal resolution, the layers have the same number of feature maps, (ii) when the temporal resolution is halved, the number of feature maps is doubled. This helps reduce the memory footprint of the network.
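A small PyTorch-style sketch of this input stage (the original implementation used Torch 7; the 69-token character vocabulary is taken from Section 4.2, and all other names are illustrative):

```python
import torch
import torch.nn as nn

# Character look-up table (f0 = 16) followed by the first temporal
# convolution with 64 filters of size 3, as described in the text.
embed = nn.Embedding(num_embeddings=69, embedding_dim=16)
first_conv = nn.Conv1d(in_channels=16, out_channels=64,
                       kernel_size=3, padding=1)

chars = torch.randint(0, 69, (8, 1024))   # batch of 8 texts, s = 1024
x = embed(chars).transpose(1, 2)          # (batch, f0, s) = (8, 16, 1024)
h = first_conv(x)                         # (8, 64, 1024)
```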
We ï¬rst apply one layer of 64 convolutions of size 3, followed by a stack of temporal âconvolu- tional blocksâ. Inspired by the philosophy of VGG and ResNets we apply these two design rules: (i) for the same output temporal resolution, the layers have the same number of feature maps, (ii) when the temporal resolution is halved, the number of feature maps is doubled. This helps reduce the memory footprint of the network. The networks contains 3 pooling operations (halving the tempo-
# fc(2048, nClasses)
# I
fc(2048, 2048), ReLU
fc(4096, 2048), ReLU output: 512 x k k-max pooling, k=8 Convolutional Block, 3, 512 optional shortcut Convolutional Block, 3, 512 output: 512 x s/8 pool/2 optional shortcut Convolutional Block, 3, 256 optional shortcut Convolutional Block, 3, 256 Convolutional Block, 3, 256 output: 256 x s/4 pool/2 optional shortcut Convolutional Block, 3, 128 optional shortcut Convolutional Block, 3, 128 output: 128 x s/2 pool/2 optional shortcut Convolutional Block, 3, 64 optional shortcut Convolutional Block, 3, 64 output: 64 x s 3, Temp Conv, 64 output: 16 x s Lookup table, 16 input : 1 x s
# Text
Figure 1: VDCNN architecture.
The network contains 3 pooling operations, each halving the temporal resolution by 2, resulting in 3 levels of 128, 256 and 512 feature maps (see Figure 1). The output of these convolutional blocks is a tensor of size 512 × sd, where sd = s/2^p with p = 3 the number of down-sampling operations. At this level of the convolutional network, the resulting tensor can be seen as a high-level representation of the input text. Since we deal with padded input text of fixed size, sd is constant. However, in the case of variable size input, the convolutional encoder provides a representation of the input text that depends on its initial length s. Representations of a text as a set of vectors of variable size can be valuable, namely for neural machine translation, in particular when combined with an attention model. In Figure 1, temporal convolutions with kernel size 3 and X feature maps are denoted "3, Temp Conv, X", fully connected layers which are linear projections (matrix of size I × O) are denoted "fc(I, O)", and "3-max pooling, stride 2" means temporal max-pooling with kernel size 3 and stride 2.
Most of the previous applications of ConvNets to NLP use an architecture which is rather shallow (up to 6 convolutional layers) and combines convolutions of different sizes, e.g. spanning 3, 5 and 7 tokens. This was motivated by the fact that convolutions extract n-gram features over tokens and that different n-gram lengths are needed to model short- and long-span relations. In this work, we propose instead to create an architecture which uses many layers of small convolutions (size 3). Stacking 4 layers of such convolutions results in a span of 9 tokens, but the network can learn by itself how to best combine these different "3-gram features" in a deep hierarchical manner. Our architecture can in fact be seen as a temporal adaptation of the VGG network (Simonyan and Zisserman, 2015). We have also investigated the same kind of "ResNet shortcut" connections as in (He et al., 2016a), namely identity and 1 × 1 convolutions (see Figure 1).
For the classification tasks in this work, the temporal resolution of the output of the convolution blocks is first down-sampled to a fixed dimension using k-max pooling. By these means, the network extracts the k most important features, independently of the position they appear in the sentence. The 512 × k resulting features are transformed into a single vector which is the input to a three-layer fully connected classifier with ReLU hidden units and softmax outputs.
[Figure 2 block structure, bottom to top: 3, Temp Conv, 256 → Temporal Batch Norm → ReLU → 3, Temp Conv, 256 → Temporal Batch Norm → ReLU]
Figure 2: Convolutional block.
The number of output neurons depends on the classification task, the number of hidden units is set to 2048, and k to 8 in all experiments. We do not use drop-out with the fully connected layers, but only temporal batch normalization after the convolutional layers to regularize our network.
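A sketch of the k-max pooling and the fully connected classifier just described, again in PyTorch for illustration; the batch size and the 4-class output are arbitrary example values:

```python
import torch
import torch.nn as nn

def k_max_pooling(x, k=8):
    """Keep the k largest activations per feature map, preserving their
    original temporal order (the last dimension is time)."""
    top_idx = x.topk(k, dim=-1).indices.sort(dim=-1).values
    return x.gather(-1, top_idx)

# Classifier from the text: 512*k -> 2048 -> 2048 -> number of classes.
classifier = nn.Sequential(
    nn.Linear(512 * 8, 2048), nn.ReLU(),
    nn.Linear(2048, 2048), nn.ReLU(),
    nn.Linear(2048, 4),                 # e.g. 4 classes for AG's news
)

features = torch.randn(8, 512, 128)     # (batch, channels, s_d)
pooled = k_max_pooling(features)        # (8, 512, 8)
logits = classifier(pooled.flatten(1))  # (8, 4)
```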
# Convolutional Block
Each convolutional block (see Figure 2) is a sequence of two convolutional layers, each one followed by a temporal BatchNorm (Ioffe and Szegedy, 2015) layer and a ReLU activation. The kernel size of all the temporal convolutions is 3, with padding such that the temporal resolution is preserved (or halved in the case of convolutional pooling with stride 2, see below). Steadily increasing the depth of the network by adding more convolutional layers is feasible thanks to the limited number of parameters of very small convolutional filters in all layers. Different depths of the overall architecture are obtained by varying the number of convolutional blocks in between the pooling layers (see Table 2). Temporal batch normalization applies the same kind of regularization as batch normalization except that the activations in a mini-batch are jointly normalized over temporal (instead of spatial) locations. So, for a mini-batch of size m and feature maps of temporal size s, the sums and standard deviations related to the BatchNorm algorithm are taken over |B| = m · s terms.
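The block translates almost directly into code; this PyTorch sketch is our own illustration, with `nn.BatchNorm1d` standing in for the temporal batch normalization described above:

```python
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two (conv3 -> temporal BatchNorm -> ReLU) stages; padding=1
    preserves the temporal resolution, while stride=2 on the first
    convolution halves it (the ResNet-like down-sampling option)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)
```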
We explore three types of down-sampling between blocks Ki and Ki+1 (Figure 1):
(i) The first convolutional layer of Ki+1 has stride 2 (ResNet-like).
(ii) Ki is followed by a k-max pooling layer where k is such that the resolution is halved (Kalchbrenner et al., 2014).
(iii) Ki is followed by max-pooling with kernel size 3 and stride 2 (VGG-like).
All these types of pooling reduce the temporal resolution by a factor of 2. At the final convolutional layer, the resolution is thus sd.
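The three variants can be sketched as follows, reusing the `ConvBlock` class from the previous sketch; the dispatch function and its names are illustrative assumptions:

```python
import torch.nn as nn

class KMaxHalve(nn.Module):
    """k-max pooling with k set to half the input length (variant ii),
    keeping the selected activations in their original temporal order."""
    def forward(self, x):
        k = x.size(-1) // 2
        idx = x.topk(k, dim=-1).indices.sort(dim=-1).values
        return x.gather(-1, idx)

def downsample(kind, channels):
    """Return the down-sampling used between blocks K_i and K_{i+1};
    every variant halves the temporal resolution."""
    if kind == "resnet":   # (i) first convolution of K_{i+1} has stride 2
        return ConvBlock(channels, 2 * channels, stride=2)
    if kind == "kmax":     # (ii) k-max pooling with k = length // 2
        return KMaxHalve()
    if kind == "vgg":      # (iii) max-pooling, kernel size 3, stride 2
        return nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
    raise ValueError(f"unknown down-sampling type: {kind}")
```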
Depth:              9     17    29    49
conv block 512      2     4     4     6
conv block 256      2     4     4     10
conv block 128      2     4     10    16
conv block 64       2     4     10    16
First conv. layer   1     1     1     1
#params [in M]      2.2   4.3   4.6   7.8

Table 2: Number of conv. layers per depth.

In this work, we have explored four depths for our networks: 9, 17, 29 and 49, which we define as being the number of convolutional layers. The depth of a network is obtained by summing the number of blocks with 64, 128, 256 and 512 filters, with each block containing two convolutional layers. In Figure 1, the network has 2 blocks of each type, resulting in a depth of 2 × (2 + 2 + 2 + 2) = 16. Adding the very first convolutional layer, this sums to a depth of 17 convolutional layers. The depth can thus be increased or decreased by adding or removing convolutional blocks with a certain number of filters. The best configurations we observed for depths 9, 17, 29 and 49 are described in Table 2. We also give the number of parameters of all convolutional layers.
# 4 Experimental evaluation
# 4.1 Tasks and data
In the computer vision community, the availability of large data sets for object detection and image classification has fueled the development of new architectures. In particular, this made it possible to compare many different architectures and to show the benefit of very deep convolutional networks. We present our results on eight freely available large-scale data sets introduced by (Zhang et al., 2015) which cover several classification tasks such as sentiment analysis, topic classification or news categorization (see Table 3). The number of training examples varies from 120k up to 3.6M, and the number of classes is comprised between 2 and 14. This is considerably lower than in computer vision (e.g. 1 000 classes for ImageNet).
Data set                 #Train    #Test   #Classes   Classification Task
AG's news                120k      7.6k    4          English news categorization
Sogou news               450k      60k     5          Chinese news categorization
DBPedia                  560k      70k     14         Ontology classification
Yelp Review Polarity     560k      38k     2          Sentiment analysis
Yelp Review Full         650k      50k     5          Sentiment analysis
Yahoo! Answers           1 400k    60k     10         Topic classification
Amazon Review Full       3 000k    650k    5          Sentiment analysis
Amazon Review Polarity   3 600k    400k    2          Sentiment analysis

Table 3: Large-scale text classification data sets used in our experiments. See (Zhang et al., 2015) for a detailed description.
This has the consequence that each example induces less gradient information, which may make it harder to train large architectures. It should also be noted that some of the tasks are very ambiguous, in particular sentiment analysis, for which it is difficult to clearly associate fine-grained labels. There are equal numbers of examples in each class for both training and test sets. The reader is referred to (Zhang et al., 2015) for more details on the construction of the data sets. Table 4 summarizes the best published results on these corpora we are aware of. We do not use "Thesaurus data augmentation" or any other preprocessing, except lower-casing. Nevertheless, we still outperform the best convolutional neural networks of (Zhang et al., 2015) for all data sets. The main goal of our work is to show that it is possible and beneficial to train very deep convolutional networks as text encoders. Data augmentation may improve our results even further. We will investigate this in future research.
# 4.2 Common model settings
The following settings have been used in all our experiments. They were found to be best in initial experiments. Following (Zhang et al., 2015), all processing is done at the character level, which is the atomic representation of a sentence, same as pixels for images. The dictionary consists of the following characters "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/|_#$%^&*~`+=<>()[]{}" plus a special padding, space and unknown token, which add up to a total of 69 tokens. The input text is padded to a fixed size of 1014; larger texts are truncated. The character embedding is of size 16. Training is performed with SGD, using a mini-batch of size 128, an initial learning rate of 0.01 and momentum of 0.9. We follow the same training procedure as in (Zhang et al., 2015). We initialize our convolutional layers following (He et al., 2015). One epoch took from 24 minutes to 2h45 for depth 9, and from 50 minutes to 7h (on the largest datasets) for depth 29. It took between 10 to 15 epochs to converge. The implementation is done using Torch 7. All experiments are performed on a single NVidia K40 GPU. Unlike previous research on the use of ConvNets for text processing, we use temporal batch norm without dropout.
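The character-level preprocessing can be sketched as below; the alphabet string is our reconstruction of the partially garbled one above, and the helper names are illustrative:

```python
import torch

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/|_#$%^&*~`+=<>()[]{}"
PAD, SPACE, UNK = 0, 1, 2          # three special tokens; 69 tokens in total
CHAR_TO_ID = {c: i + 3 for i, c in enumerate(ALPHABET)}

def encode(text, s=1014):
    """Lower-case the text, map characters to ids, then pad or truncate
    to the fixed input length s."""
    ids = [SPACE if c == " " else CHAR_TO_ID.get(c, UNK)
           for c in text.lower()[:s]]
    return torch.tensor(ids + [PAD] * (s - len(ids)))

# SGD with the stated hyper-parameters (model defined elsewhere):
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```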
# 4.3 Experimental results
In this section, we evaluate several configurations of our model, namely three different depths and three different pooling types (see Section 3). Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with small temporal convolution filters and different types of pooling, which shows that a significant improvement over the state-of-the-art configurations can be achieved on text classification tasks by pushing the depth to 29 convolutional layers.
Our deep architecture works well on big data sets in particular, even for small depths. Table 5 shows the test errors for depths 9, 17 and 29 and for each type of pooling: convolution with stride 2, k-max pooling and temporal max-pooling. For the smallest depth we use (9 convolutional layers), we see that our model already performs better than Zhang's convolutional baselines (which include 6 convolutional layers and have a different architecture) on the biggest data sets: Yelp Full, Yahoo Answers and Amazon Full and Polarity. The most important decrease in classification error can be observed on the largest data set, Amazon Full, which has more than 3 million training samples.
Corpus:   AG        Sogou     DBP.      Yelp P.   Yelp F.   Yah. A.    Amz. F.   Amz. P.
Method:   n-TFIDF   n-TFIDF   n-TFIDF   ngrams    Conv      Conv+RNN   Conv      Conv
Author:   [Zhang]   [Zhang]   [Zhang]   [Zhang]   [Zhang]   [Xiao]     [Zhang]   [Zhang]
Error:    7.64      2.81      1.31      4.36      37.95*    28.26      40.43*    4.93*
[Yang]:   -         -         -         -         -         24.2       -         36.4

Table 4: Best published results from previous work. Zhang et al. (2015) best results use a Thesaurus data augmentation technique (marked with an *). Yang et al. (2016)'s hierarchical method is particularly adapted to datasets whose samples contain multiple sentences.
Depth   Pooling       AG      Yelp F.
9       Convolution   10.17   37.63
9       KMaxPooling    9.83   38.04
9       MaxPooling     9.17   36.73
17      Convolution    9.29   36.10
17      KMaxPooling    9.39   37.41
17      MaxPooling     8.88   36.07
29      Convolution    9.36   35.28
29      KMaxPooling    8.67   37.00
29      MaxPooling     8.73   35.74

Table 5: Testing error of our models on the 8 data sets. No data preprocessing or augmentation is used.
We also observe that for a small depth, temporal max-pooling works best on all data sets.
Depth improves performance. As we increase the network depth to 17 and 29, the test errors decrease on all data sets, for all types of pooling (with 2 exceptions out of 48 comparisons). Going from depth 9 to 17 and 29 for Amazon Full reduces the error rate by 1% absolute. Since the test set is composed of 650K samples, 6.5K more test samples have been classified correctly. These improvements, especially on large data sets, are significant and show that increasing the depth is useful for text processing. Overall, compared to previous state-of-the-art, our best architecture with depth 29 and max-pooling has a test error of 37.0 compared to 40.43%. This represents a gain of 3.43% absolute accuracy. The significant improvements which we obtain on all data sets compared to Zhang's convolutional models do not include any data augmentation technique.
Max-pooling performs better than other pooling types. In terms of pooling, we can also see that max-pooling performs best overall, very close to convolutions with stride 2, but both are significantly superior to k-max pooling.
Both pooling mechanisms perform a max operation which is local and limited to three consecutive tokens, while k-max pooling considers the whole sentence at once. According to our experiments, it seems to hurt performance to perform this type of max operation at intermediate layers (with the exception of the smallest data sets).
Our models outperform state-of-the-art ConvNets. We obtain state-of-the-art results for all data sets, except AG's news and Sogou news, which are the smallest ones. However, with our very deep architecture, we get closer to the state-of-the-art, which is ngrams TF-IDF for these data sets, and significantly surpass the convolutional models presented in (Zhang et al., 2015). As observed in previous work, differences in accuracy between shallow (TF-IDF) and deep (convolutional) models are more significant on large data sets, but we still perform well on small data sets while getting closer to the non-convolutional state-of-the-art results on small data sets. The very deep models even perform as well as ngrams and ngrams-TF-IDF respectively on the sentiment analysis task of Yelp Review Polarity and the ontology classification task of the DBPedia data set. The results of Yang et al. (only on Yahoo Answers and Amazon Full) outperform our model on the Yahoo Answers dataset, which is probably linked to the fact that their model is task-specific to datasets whose samples contain multiple sentences like (question, answer). They use a hierarchical attention mechanism that applies very well to documents (with multiple sentences).
Going even deeper degrades accuracy. Shortcut connections help reduce the degradation. As described in (He et al., 2016a), the gain in accuracy due to the increase of the depth is limited when using standard ConvNets. When the depth increases too much, the accuracy of the model gets saturated and starts degrading rapidly. This degradation problem was attributed to the fact that very deep models are harder to optimize. The gradients which are backpropagated through the very deep networks vanish and SGD with momentum is not able to converge to a correct minimum of the loss function. To overcome this degradation of the model, the ResNet model introduced shortcut connections between convolutional blocks that allow the gradients to flow more easily in the network (He et al., 2016a).
We evaluate the impact of shortcut connections by increasing the number of convolutional layers to 49. We present an adaptation of the ResNet model to the case of temporal convolutions for text (see Figure 1). Table 6 shows the evolution of the test errors on the Yelp Review Full data set with and without shortcut connections. When looking at the column "without shortcut", we observe the same degradation problem as in the original ResNet article: when going from 29 to 49 layers, the test error rate increases from 35.28 to 37.41 (while the training error goes up from 29.57 to 35.54). When using shortcut connections, we observe improved results when the network has 49 layers: both the training and test errors go down and the network is less prone to underfitting than it was without shortcut connections.
While shortcut connections give better results when the network is very deep (49 layers), we were not able to reach state-of-the-art results with them. We plan to further explore adaptations of residual networks to temporal convolutions, as we think this is a milestone for going deeper in NLP. Residual units (He et al., 2016a) better adapted to the text processing task may help with training even deeper models for text processing, and this is left for future research.
Exploring these models on text classification tasks with more classes sounds promising. Note that one of the most important differences between the classification tasks discussed in this work and ImageNet is that the latter deals with 1000 classes, and thus much more information is back-propagated to the network through the gradients.
depth   without shortcut   with shortcut
9       37.63              40.27
17      36.10              39.18
29      35.28              36.01
49      37.41              36.15

Table 6: Test error on the Yelp Full data set for all depths, with or without residual connections.
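A sketch of the shortcut connection adapted to temporal convolutions, building on the `ConvBlock` sketch from Section 3; the 1 × 1 convolution on the shortcut path is the option mentioned there for adapting the number of feature maps:

```python
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Convolutional block with a shortcut connection: the input is added
    to the block's output, letting gradients bypass the block and easing
    optimization of very deep (e.g. 49-layer) networks."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = ConvBlock(in_ch, out_ch)
        # Identity shortcut when shapes match; 1x1 convolution otherwise.
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv1d(in_ch, out_ch, kernel_size=1))

    def forward(self, x):
        return self.block(x) + self.shortcut(x)
```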
Exploring the impact of the depth of temporal convolutional models on categorization tasks with hundreds or thousands of classes would be an interesting challenge and is left for future research.
# 5 Conclusion
We have presented a new architecture for NLP which follows two design principles: 1) operate at the lowest atomic representation of text, i.e. characters, and 2) use a deep stack of local operations, i.e. convolutions and max-pooling of size 3, to learn a high-level hierarchical representation of a sentence. This architecture has been evaluated on eight freely available large-scale data sets and we were able to show that increasing the depth up to 29 convolutional layers steadily improves performance. Our models are much deeper than previously published convolutional neural networks and they outperform those approaches on all data sets. To the best of our knowledge, this is the first time that the "benefit of depth" was shown for convolutional neural networks in NLP.
Even though text follows human-defined rules and images can be seen as raw signals of our environment, images and small texts have similar properties. Texts are also compositional for many languages. Characters combine to form n-grams, stems, words, phrases, sentences etc. These similar properties make the comparison between computer vision and natural language processing very profitable, and we believe future research should invest in making text processing models deeper. Our work is a first attempt towards this goal.
In this paper, we focus on the use of very deep convolutional neural networks for sentence classification tasks. Applying similar ideas to other sequence processing tasks, in particular neural machine translation, is left for future research. It needs to be investigated whether these also benefit from having deeper convolutional encoders.
# References
Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In NIPS, volume 13, pages 932–938, Vancouver, British Columbia, Canada.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155.

Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, pages 160–167, Helsinki, Finland.

Ronan Collobert, Jason Weston, Léon Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, pages 2493–2537.

Cícero Nogueira Dos Santos and Maira Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In COLING, pages 69–78, Dublin, Ireland.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, Santiago, Chile.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, Las Vegas, Nevada, USA.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645, Amsterdam, Netherlands. Springer.
1997. Long short-term memory. Neural computation, 9(8):1735â1780.
Sergey Ioffe and Christian Szegedy. 2015. Batch nor- malization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448â456, Lille, France.
Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics, pages 655â665, Baltimore, Mary- land, USA.
2014. Convolutional neural networks for sentence classiï¬cation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746â1751,
Doha, Qatar. Association for Computational Lin- guistics.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classiï¬cation with deep con- volutional neural networks. In Advances in neural information processing systems, pages 1097â1105, Lake Tahoe, California, USA.
Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324.
David G Lowe. 2004. Distinctive image features from International journal of scale-invariant keypoints. computer vision, 60(2):91â110.
Pedro HO Pinheiro and Ronan Collobert. 2014. Re- current convolutional neural networks for scene la- beling. In ICML, pages 82â90, Beijing, China.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. pages 1715â1725.
Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In ICLR, San Diego, California, USA.
Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predict- ing sentiment distributions. In Proceedings of the conference on empirical methods in natural lan- guage processing, pages 151â161, Edinburgh, UK. Association for Computational Linguistics.
Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language model- ing. In Interspeech, pages 194â197, Portland, Ore- gon, USA.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In NIPS, pages 3104â3112, Montreal, Canada.
2016. Efï¬cient character-level document classiï¬cation by combin- ing convolution and recurrent layers.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classiï¬cation. In Proceedings of NAACL-HLT, pages 1480â1489, San Diego, California, USA.
Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Eu- ropean conference on computer vision, pages 818â 833, Zurich, Switzerland. Springer.
Xiang Zhang and Yann LeCun. 2015. Text understand- ing from scratch. arXiv preprint arXiv:1502.01710.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- siï¬cation. In NIPS, pages 649â657, Montreal, Canada. | {
"id": "1502.01710"
} |
1606.01885 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 |
# Learning to Optimize
# Ke Li Jitendra Malik
Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720 United States {ke.li,malik}@eecs.berkeley.edu
# Abstract
Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the ï¬rst method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the ï¬nal objective value.
# Introduction
The current approach to designing algorithms is a laborious process. First, the designer must study the problem and devise an algorithm guided by a mixture of intuition, theoretical and/or empirical insight and general design paradigms. She then needs to analyze the algorithm's performance on prototypical examples and compare it to that of existing algorithms. If the algorithm falls short, she must uncover the underlying cause and find clever ways to overcome the discovered shortcomings. She iterates on this process until she arrives at an algorithm that is superior to existing algorithms. Given the often protracted nature of this process, a natural question to ask is: can we automate it?
In this paper, we focus on automating the design of unconstrained continuous optimization algorithms, which are some of the most powerful and ubiquitous tools used in all areas of science and engineering. Extensive work over the past several decades has yielded many popular methods, like gradient descent, momentum, conjugate gradient and L-BFGS. These algorithms share one commonality: they are all hand-engineered; that is, the steps of these algorithms are carefully designed by human experts. Just as deep learning has achieved tremendous success by automating feature engineering, automating algorithm design could open the way to similar performance gains.
We learn a better optimization algorithm by observing its execution. To this end, we formulate the problem as a reinforcement learning problem. Under this framework, any particular optimization algorithm simply corresponds to a policy. We reward optimization algorithms that converge quickly and penalize those that do not. Learning an optimization algorithm then reduces to ï¬nding an optimal policy, which can be solved using any reinforcement learning method. To differentiate the algorithm that performs learning from the algorithm that is learned, we will henceforth refer to the former as the âlearning algorithmâ or âlearnerâ and the latter as the âautonomous algorithmâ or âpolicyâ. We use an off-the-shelf reinforcement learning algorithm known as guided policy search [17], which has demonstrated success in a variety of robotic control settings [18, 10, 19, 12]. We show empirically that the autonomous optimization algorithm we learn converges faster and/or ï¬nds better optima than existing hand-engineered optimization algorithms.
# 2 Related Work
Early work has explored the general theme of speeding up learning with accumulation of learning experience. This line of work, known as âlearning to learnâ or âmeta-learningâ [1, 27, 5, 26], considers the problem of devising methods that can take advantage of knowledge learned on other related tasks to train faster, a problem that is today better known as multi-task learning and transfer learning. In contrast, the proposed method can learn to accelerate the training procedure itself, without necessarily requiring any training on related auxiliary tasks.
A different line of work, known as âprogramming by demonstrationâ [7], considers the problem of learning programs from examples of input and output. Several different approaches have been proposed: Liang et al. [20] represents programs explicitly using a formal language, constructs a hierarchical Bayesian prior over programs and performs inference using an MCMC sampling procedure and Graves et al. [11] represents programs implicitly as sequences of memory access operations and trains a recurrent neural net to learn the underlying patterns in the memory access operations. Subsequent work proposes variants of this model that use different primitive memory access operations [14], more expressive operations [16, 28] or other non-differentiable operations [30, 29]. Others consider building models that permit parallel execution [15] or training models with stronger supervision in the form of execution traces [23]. The aim of this line of work is to replicate the behaviour of simple existing algorithms from examples, rather than to learn a new algorithm that is better than existing algorithms.
There is a rich body of work on hyperparameter optimization, which studies the optimization of hyperparameters used to train a model, such as the learning rate, the momentum decay factor and regularization parameters. Most methods [13, 4, 24, 25, 9] rely on sequential model-based Bayesian optimization [22, 6], while others adopt a random search approach [3] or use gradient- based optimization [2, 8, 21]. Because each hyperparameter setting corresponds to a particular instantiation of an optimization algorithm, these methods can be viewed as a way to search over different instantiations of the same optimization algorithm. The proposed method, on the other hand, can search over the space of all possible optimization algorithms. In addition, when presented with a new objective function, hyperparameter optimization needs to conduct multiple trials with different hyperparameter settings to ï¬nd the optimal hyperparameters. In contrast, once training is complete, the autonomous algorithm knows how to choose hyperparameters on-the-ï¬y without needing to try different hyperparameter settings, even when presented with an objective function that it has not seen during training.
To the best of our knowledge, the proposed method represents the ï¬rst attempt to learn a better algorithm automatically.
# 3 Method
# 3.1 Preliminaries
In the reinforcement learning setting, the learner is given a choice of actions to take in each time step, which changes the state of the environment in an unknown fashion, and receives feedback based on the consequence of the action. The feedback is typically given in the form of a reward or cost, and the objective of the learner is to choose a sequence of actions based on observations of the current environment that maximizes cumulative reward or minimizes cumulative cost over all time steps.
A reinforcement learning problem is typically formally represented as a Markov decision process (MDP). We consider a finite-horizon MDP with continuous state and action spaces defined by the tuple (S, A, p0, p, c, γ), where S is the set of states, A is the set of actions, p0 : S → R+ is the probability density over initial states, p : S × A × S → R+ is the transition probability density, that is, the conditional probability density over successor states given the current state and action, c : S → R is a function that maps state to cost and γ ∈ (0, 1] is the discount factor. The objective is to learn a stochastic policy π* : S × A → R+, which is a conditional probability density over actions given the current state, such that the expected cumulative cost is minimized. That
is,

$$\pi^{*} = \operatorname*{argmin}_{\pi} \; \mathbb{E}_{s_0, a_0, s_1, \ldots, s_T}\!\left[ \sum_{t=0}^{T} \gamma^{t} c(s_t) \right],$$
where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density
$$p(s_0, a_0, s_1, \ldots, s_T) = p_0(s_0) \prod_{t=0}^{T-1} \pi(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t).$$
This problem of ï¬nding the cost-minimizing policy is known as the policy search problem. To enable generalization to unseen states, the policy is typically parameterized and minimization is performed over representable policies. Solving this problem exactly is intractable in all but selected special cases. Therefore, policy search methods generally tackle this problem by solving it approximately.
In many practical settings, p, which characterizes the dynamics, is unknown and must therefore be estimated. Additionally, because it is often equally important to minimize cost at earlier and later time steps, we will henceforth focus on the undiscounted setting, i.e. the setting where γ = 1.
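To make the objective concrete, here is a minimal sketch (helper names are ours, not the paper's) of estimating a policy's expected cumulative cost by Monte Carlo rollouts, which is the quantity policy search methods approximately minimize:

```python
import numpy as np

def estimate_expected_cost(policy, sample_s0, step, cost, T, n_rollouts=100):
    # Monte Carlo estimate of J(pi) = E[sum_{t=0}^T c(s_t)] in the
    # undiscounted setting (gamma = 1) by sampling trajectories:
    # s_0 ~ p_0, a_t ~ pi(.|s_t), s_{t+1} ~ p(.|s_t, a_t).
    totals = []
    for _ in range(n_rollouts):
        s = sample_s0()
        total = cost(s)
        for _ in range(T):
            a = policy(s)      # sample an action from the stochastic policy
            s = step(s, a)     # sample the next state from the dynamics
            total += cost(s)
        totals.append(total)
    return float(np.mean(totals))
```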
Guided policy search [17] is a method for performing policy search in continuous state and action spaces under possibly unknown dynamics. It works by alternating between computing a target distribution over trajectories that is encouraged to minimize cost and agree with the current policy and learning parameters of the policy in a standard supervised fashion so that sample trajectories from executing the policy are close to sample trajectories drawn from the target distribution. The target trajectory distribution is computed by iteratively ï¬tting local time-varying linear and quadratic approximations to the (estimated) dynamics and cost respectively and optimizing over a restricted class of linear-Gaussian policies subject to a trust region constraint, which can be solved efï¬ciently in closed form using a dynamic programming algorithm known as linear-quadratic-Gaussian (LQG). We refer interested readers to [17] for details.
# 3.2 Formulation
Consider the general structure of an algorithm for unconstrained continuous optimization, which is outlined in Algorithm 1. Starting from a random location in the domain of the objective function, the algorithm iteratively updates the current location by a step vector computed from some functional π of the objective function, the current location and past locations.
# Algorithm 1 General structure of optimization algorithms
# Require: Objective function f
x^(0) ← random point in the domain of f
for i = 1, 2, . . . do
    Δx ← π(f, {x^(0), . . . , x^(i−1)})
    if stopping condition is met then
        return x^(i−1)
    end if
    x^(i) ← x^(i−1) + Δx
end for
This framework subsumes all existing optimization algorithms. Different optimization algorithms differ in the choice of π. First-order methods use a π that depends only on the gradient of the objective function, whereas second-order methods use a π that depends on both the gradient and the Hessian of the objective function. In particular, the following choice of π yields the gradient descent method:

$$\pi(f, \{x^{(0)}, \ldots, x^{(i-1)}\}) = -\gamma \nabla f(x^{(i-1)}),$$

where γ denotes the step size or learning rate. Similarly, the following choice of π yields the gradient descent method with momentum:

$$\pi(f, \{x^{(0)}, \ldots, x^{(i-1)}\}) = -\gamma \left[ \sum_{j=0}^{i-1} \alpha^{i-1-j} \nabla f(x^{(j)}) \right],$$
where γ again denotes the step size and α denotes the momentum decay factor.
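A minimal Python sketch of Algorithm 1 with the two hand-engineered choices of π above (function names are ours):

```python
import numpy as np

def pi_gradient_descent(grads, gamma=0.1):
    # pi depends only on the latest gradient: -gamma * grad f(x^(i-1)).
    return -gamma * grads[-1]

def pi_momentum(grads, gamma=0.1, alpha=0.9):
    # Exponentially weighted sum of all past gradients.
    i = len(grads)
    return -gamma * sum(alpha ** (i - 1 - j) * g for j, g in enumerate(grads))

def optimize(f, grad_f, x0, pi, n_iters=100, tol=1e-8):
    # General structure of Algorithm 1 with a pluggable step functional pi.
    x = np.asarray(x0, dtype=float)
    grads = []
    for _ in range(n_iters):
        grads.append(grad_f(x))
        dx = pi(grads)
        if np.linalg.norm(dx) < tol:  # a simple stopping condition
            return x
        x = x + dx
    return x

# Example: minimize f(x) = ||x||^2 from a random starting point.
x_star = optimize(lambda x: x @ x, lambda x: 2 * x, np.random.randn(3), pi_momentum)
```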
Therefore, if we can learn π, we will be able to learn an optimization algorithm. Since it is difficult to model general functionals, in practice, we restrict the dependence of π on the objective function f to objective values and gradients evaluated at current and past locations. Hence, π can be simply modelled as a function from the objective values and gradients along the trajectory taken by the optimizer so far to the next step vector.

We observe that the execution of an optimization algorithm can be viewed as the execution of a fixed policy in an MDP: the state consists of the current location and the objective values and gradients evaluated at the current and past locations, the action is the step vector that is used to update the current location, and the transition probability is partially characterized by the location update formula, x^(i) ← x^(i−1) + Δx. The policy that is executed corresponds precisely to the choice of π used by the optimization algorithm. For this reason, we will also use π to denote the policy at hand. Under this formulation, searching over policies corresponds to searching over all possible first-order optimization algorithms.

We can use reinforcement learning to learn the policy π. To do so, we need to define the cost function, which should penalize policies that exhibit undesirable behaviours during their execution. Since the performance metric of interest for optimization algorithms is the speed of convergence, the cost function should penalize policies that converge slowly. To this end, assuming the goal is to minimize the objective function, we define the cost at a state to be the objective value at the current location. This encourages the policy to reach the minimum of the objective function as quickly as possible.

Since the policy π may be stochastic in general, we model each dimension of the action conditional on the state as an independent Gaussian whose mean is given by a regression model and variance is some learned constant. We choose to parameterize the mean of π using a neural net, due to its appealing properties as a universal function approximator and strong empirical performance in a variety of applications. We use guided policy search to learn the parameters of the policy.
We use a training set consisting of different randomly generated objective functions. We evaluate the resulting autonomous algorithm on different objective functions drawn from the same distribution.
# 3.3 Discussion
An autonomous optimization algorithm offers several advantages over hand-engineered algorithms. First, an autonomous optimizer is trained on real algorithm execution data, whereas hand-engineered optimizers are typically derived by analyzing objective functions with properties that may or may not be satisï¬ed by objective functions that arise in practice. Hence, an autonomous optimizer minimizes the amount of a priori assumptions made about objective functions and can instead take full advantage of the information about the actual objective functions of interest. Second, an autonomous optimizer has no hyperparameters that need to be tuned by the user. Instead of just computing a step direction which must then be combined with a user-speciï¬ed step size, an autonomous optimizer predicts the step direction and size jointly. This allows the autonomous optimizer to dynamically adjust the step size based on the information it has acquired about the objective function while performing the optimization. Finally, when an autonomous optimizer is trained on a particular class of objective functions, it may be able to discover hidden structure in the geometry of the class of objective functions. At test time, it can then exploit this knowledge to perform optimization faster.
# Implementation Details
We store the current location, previous gradients and improvements in the objective value from previous iterations in the state. We keep track of only the information pertaining to the previous H time steps and use H = 25 in our experiments. More specifically, the dimensions of the state space encode the following information:

• Current location in the domain
• Change in the objective value at the current location relative to the objective value at the i-th most recent location, for all i ∈ {2, . . . , H + 1}
• Gradient of the objective function evaluated at the i-th most recent location, for all i ∈ {2, . . . , H + 1}
Initially, we set the dimensions corresponding to historical information to zero. The current location is only used to compute the cost; because the policy should not depend on the absolute coordinates of the current location, we exclude it from the input that is fed into the neural net.
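A sketch of this state construction in NumPy (naming is ours; H = 25 as above):

```python
import numpy as np

H = 25  # number of past time steps kept in the state

def make_policy_input(objective_values, gradients):
    # objective_values[t] = f(x^(t)); gradients[t] = grad f(x^(t)) for steps so far.
    # Slots for history that does not exist yet stay zero, and the absolute
    # current location is deliberately excluded from the network input.
    d = gradients[-1].shape[0]
    deltas = np.zeros(H)
    grad_hist = np.zeros((H, d))
    for i in range(2, H + 2):          # i-th most recent location, i in {2,...,H+1}
        if i <= len(objective_values):
            deltas[i - 2] = objective_values[-1] - objective_values[-i]
            grad_hist[i - 2] = gradients[-i]
    return np.concatenate([deltas, grad_hist.ravel()])
```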
We use a small neural net to model the policy. Its architecture consists of a single hidden layer with 50 hidden units. Softplus activation units are used in the hidden layer and linear activation units are used in the output layer. The training objective imposed by guided policy search takes the form of the squared Mahalanobis distance between mean predicted and target actions, along with other terms dependent on the variance of the policy. We also regularize the entropy of the policy to encourage deterministic actions conditioned on the state. The coefficient on the regularizer increases gradually in later iterations of guided policy search. We initialize the weights of the neural net randomly and do not regularize the magnitude of weights.
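A plain-NumPy sketch of this policy network (the initialization scale is our assumption):

```python
import numpy as np

def init_policy(obs_dim, act_dim, hidden=50, rng=np.random.default_rng(0)):
    # One hidden layer of 50 softplus units, linear output layer.
    return {
        "W1": 0.1 * rng.standard_normal((hidden, obs_dim)), "b1": np.zeros(hidden),
        "W2": 0.1 * rng.standard_normal((act_dim, hidden)), "b2": np.zeros(act_dim),
    }

def policy_mean(params, obs):
    h = np.logaddexp(0.0, params["W1"] @ obs + params["b1"])  # softplus(z) = log(1 + e^z)
    return params["W2"] @ h + params["b2"]

def sample_action(params, obs, log_std, rng=np.random.default_rng()):
    # Each action dimension is an independent Gaussian with a learned constant variance.
    mean = policy_mean(params, obs)
    return mean + np.exp(log_std) * rng.standard_normal(mean.shape)
```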
Initially, we set the target trajectory distribution so that the mean action given state at each time step matches the step vector used by the gradient descent method with momentum. We choose the best settings of the step size and momentum decay factor for each objective function in the training set by performing a grid search over hyperparameters and running noiseless gradient descent with momentum for each hyperparameter setting.
For training, we sample 20 trajectories with a length of 40 time steps for each objective function in the training set. After each iteration of guided policy search, we sample new trajectories from the new distribution and discard the trajectories from the preceding iteration.
# 4 Experiments
We learn autonomous optimization algorithms for various convex and non-convex classes of objective functions that correspond to loss functions for different machine learning models. We first learn an autonomous optimizer for logistic regression, which induces a convex loss function. We then learn an autonomous optimizer for robust linear regression using the Geman-McClure M-estimator, whose loss function is non-convex. Finally, we learn an autonomous optimizer for a two-layer neural net classifier with ReLU activation units, whose error surface has even more complex geometry.
# 4.1 Logistic Regression
We consider a logistic regression model with an ℓ2 regularizer on the weight vector. Training the model requires optimizing the following objective:
$$\min_{w, b} \; -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log \sigma(w^T x_i + b) + (1 - y_i) \log\left(1 - \sigma(w^T x_i + b)\right) \right] + \frac{\lambda}{2} \|w\|_2^2,$$
where w ∈ R^d and b ∈ R denote the weight vector and bias respectively, x_i ∈ R^d and y_i ∈ {0, 1} denote the feature vector and label of the i-th instance, λ denotes the coefficient on the regularizer, and σ(z) := 1/(1 + e^{−z}). For our experiments, we choose λ = 0.0005 and d = 3. This objective is convex in w and b.
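For reference, a direct NumPy transcription of this objective (the helper name is ours):

```python
import numpy as np

def logistic_objective(w, b, X, y, lam=0.0005):
    # Mean regularized cross-entropy: uses log(sigma(z)) = -log(1 + e^{-z})
    # and log(1 - sigma(z)) = -log(1 + e^{z}) for numerical stability.
    z = X @ w + b
    nll = np.mean(y * np.logaddexp(0.0, -z) + (1 - y) * np.logaddexp(0.0, z))
    return nll + 0.5 * lam * np.dot(w, w)
```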
We train an autonomous algorithm that learns to optimize objectives of this form. The training set consists of examples of such objective functions whose free variables, which in this case are xi and yi, are all assigned concrete values. Hence, each objective function in the training set corresponds to a logistic regression problem on a different dataset.
To construct the training set, we randomly generate a dataset of 100 instances for each function in the training set. The instances are drawn randomly from two multivariate Gaussians with random means and covariances, with half drawn from each. Instances from the same Gaussian are assigned the same label and instances from different Gaussians are assigned different labels.
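A sketch of this data-generation procedure under our reading (the positive-definite covariance construction is our choice):

```python
import numpy as np

def make_logistic_dataset(n=100, d=3, rng=np.random.default_rng()):
    # Two multivariate Gaussians with random means and covariances; half of
    # the instances come from each, and the mixture component gives the label.
    X_parts, y_parts = [], []
    for label in (0, 1):
        mean = rng.standard_normal(d)
        A = rng.standard_normal((d, d))
        cov = A @ A.T + 0.1 * np.eye(d)    # random positive-definite covariance
        X_parts.append(rng.multivariate_normal(mean, cov, size=n // 2))
        y_parts.append(np.full(n // 2, label))
    return np.vstack(X_parts), np.concatenate(y_parts)
```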
We train the autonomous algorithm on a set of 90 objective functions. We evaluate it on a test set of 100 random objective functions generated using the same procedure and compare to popular hand-engineered algorithms, such as gradient descent, momentum, conjugate gradient and L-BFGS. All baselines are run with the best hyperparameter settings tuned on the training set.
For each algorithm and objective function in the test set, we compute the difference between the objective value achieved by a given algorithm and that achieved by the best of the competing algorithms at every iteration, a quantity we will refer to as "the margin of victory". This quantity is positive when the current algorithm is better than all other algorithms and negative otherwise. In Figure 1a, we plot the mean margin of victory of each algorithm at each iteration averaged over all objective functions in the test set. We find that conjugate gradient and L-BFGS diverge or oscillate in rare cases (on 6% of the objective functions in the test set), even though the autonomous algorithm, gradient descent and momentum do not. To reflect performance of these baselines in the majority of cases, we exclude the offending objective functions when computing the mean margin of victory.

Figure 1: (a) Mean margin of victory of each algorithm for optimizing the logistic regression loss. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.
As shown, the autonomous algorithm outperforms gradient descent, momentum and conjugate gradient at almost every iteration. The margin of victory of the autonomous algorithm is quite high in early iterations, indicating that the autonomous algorithm converges much faster than other algorithms. It is interesting to note that despite having seen only trajectories of length 40 at training time, the autonomous algorithm is able to generalize to much longer time horizons at test time. L-BFGS converges to slightly better optima than the autonomous algorithm and the momentum method. This is not surprising, as the objective functions are convex and L-BFGS is known to be a very good optimizer for convex optimization problems.
We show the performance of each algorithm on two objective functions from the test set in Figures 1b and 1c. In Figure 1b, the autonomous algorithm converges faster than all other algorithms. In Figure 1c, the autonomous algorithm initially converges faster than all other algorithms but is later overtaken by L-BFGS, while remaining faster than all other optimizers. However, it eventually achieves the same objective value as L-BFGS, while the objective values achieved by gradient descent and momentum remain much higher.
# 4.2 Robust Linear Regression
Next, we consider the problem of linear regression using a robust loss function. One way to ensure robustness is to use an M-estimator for parameter estimation. A popular choice is the Geman-McClure estimator, which induces the following objective:
$$\min_{w, b} \; \frac{1}{n} \sum_{i=1}^{n} \frac{(y_i - w^T x_i - b)^2}{c^2 + (y_i - w^T x_i - b)^2},$$
where w ∈ R^d and b ∈ R denote the weight vector and bias respectively, x_i ∈ R^d and y_i ∈ R denote the feature vector and label of the i-th instance and c ∈ R is a constant that modulates the shape of the loss function. For our experiments, we use c = 1 and d = 3. This loss function is not convex in either w or b.
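A NumPy transcription of this robust objective (the helper name is ours):

```python
import numpy as np

def geman_mcclure_objective(w, b, X, y, c=1.0):
    # Geman-McClure loss on the regression residuals: each term is bounded,
    # so large residuals (outliers) saturate instead of dominating the sum.
    r = y - X @ w - b
    return np.mean(r**2 / (c**2 + r**2))
```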
As with the preceding section, each objective function in the training set is a function of the above form with realized values for xi and yi. The dataset for each objective function is generated by drawing 25 random samples from each one of four multivariate Gaussians, each of which has a random mean and the identity covariance matrix. For all points drawn from the same Gaussian, their labels are generated by projecting them along the same random vector, adding the same randomly generated bias and perturbing them with i.i.d. Gaussian noise.
Figure 2: (a) Mean margin of victory of each algorithm for optimizing the robust linear regression loss. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.
The autonomous algorithm is trained on a set of 120 objective functions. We evaluate it on 100 randomly generated objective functions using the same metric as above. As shown in Figure 2a, the autonomous algorithm outperforms all hand-engineered algorithms except at early iterations. While it dominates gradient descent, conjugate gradient and L-BFGS at all times, it does not make progress as quickly as the momentum method initially. However, after around 30 iterations, it is able to close the gap and surpass the momentum method. On this optimization problem, both conjugate gradient and L-BFGS diverge quickly. Interestingly, unlike in the previous experiment, L-BFGS no longer performs well, which could be caused by non-convexity of the objective functions.
Figures 2b and 2c show performance on objective functions from the test set. In Figure 2b, the autonomous optimizer not only converges the fastest, but also reaches a better optimum than all other algorithms. In Figure 2c, the autonomous algorithm converges the fastest and is able to avoid most of the oscillations that hamper gradient descent and momentum after reaching the optimum.
# 4.3 Neural Net Classifier
Finally, we train an autonomous algorithm to train a small neural net classifier. We consider a two-layer neural net with ReLU activation on the hidden units and softmax activation on the output units. We use the cross-entropy loss combined with ℓ2 regularization on the weights. To train the model, we need to optimize the following objective:
$$\min_{W, b, U, c} \; -\frac{1}{n} \sum_{i=1}^{n} \log \frac{\exp\left( \left( U \max(W x_i + b, 0) + c \right)_{y_i} \right)}{\sum_{j} \exp\left( \left( U \max(W x_i + b, 0) + c \right)_{j} \right)} + \frac{\lambda}{2} \left( \|W\|_F^2 + \|U\|_F^2 \right),$$
where W ∈ R^{h×d}, b ∈ R^h, U ∈ R^{p×h}, c ∈ R^p denote the first-layer and second-layer weights and biases, x_i ∈ R^d and y_i ∈ {1, . . . , p} denote the input and target class label of the i-th instance, λ denotes the coefficient on the regularizers and (v)_j denotes the j-th component of v. For our experiments, we use λ = 0.0005 and d = h = p = 2. The error surface is known to have complex geometry and multiple local optima, making this a challenging optimization problem.
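The same objective in NumPy (names ours; a log-sum-exp is used for stability, and labels are 0-indexed here, unlike the 1-indexed y_i above):

```python
import numpy as np

def two_layer_net_objective(W, b, U, c, X, y, lam=0.0005):
    # Cross-entropy of a ReLU -> softmax two-layer net plus l2 penalties.
    logits = np.maximum(X @ W.T + b, 0.0) @ U.T + c                  # shape (n, p)
    log_probs = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)
    nll = -np.mean(log_probs[np.arange(len(y)), y])                  # pick true classes
    return nll + 0.5 * lam * (np.sum(W**2) + np.sum(U**2))
```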
The training set consists of 80 objective functions, each of which corresponds to the objective for training a neural net on a different dataset. Each dataset is generated by generating four multivariate Gaussians with random means and covariances and sampling 25 points from each. The points from the same Gaussian are assigned the same random label of either 0 or 1. We make sure not all of the points in the dataset are assigned the same label.
We evaluate the autonomous algorithm in the same manner as above. As shown in Figure 3a, the autonomous algorithm significantly outperforms all other algorithms. In particular, as evidenced by the sizeable and sustained gap between the margin of victory of the autonomous optimizer and the momentum method, the autonomous optimizer is able to reach much better optima and is less prone to getting trapped in local optima compared to other methods. This gap is also larger compared to that exhibited in previous sections, suggesting that hand-engineered algorithms are more sub-optimal on challenging optimization problems and so the potential for improvement from learning the algorithm is greater in such settings. Due to non-convexity, conjugate gradient and L-BFGS often diverge.

Figure 3: (a) Mean margin of victory of each algorithm for training neural net classifiers. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.
Performance on examples of objective functions from the test set is shown in Figures 3b and 3c. As shown, the autonomous optimizer is able to reach better optima than all other methods and largely avoids oscillations that other methods suffer from.
# 5 Conclusion
We presented a method for learning a better optimization algorithm. We formulated this as a reinforcement learning problem, in which any optimization algorithm can be represented as a policy. Learning an optimization algorithm then reduces to finding the optimal policy. We used guided policy search for this purpose and trained autonomous optimizers for different classes of convex and non-convex objective functions. We demonstrated that the autonomous optimizer converges faster and/or reaches better optima than hand-engineered optimizers. We hope autonomous optimizers learned using the proposed approach can be used to solve various common classes of optimization problems more quickly and help accelerate the pace of innovation in science and engineering.
# References
[1] Jonathan Baxter, Rich Caruana, Tom Mitchell, Lorien Y Pratt, Daniel L Silver, and Sebastian Thrun. NIPS 1995 workshop on learning to learn: Knowledge consolidation and transfer in inductive sys- tems. https://web.archive.org/web/20000618135816/http://www.cs.cmu.edu/afs/cs.cmu. edu/user/caruana/pub/transfer.html, 1995. Accessed: 2015-12-05.
[2] Yoshua Bengio. Gradient-based optimization of hyperparameters. Neural computation, 12(8):1889â1900, 2000.
[3] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281â305, 2012.
[4] James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546â2554, 2011.
[5] Pavel Brazdil, Christophe Giraud Carrier, Carlos Soares, and Ricardo Vilalta. Metalearning: applications to data mining. Springer Science & Business Media, 2008.
[6] Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[7] Allen Cypher and Daniel Conrad Halbert. Watch what I do: programming by demonstration. MIT press, 1993.
[8] Justin Domke. Generic methods for optimization-based modeling. In AISTATS, volume 22, pages 318â326, 2012.
[9] Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Initializing bayesian hyperparameter optimization via meta-learning. In AAAI, pages 1128â1135, 2015.
[10] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. arXiv preprint arXiv:1509.06113, 2015.
[11] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[12] Weiqiao Han, Sergey Levine, and Pieter Abbeel. Learning compound multi-step controllers under unknown dynamics. In International Conference on Intelligent Robots and Systems, 2015.
[13] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm conï¬guration. In Learning and Intelligent Optimization, pages 507â523. Springer, 2011.
[14] Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190â198, 2015.
[15] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
[16] Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.
[17] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071â1079, 2014.
[18] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[19] Sergey Levine, Nolan Wagener, and Pieter Abbeel. Learning contact-rich manipulation skills with guided policy search. arXiv preprint arXiv:1501.05611, 2015.
[20] Percy Liang, Michael I Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 639â646, 2010.
[21] Dougal Maclaurin, David Duvenaud, and Ryan P Adams. Gradient-based hyperparameter optimization through reversible learning. arXiv preprint arXiv:1502.03492, 2015.
[22] Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. The application of bayesian methods for seeking the extremum. Towards global optimization, 2(117-129):2, 1978.
[23] Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
[24] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pages 2951â2959, 2012.
[25] Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task bayesian optimization. In Advances in neural information processing systems, pages 2004â2012, 2013.
[26] Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
[27] Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artiï¬cial Intelligence Review, 18(2):77â95, 2002.
[28] Greg Yang. Lie access neural turing machine. arXiv preprint arXiv:1602.08671, 2016.
[29] Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
[30] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015.
| {
"id": "1505.00521"
} |
1606.01541 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 |
# Deep Reinforcement Learning for Dialogue Generation
Jiwei Li1, Will Monroe1, Alan Ritter2, Michel Galley3, Jianfeng Gao3 and Dan Jurafsky1 1Stanford University, Stanford, CA, USA 2Ohio State University, OH, USA 3Microsoft Research, Redmond, WA, USA {jiweil,wmonroe4,jurafsky}@stanford.edu, ritter.1492@osu.edu {mgalley,jfgao}@microsoft.com
# Abstract
Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be short- sighted, predicting utterances one at a time while ignoring their inï¬uence on future out- comes. Modeling the future direction of a di- alogue is crucial to generating coherent, inter- esting dialogues, a need which led traditional NLP models of dialogue to draw on reinforce- ment learning. In this paper, we show how to integrate these goals, applying deep reinforce- ment learning to model future reward in chat- bot dialogue. The model simulates dialogues between two virtual agents, using policy gradi- ent methods to reward sequences that display three useful conversational properties: infor- mativity, coherence, and ease of answering (re- lated to forward-looking function). We evalu- ate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conver- sation in dialogue simulation. This work marks a ï¬rst step towards learning a neural conversa- tional model based on the long-term success of dialogues.
context when mapping between consecutive dialogue turns (Sordoni et al., 2015) in a way not possible, for example, with MT-based dialogue models (Ritter et al., 2011).
Despite the success of SEQ2SEQ models in di- alogue generation, two problems emerge: First, SEQ2SEQ models are trained by predicting the next dialogue turn in a given conversational context using the maximum-likelihood estimation (MLE) objective function. However, it is not clear how well MLE approximates the real-world goal of chatbot develop- ment: teaching a machine to converse with humans, while providing interesting, diverse, and informative feedback that keeps users engaged. One concrete example is that SEQ2SEQ models tend to generate highly generic responses such as âI donât knowâ re- gardless of the input (Sordoni et al., 2015; Serban et al., 2016; Li et al., 2016a). This can be ascribed to the high frequency of generic responses found in the training set and their compatibility with a diverse range of conversational contexts. Yet âI donât knowâ is apparently not a good action to take, since it closes the conversation down.
# Introduction
Neural response generation (Sordoni et al., 2015; Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016a; Wen et al., 2015; Yao et al., 2015; Luan et al., 2016; Xu et al., 2016; Wen et al., 2016; Li et al., 2016b; Su et al., 2016) is of growing inter- est. The LSTM sequence-to-sequence (SEQ2SEQ) model (Sutskever et al., 2014) is one type of neural generation model that maximizes the probability of generating a response given the previous dialogue turn. This approach enables the incorporation of rich
Another common problem, illustrated in the two sample conversations on the left of Table 1, is that the system becomes stuck in an infinite loop of repetitive responses. This is due to MLE-based SEQ2SEQ models' inability to account for repetition. In example 2 (bottom left), the dialogue falls into an infinite loop after three turns, with both agents generating dull, generic utterances like i don't know what you are talking about and you don't know what you are saying. Looking at the entire conversation, utterance (4) turns out to be a bad action to take because it offers no way of continuing the conversation.1
1A similar rule is often suggested in improvisational comedy: https://en.wikipedia.org/wiki/Yes,_and...
Baseline mutual information model (Li et al. 2015):
A: Where are you going? (1)
B: I'm going to the restroom. (2)
A: See you later. (3)
B: See you later. (4)
A: See you later. (5)
B: See you later. (6)
...
A: how old are you? (1)
B: I'm 16. (2)
A: 16? (3)
B: I don't know what you are talking about. (4)
A: You don't know what you are saying. (5)
B: I don't know what you are talking about. (6)
A: You don't know what you are saying. (7)
...

Proposed reinforcement learning model:
A: Where are you going? (1)
B: I'm going to the police station. (2)
A: I'll come with you. (3)
B: No, no, no, no, you're not going anywhere. (4)
A: Why? (5)
B: I need you to stay here. (6)
A: I don't know what you are talking about. (7)
...
A: How old are you? (1)
B: I'm 16. Why are you asking? (2)
A: I thought you were 12. (3)
B: What made you think so? (4)
A: I don't know what you are talking about. (5)
B: You don't know what you are saying. (6)
...
Table 1: Left Column: Dialogue simulation between two agents using a 4-layer LSTM encoder-decoder trained on the OpenSubtitles dataset. The first turn (index 1) is input by the authors. Then the two agents take turns conversing, taking as input the other agent's prior generated turn. The output is generated using the mutual information model (Li et al., 2015) in which an N-best list is first obtained using beam search based on p(t|s) and reranked by linearly combining the backward probability p(s|t), where t and s respectively denote targets and sources. Right Column: Dialogue simulated using the proposed reinforcement learning model. The new model has more forward-looking utterances (questions like "Why are you asking?" and offers like "I'll come with you") and lasts longer before it falls into conversational black holes.
These challenges suggest we need a conversa- tion framework that has the ability to (1) integrate developer-deï¬ned rewards that better mimic the true goal of chatbot development and (2) model the long- term inï¬uence of a generated response in an ongoing dialogue.
To achieve these goals, we draw on the insights of reinforcement learning, which have been widely ap- plied in MDP and POMDP dialogue systems (see Re- lated Work section for details). We introduce a neu- ral reinforcement learning (RL) generation method, which can optimize long-term rewards designed by system developers. Our model uses the encoder- decoder architecture as its backbone, and simulates conversation between two virtual agents to explore the space of possible actions while learning to maxi- mize expected reward. We deï¬ne simple heuristic ap- proximations to rewards that characterize good con- versations: good conversations are forward-looking (Allwood et al., 1992) or interactive (a turn suggests a following turn), informative, and coherent. The pa- rameters of an encoder-decoder RNN deï¬ne a policy over an inï¬nite action space consisting of all possible
utterances. The agent learns a policy by optimizing the long-term developer-deï¬ned reward from ongo- ing dialogue simulations using policy gradient meth- ods (Williams, 1992), rather than the MLE objective deï¬ned in standard SEQ2SEQ models.
Our model thus integrates the power of SEQ2SEQ systems to learn compositional semantic meanings of utterances with the strengths of reinforcement learn- ing in optimizing for long-term goals across a conver- sation. Experimental results (sampled results at the right panel of Table 1) demonstrate that our approach fosters a more sustained dialogue and manages to produce more interactive responses than standard SEQ2SEQ models trained using the MLE objective.
# 2 Related Work
Efforts to build statistical dialog systems fall into two major categories.
The ï¬rst treats dialogue generation as a source- to-target transduction problem and learns mapping rules between input messages and responses from a massive amount of training data. Ritter et al. (2011) frames the response generation problem as a statisti-
cal machine translation (SMT) problem. Sordoni et al. (2015) improved Ritter et al.âs system by rescor- ing the outputs of a phrasal SMT-based conversation system with a neural model that incorporates prior context. Recent progress in SEQ2SEQ models inspire several efforts (Vinyals and Le, 2015) to build end- to-end conversational systems which ï¬rst apply an encoder to map a message to a distributed vector rep- resenting its semantics and generate a response from the message vector. Serban et al. (2016) propose a hierarchical neural model that captures dependen- cies over an extended conversation history. Li et al. (2016a) propose mutual information between mes- sage and response as an alternative objective function in order to reduce the proportion of generic responses produced by SEQ2SEQ systems.
The other line of statistical research focuses on building task-oriented dialogue systems to solve domain-speciï¬c tasks. Efforts include statistical models such as Markov Decision Processes (MDPs) (Levin et al., 1997; Levin et al., 2000; Walker et al., 2003; Pieraccini et al., 2009), POMDP (Young et al., 2010; Young et al., 2013; GaËsic et al., 2013a; GaËsic et al., 2014) models, and models that statisti- cally learn generation rules (Oh and Rudnicky, 2000; Ratnaparkhi, 2002; Banchs and Li, 2012; Nio et al., 2014). This dialogue literature thus widely applies reinforcement learning (Walker, 2000; Schatzmann et al., 2006; Gasic et al., 2013b; Singh et al., 1999; Singh et al., 2000; Singh et al., 2002) to train dialogue policies. But task-oriented RL dialogue systems of- ten rely on carefully limited dialogue parameters, or hand-built templates with state, action and reward sig- nals designed by humans for each new domain, mak- ing the paradigm difï¬cult to extend to open-domain scenarios.
Also relevant is prior work on reinforcement learn- ing for language understanding - including learning from delayed reward signals by playing text-based games (Narasimhan et al., 2015; He et al., 2016), executing instructions for Windows help (Branavan et al., 2011), or understanding dialogues that give navigation directions (Vogel and Jurafsky, 2010).
Our goal is to integrate the SEQ2SEQ and rein- forcement learning paradigms, drawing on the advan- tages of both. We are thus particularly inspired by recent work that attempts to merge these paradigms, including Wen et al. (2016)â training an end-to-end
task-oriented dialogue system that links input repre- sentations to slot-value pairs in a databaseâ or Su et al. (2016), who combine reinforcement learning with neural generation on tasks with real users, show- ing that reinforcement learning improves dialogue performance.
# 3 Reinforcement Learning for Open-Domain Dialogue
In this section, we describe in detail the components of the proposed RL model.
The learning system consists of two agents. We use p to denote sentences generated from the ï¬rst agent and q to denote sentences from the second. The two agents take turns talking with each other. A dialogue can be represented as an alternating se- quence of sentences generated by the two agents: p1, q1, p2, q2, ..., pi, qi. We view the generated sen- tences as actions that are taken according to a policy deï¬ned by an encoder-decoder recurrent neural net- work language model.
The parameters of the network are optimized to maximize the expected future reward using policy search, as described in Section 4.3. Policy gradi- ent methods are more appropriate for our scenario than Q-learning (Mnih et al., 2013), because we can initialize the encoder-decoder RNN using MLE pa- rameters that already produce plausible responses, before changing the objective and tuning towards a policy that maximizes long-term reward. Q-learning, on the other hand, directly estimates the future ex- pected reward of each action, which can differ from the MLE objective by orders of magnitude, thus mak- ing MLE parameters inappropriate for initialization. The components (states, actions, reward, etc.) of our sequential decision problem are summarized in the following sub-sections.
# 3.1 Action
An action a is the dialogue utterance to generate. The action space is inï¬nite since arbitrary-length se- quences can be generated.
# 3.2 State
A state is denoted by the previous two dialogue turns [pi, qi]. The dialogue history is further transformed to a vector representation by feeding the concatena- tion of pi and qi into an LSTM encoder model as
described in Li et al. (2016a).
# 3.3 Policy
A policy takes the form of an LSTM encoder-decoder (i.e., pRL(pi+1|pi, qi) ) and is deï¬ned by its param- eters. Note that we use a stochastic representation of the policy (a probability distribution over actions given states). A deterministic policy would result in a discontinuous objective that is difï¬cult to optimize using gradient-based methods.
# 3.4 Reward
r denotes the reward obtained for each action. In this subsection, we discuss major factors that contribute to the success of a dialogue and describe how approx- imations to these factors can be operationalized in computable reward functions.
Ease of answering A turn generated by a machine should be easy to respond to. This aspect of a turn is related to its forward-looking function: the constraints a turn places on the next turn (Schegloff and Sacks, 1973; Allwood et al., 1992). We propose to measure the ease of answering a generated turn by using the negative log likelihood of responding to that utterance with a dull response. We manually constructed a list of dull responses S consisting of 8 turns such as "I don't know what you are talking about", "I have no idea", etc., that we and others have found occur very frequently in SEQ2SEQ models of conversations. The reward function is given as follows:
$$r_1 = -\frac{1}{N_S} \sum_{s \in S} \frac{1}{N_s} \log p_{\text{seq2seq}}(s \mid a) \qquad (1)$$
where N_S denotes the cardinality of S and N_s denotes the number of tokens in the dull response s. Although of course there are more ways to generate dull responses than the list can cover, many of these responses are likely to fall into similar regions in the vector space computed by the model. A system less likely to generate utterances in the list is thus also less likely to generate other dull responses.
p_seq2seq represents the likelihood output by SEQ2SEQ models. It is worth noting that p_seq2seq is different from the stochastic policy function p_RL(p_{i+1}|p_i, q_i), since the former is learned based on the MLE objective of the SEQ2SEQ model while the latter is the policy optimized for long-term future
reward in the RL setting. r1 is further scaled by the length of target S.
Information Flow We want each agent to con- tribute new information at each turn to keep the di- alogue moving and avoid repetitive sequences. We therefore propose penalizing semantic similarity be- tween consecutive turns from the same agent. Let hpi and hpi+1 denote representations obtained from the encoder for two consecutive turns pi and pi+1. The reward is given by the negative log of the cosine similarity between them:
$$r_2 = -\log \cos(h_{p_i}, h_{p_{i+1}}) = -\log \frac{h_{p_i} \cdot h_{p_{i+1}}}{\|h_{p_i}\| \, \|h_{p_{i+1}}\|} \qquad (2)$$
Semantic Coherence We also need to measure the adequacy of responses to avoid situations in which the generated replies are highly rewarded but are un- grammatical or not coherent. We therefore consider the mutual information between the action a and pre- vious turns in the history to ensure the generated responses are coherent and appropriate:
$$r_3 = \frac{1}{N_a} \log p_{\text{seq2seq}}(a \mid q_i, p_i) + \frac{1}{N_{q_i}} \log p^{\text{backward}}_{\text{seq2seq}}(q_i \mid a) \qquad (3)$$

p_seq2seq(a|p_i, q_i) denotes the probability of generating response a given the previous dialogue utterances [p_i, q_i]. p^backward_seq2seq(q_i|a) denotes the backward probability of generating the previous dialogue utterance q_i based on response a. p^backward_seq2seq is trained in a similar way as standard SEQ2SEQ models with sources and targets swapped. Again, to control the influence of target length, both log p_seq2seq(a|q_i, p_i) and log p^backward_seq2seq(q_i|a) are scaled by the length of the targets. The final reward for action a is a weighted sum of
the rewards discussed above:
r(a, [pi, qi]) = λ1r1 + λ2r2 + λ3r3 (4)
where λ1 + λ2 + λ3 = 1. We set λ1 = 0.25, λ2 = 0.25 and λ3 = 0.5. A reward is observed after the agent reaches the end of each sentence.
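A sketch of the combined reward under our reading; `logp_seq2seq`, `logp_backward`, and `encode` are hypothetical helpers standing in for the trained models:

```python
import numpy as np

DULL = ["i don't know what you are talking about", "i have no idea"]  # partial list
LAMBDAS = (0.25, 0.25, 0.5)

def reward(a, p_i, q_i, logp_seq2seq, logp_backward, encode):
    # r1 (Eq. 1): penalize actions that make dull replies likely.
    r1 = -np.mean([logp_seq2seq(s, context=a) / len(s.split()) for s in DULL])
    # r2 (Eq. 2): penalize semantic overlap between the agent's consecutive turns.
    h1, h2 = encode(p_i), encode(a)
    cos = h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2))
    r2 = -np.log(max(cos, 1e-8))   # clipped: log of a non-positive cosine is undefined
    # r3 (Eq. 3): length-normalized mutual information between context and action.
    r3 = (logp_seq2seq(a, context=(q_i, p_i)) / len(a.split())
          + logp_backward(q_i, context=a) / len(q_i.split()))
    l1, l2, l3 = LAMBDAS
    return l1 * r1 + l2 * r2 + l3 * r3
```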
# 4 Simulation
The central idea behind our approach is to simulate the process of two virtual agents taking turns talking with each other, through which we can explore the
state-action space and learn a policy pRL(pi+1|pi, qi) that leads to the optimal expected reward. We adopt an AlphaGo-style strategy (Silver et al., 2016) by initializing the RL system using a general response generation policy which is learned from a fully su- pervised setting.
# 4.1 Supervised Learning
For the ï¬rst stage of training, we build on prior work of predicting a generated target sequence given dia- logue history using the supervised SEQ2SEQ model (Vinyals and Le, 2015). Results from supervised models will be later used for initialization.
We trained a SEQ2SEQ model with attention (Bahdanau et al., 2015) on the OpenSubtitles dataset, which consists of roughly 80 million source-target pairs. We treated each turn in the dataset as a target and the concatenation of two previous sentences as source inputs.
# 4.2 Mutual Information
Samples from SEQ2SEQ models are oftentimes dull and generic, e.g., "i don't know" (Li et al., 2016a). We thus do not want to initialize the policy model using the pre-trained SEQ2SEQ models because this will lead to a lack of diversity in the RL models' experiences. Li et al. (2016a) showed that modeling mutual information between sources and targets will significantly decrease the chance of generating dull responses and improve general response quality. We now show how we can obtain an encoder-decoder model which generates maximum mutual information responses.
As illustrated in Li et al. (2016a), direct decoding from Eq 3 is infeasible since the second term requires the target sentence to be completely generated. In- spired by recent work on sequence level learning (Ranzato et al., 2015), we treat the problem of gen- erating maximum mutual information response as a reinforcement learning problem in which a reward of mutual information value is observed when the model arrives at the end of a sequence.
Similar to Ranzato et al. (2015), we use policy gradient methods (Sutton et al., 1999; Williams, 1992) for optimization. We initialize the policy model p_RL using a pre-trained p_SEQ2SEQ(a | p_i, q_i) model. Given an input source [p_i, q_i], we generate a candidate list A = {â | â ∼ p_RL}. For each generated candidate â, we obtain the mutual information score m(â, [p_i, q_i]) from the pre-trained p_SEQ2SEQ(a | p_i, q_i) and p^backward_SEQ2SEQ(q_i | a). This mutual information score is used as a reward and back-propagated to the encoder-decoder model, tailoring it to generate sequences with higher rewards. We refer the readers to Zaremba and Sutskever (2015) and Williams (1992) for details. The expected reward for a sequence is given by:
J(θ) = E[m(â, [p_i, q_i])]   (5)
The gradient is estimated using the likelihood ratio trick:
∇J(θ) = m(â, [p_i, q_i]) ∇ log p_RL(â | [p_i, q_i])   (6)
We update the parameters in the encoder-decoder model using stochastic gradient descent. A curriculum learning strategy is adopted (Bengio et al., 2009), as in Ranzato et al. (2015), such that for every sequence of length T we use the MLE loss for the first L tokens and the reinforcement algorithm for the remaining T − L tokens. We gradually anneal the value of L to zero. A baseline strategy is employed to decrease the learning variance: an additional neural model takes as inputs the generated target and the initial source and outputs a baseline value, similar to the strategy adopted by Zaremba and Sutskever (2015). The final gradient is thus:
∇J(θ) = ∇ log p_RL(â | [p_i, q_i]) [m(â, [p_i, q_i]) − b]   (7)
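As a sketch of this estimator, assuming the policy's summed token log-probability is available as a differentiable tensor; the PyTorch framing and the names are ours, not the paper's.

```python
import torch

def reinforce_loss(seq_logprob: torch.Tensor, reward: float, baseline: float) -> torch.Tensor:
    """Surrogate loss whose gradient matches Eq. 7:
    -grad log p_RL(a_hat | [p_i, q_i]) * (m - b).
    seq_logprob: summed token log-probabilities of the sampled response,
    produced by the policy network with gradients enabled."""
    advantage = reward - baseline  # plain floats, constant w.r.t. the policy
    return -seq_logprob * advantage  # minimizing this ascends the expected reward
```

Calling backward() on this quantity yields the single-sample gradient of Eq. 7; averaging the loss over a batch of sampled sequences reduces its variance further.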
# 4.3 Dialogue Simulation between Two Agents
We simulate conversations between the two virtual agents and have them take turns talking with each other. The simulation proceeds as follows: at the initial step, a message from the training set is fed to the first agent. The agent encodes the input message to a vector representation and starts decoding to generate a response output. Combining the immediate output from the first agent with the dialogue history, the second agent updates the state by encoding the dialogue history into a representation and uses the decoder RNN to generate responses, which are subsequently fed back to the first agent, and the process is repeated.
Figure 1: Dialogue simulation between the two agents. (The figure depicts the encode-decode turn-taking process across n turns, starting from an input message such as "How old are you?" answered by "I'm 16, why are you asking?".)
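The turn-taking procedure can be sketched as a simple loop; the respond interface and the object names below are our own placeholders for the encode-decode machinery described above.

```python
def simulate_dialogue(agent_a, agent_b, opening_message, max_turns=5):
    """Two policies alternate turns, each conditioning on the running history."""
    history = [opening_message]
    agents = (agent_a, agent_b)
    for turn in range(max_turns):
        speaker = agents[turn % 2]
        reply = speaker.respond(history)  # encode the history, decode a response
        history.append(reply)
    return history
```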
Optimization We initialize the policy model p_RL with parameters from the mutual information model described in the previous subsection. We then use policy gradient methods to find parameters that lead to a larger expected reward. The objective to maximize is the expected future reward:
J_RL(θ) = E_{p_RL(a_{1:T})} [ Σ_{i=1}^{T} R(a_i, [p_i, q_i]) ]   (8)
where R(ai, [pi, qi]) denotes the reward resulting from action ai. We use the likelihood ratio trick (Williams, 1992; Glynn, 1990; Aleksandrov et al., 1968) for gradient updates:
∇J_RL(θ) ≈ Σ_{i=1}^{T} ∇ log p_RL(a_i | p_i, q_i) Σ_{i=1}^{T} R(a_i, [p_i, q_i])   (9)

We refer readers to Williams (1992) and Glynn (1990) for more details.
# 4.4 Curriculum Learning
A curriculum learning strategy is again employed, in which we begin by simulating the dialogue for 2 turns and gradually increase the number of simulated turns. We generate 5 turns at most, since the number of candidates to examine grows exponentially with the number of simulated turns. Five candidate responses are generated at each step of the simulation.
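To make the cost concrete, the sketch below counts the rollouts examined under these settings; the exhaustive-expansion assumption and the function name are ours.

```python
def num_rollouts(n_turns, n_candidates=5):
    """Partial dialogues examined if every simulated turn expands each
    dialogue into `n_candidates` continuations (exhaustive expansion)."""
    return n_candidates ** n_turns

for turns in range(2, 6):
    print(turns, num_rollouts(turns))  # 25, 125, 625, 3125: why 5 turns is the cap
```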
# 5 Experimental Results

In this section, we describe experimental results along with qualitative analysis. We evaluate dialogue generation systems using both human judgments and two automatic metrics: conversation length (the number of turns in the entire session) and diversity.

# 5.1 Dataset

The dialogue simulation requires high-quality initial inputs to feed to the agent. For example, an initial input of "why?" is undesirable, since it is unclear how the dialogue could proceed. We take a subset of 10 million messages from the OpenSubtitles dataset and extract 0.8 million sequences with the lowest likelihood of generating the response "i don't know what you are talking about", to ensure that the initial inputs are easy to respond to.

# 5.2 Automatic Evaluation

Evaluating dialogue systems is difficult. Metrics such as BLEU (Papineni et al., 2002) and perplexity have been widely used for dialogue quality evaluation (Li et al., 2016a; Vinyals and Le, 2015; Sordoni et al., 2015), but it is widely debated how well these automatic metrics correlate with true response quality (Liu et al., 2016; Galley et al., 2015). Since the goal of the proposed system is not to predict the highest-probability response, but rather the long-term success of the dialogue, we do not employ BLEU or perplexity for evaluation2.
2We found the RL model performs worse on BLEU score. On a random sample of 2,500 conversational pairs, single-reference BLEU scores for the RL model, the mutual information model and the vanilla SEQ2SEQ model are respectively 1.28, 1.44 and 1.17. BLEU is highly correlated with perplexity in generation tasks. Since the RL model is trained based on future reward rather than MLE, it is not surprising that the RL-based models achieve lower BLEU scores.
Model                 # of simulated turns
SEQ2SEQ               2.68
mutual information    3.40
RL                    4.48

Table 2: The average number of simulated turns from standard SEQ2SEQ models, the mutual information model and the proposed RL model.
Length of the dialogue The first metric we propose is the length of the simulated dialogue. We say a dialogue ends when one of the agents starts generating dull responses such as "i don't know"3 or when two consecutive utterances from the same user are highly overlapping4.
The test set consists of 1,000 input messages. To reduce the risk of circular dialogues, we limit the number of simulated turns to be fewer than 8. Results are shown in Table 2. As can be seen, using mutual information leads to more sustained conversations between the two agents. The proposed RL model is first trained with the mutual information objective and thus benefits from it in addition to the reinforcement objective. We observe that the RL model with dialogue simulation achieves the best evaluation score.
Diversity We report the degree of diversity by calculating the number of distinct unigrams and bigrams in generated responses. The value is scaled by the total number of generated tokens to avoid favoring long sentences, as described in Li et al. (2016a). The resulting metric is thus a type-token ratio for unigrams and bigrams.
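A sketch of this metric, assuming tokenized responses; the function name is ours.

```python
def distinct_n(responses, n):
    """Type-token ratio: distinct n-grams over total n-grams, pooled across
    all generated responses (scaling by total tokens controls for length)."""
    ngrams = [tuple(tokens[i:i + n]) for tokens in responses
              for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

# distinct_n([["i", "do", "not", "know"], ["i", "do", "too"]], 2) -> 4/5 = 0.8
```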
For both the standard SEQ2SEQ model and the proposed RL model, we use beam search with a beam size of 10 to generate a response to a given input message. For the mutual information model, we first generate n-best lists using p_SEQ2SEQ(t|s) and then linearly re-rank them using p_SEQ2SEQ(s|t). Results are presented in Table 4. We find that the proposed RL model generates more diverse outputs when compared against both the vanilla SEQ2SEQ model and the mutual information model.
3We use a simple rule-matching method, with a list of 8 phrases that count as dull responses. Although this can lead to both false positives and false negatives, it works well in practice.
4Two utterances are considered repetitive if they share more than 80 percent of their words.
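Combining the rules in footnotes 3 and 4, a minimal termination check might look as follows; the dull-phrase list is illustrative (the paper's full list of 8 phrases is not given), and the overlap denominator is our assumption.

```python
DULL_PHRASES = ["i don't know"]  # stand-in for the paper's list of 8 phrases

def dialogue_ended(turns):
    """Stop on a dull response, or when the same speaker's two most recent
    utterances share more than 80 percent of their words."""
    last = turns[-1].lower()
    if any(phrase in last for phrase in DULL_PHRASES):
        return True
    if len(turns) >= 3:  # turns[-3] is the same speaker's previous utterance
        prev, cur = set(turns[-3].lower().split()), set(last.split())
        if cur and len(prev & cur) / len(cur) > 0.8:
            return True
    return False
```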
Model                 Unigram   Bigram
SEQ2SEQ               0.0062    0.015
mutual information    0.011     0.031
RL                    0.017     0.041

Table 4: Diversity scores (type-token ratios) for the standard SEQ2SEQ model, the mutual information model and the proposed RL model.
Human Evaluation We explore three settings for human evaluation. The first setting is similar to what was described in Li et al. (2016a): we employ crowdsourced judges to evaluate a random sample of 500 items. We present both an input message and the generated outputs to 3 judges and ask them to decide which of the two outputs is better (denoted as single-turn general quality). Ties are permitted. Identical strings are assigned the same score. We measure the improvement achieved by the RL model over the mutual information model by the mean difference in scores between the models.
For the second setting, judges are again presented with input messages and system outputs, but are asked to decide which of the two outputs is easier to respond to (denoted as single-turn ease to answer). Again we evaluate a random sample of 500 items, each being assigned to 3 judges.
For the third setting, judges are presented with simulated conversations between the two agents (denoted as multi-turn general quality). Each conversation consists of 5 turns. We evaluate 200 simulated conversations, each being assigned to 3 judges, who are asked to decide which of the simulated conversations is of higher quality.
Setting                       RL-win   RL-lose   Tie
single-turn general quality   0.40     0.36      0.24
single-turn ease to answer    0.52     0.23      0.25
multi-turn general quality    0.72     0.12      0.16

Table 5: RL gains over the mutual information system based on pairwise human judgments.
Results for human evaluation are shown in Table 5. The proposed RL system does not introduce a significant boost in single-turn response quality (winning 40 percent of the time and losing 36 percent of the time).
Input Message
How old are you?
What's your full name?
I don't want to go home tonight.
Do you have any feelings for me?
How much time do you have here?
Not long enough. Sorry, sir.
Shall we get started?
Do you play football?
We'd have to talk to him.
How come you never say it?

Table 3: Sampled responses generated from the mutual information models and the proposed RL model.
This is in line with our expectations, as the RL model is not optimized to predict the next utterance, but rather to increase long-term reward. The RL system produces responses that are significantly easier to answer than the mutual information system does, as demonstrated by the single-turn ease to answer setting (winning 52 percent of the time and losing 23 percent of the time), and also significantly higher-quality multi-turn dialogues, as demonstrated by the multi-turn general quality setting (winning 72 percent of the time).
A: What's your name?
B: Daniel.
A: How old are you?
B: Twelve. What's your name?
A: Daniel.
B: How old are you?
A: Twelve. What's your name?
B: Daniel.
A: How old are you?
B: ...

Table 6: A simulated dialogue with a cycle longer than one.
Qualitative Analysis and Discussion We show a random sample of generated responses in Table 3 and simulated conversations in Table 1 at the beginning of the paper. From Table 3, we can see that the RL-based agent indeed generates more interactive responses than the other baselines. We also find that the RL model has a tendency to end a sentence with another question and hand the conversation over to the user. From Table 1, we observe that the RL model manages to produce more interactive and sustained conversations than the mutual information model.
During error analysis, we found that although we penalize repetitive utterances in consecutive turns, the dialogue sometimes enters a cycle of length greater than one, as shown in Table 6. This can be ascribed to the limited amount of conversational history we consider. Another issue observed is that the model sometimes starts a less relevant topic during the conversation. There is a tradeoff between relevance and repetitiveness, as manifested in the reward function we define in Eq. 4.

The fundamental problem, of course, is that the manually defined reward function cannot possibly cover all the crucial aspects that define an ideal conversation. While the heuristic rewards that we defined are amenable to automatic calculation, and do capture some aspects of what makes a good conversation, ideally the system would instead receive real rewards from humans. Another problem with the current model is that we can only afford to explore a very small number of candidates and simulated turns, since the number of cases to consider grows exponentially.

# 6 Conclusion
We introduce a reinforcement learning framework for neural response generation by simulating dialogues between two agents, integrating the strengths of neural SEQ2SEQ systems and reinforcement learning for dialogue. Like earlier neural SEQ2SEQ models, our framework captures the compositional models of the meaning of a dialogue turn and generates semantically appropriate responses. Like reinforcement learning dialogue systems, our framework is able to generate utterances that optimize future reward, successfully capturing global properties of a good conversation. Despite the fact that our model uses very simple, operationable heuristics for capturing these global properties, the framework generates more diverse, interactive responses that foster a more sustained conversation.
# Acknowledgement
We would like to thank Chris Brockett, Bill Dolan and other members of the NLP group at Microsoft Research for insightful comments and suggestions. We also want to thank Kelvin Guu, Percy Liang, Chris Manning, Sida Wang, Ziang Xie and other members of the Stanford NLP group for useful discussions. Jiwei Li is supported by a Facebook Fellowship, which we gratefully acknowledge. This work is partially supported by the NSF via Awards IIS-1514268 and IIS-1464128, and by the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF, DARPA, or Facebook.
# References
V. M. Aleksandrov, V. I. Sysoyev, and V. V. Shemeneva. 1968. Stochastic optimization. Engineering Cybernetics, 5:11–16.
Jens Allwood, Joakim Nivre, and Elisabeth Ahlsén. 1992. On the semantics and pragmatics of linguistic feedback. Journal of Semantics, 9:1–26.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR.
Rafael E Banchs and Haizhou Li. 2012. IRIS: a chat- oriented dialogue system based on the vector space model. In Proceedings of the ACL 2012 System Demon- strations, pages 37â42.
Yoshua Bengio, J´erËome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Pro- ceedings of the 26th annual international conference on machine learning, pages 41â48. ACM.
SRK Branavan, David Silver, and Regina Barzilay. 2011. Learning to win by reading manuals in a monte-carlo framework. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 268–277.

Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proc. of ACL-IJCNLP, pages 445–450, Beijing, China, July.
Milica GaËsic, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pir- ros Tsiakoulis, and Steve Young. 2013a. Pomdp-based
dialogue manager adaptation to extended domains. In Proceedings of SIGDIAL.
Milica Gasic, Catherine Breslin, Mike Henderson, Dongkyu Kim, Martin Szummer, Blaise Thomson, Pir- ros Tsiakoulis, and Steve Young. 2013b. On-line policy optimisation of bayesian spoken dialogue systems via human interaction. In Proceedings of ICASSP 2013, pages 8367â8371. IEEE.
Milica Gašic, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young. 2014. Incremental on-line adaptation of pomdp-based dialogue managers to extended domains. In Proceedings of InterSpeech.

Peter W Glynn. 1990. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84.
Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. 2016. Deep reinforcement learning with a natural language action space. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1621–1630, Berlin, Germany, August.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the Markov decision process framework. In Automatic Speech Recognition and Understanding, 1997 IEEE Workshop on, pages 72–79. IEEE.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interac- tion for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11â23.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proc. of NAACL-HLT.
Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994â1003, Berlin, Germany, August.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.

2016. LSTM based conversation models. arXiv preprint arXiv:1603.09457.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Mar- tin Riedmiller. 2013. Playing Atari with deep rein- forcement learning. NIPS Deep Learning Workshop.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941.
Lasguido Nio, Sakriani Sakti, Graham Neubig, Tomoki Toda, Mirna Adriani, and Satoshi Nakamura. 2014. Developing non-goal dialog system based on examples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355–361. Springer.

Alice H Oh and Alexander I Rudnicky. 2000. Stochastic language generation for spoken dialogue systems. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems-Volume 3, pages 27–32.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311â318.
Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. 2009. Are we there yet? Research in commercial spoken dialog systems. In Text, Speech and Dialogue, pages 3–13. Springer.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.
Adwait Ratnaparkhi. 2002. Trainable approaches to sur- face natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435â455.
Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of EMNLP 2011, pages 583â593.
Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simula- tion techniques for reinforcement-learning of dialogue management strategies. The knowledge engineering review, 21(02):97â126.
Emanuel A. Schegloff and Harvey Sacks. 1973. Opening up closings. Semiotica, 8(4):289â327.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierar- chical neural network models. In Proceedings of AAAI, February.
Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural In responding machine for short-text conversation. Proceedings of ACL-IJCNLP, pages 1577â1586.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrit- twieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489.
Satinder P Singh, Michael J Kearns, Diane J Litman, and Marilyn A Walker. 1999. Reinforcement learning for spoken dialogue systems. In NIPS, pages 956–962.

Satinder Singh, Michael Kearns, Diane J Litman, Marilyn A Walker, et al. 2000. Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pages 645–651.
Satinder Singh, Diane Litman, Michael Kearns, and Mari- lyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the nj- fun system. Journal of Artiï¬cial Intelligence Research, pages 105â133.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of NAACL-HLT.

Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Continuously learning neural dialogue management. arXiv.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104â3112.
Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. 1999. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057â1063.
Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. In Proceedings of ICML Deep Learning Workshop.
Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of ACL 2010, pages 806â814.
Marilyn A Walker, Rashmi Prasad, and Amanda Stent. 2003. A trainable generator for recommendations in multimodal dialog. In Proceeedings of INTERSPEECH 2003.
Marilyn A. Walker. 2000. An application of reinforce- ment learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artiï¬cial Intelli- gence Research, pages 387â416.
Tsung-Hsien Wen, Milica Gasic, Nikola MrkËsi´c, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semanti- cally conditioned LSTM-based natural language gener- ation for spoken dialogue systems. In Proceedings of EMNLP, pages 1711â1721, Lisbon, Portugal.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562.
Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256.
Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating loose-structured knowledge into LSTM with recall gate for conversation modeling. arXiv preprint arXiv:1605.05110.
Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conversation model. In NIPS workshop on Machine Learning for Spoken Language Understanding and Interaction.

Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for pomdp-based spoken dialogue management. Computer Speech & Language, 24(2):150–174.
Steve Young, Milica Gasic, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken di- alog systems: A review. Proceedings of the IEEE, 101(5):1160â1179.
Wojciech Zaremba and Ilya Sutskever. 2015. Reinforce- ment learning neural Turing machines. arXiv preprint arXiv:1505.00521. | {
"id": "1506.08941"
} |
1606.01540 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | arXiv:1606.01540v1 [cs.LG] 5 Jun 2016
# OpenAI Gym
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba OpenAI
# Abstract
OpenAI Gym1 is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.
# 1 Introduction
Reinforcement learning (RL) is the branch of machine learning that is concerned with making sequences of decisions. RL has a rich mathematical theory and has found a variety of practical applications [1]. Recent advances that combine deep learning with reinforcement learning have led to a great deal of excitement in the field, as it has become evident that general algorithms such as policy gradients and Q-learning can achieve good performance on difficult problems, without problem-specific engineering [2, 3, 4].
To build on recent progress in reinforcement learning, the research community needs good benchmarks on which to compare algorithms. A variety of benchmarks have been released, such as the Arcade Learning Environment (ALE) [5], which exposed a collection of Atari 2600 games as reinforcement learning problems, and recently the RLLab benchmark for continuous control [6], to which we refer the reader for a survey of other RL benchmarks, including [7, 8, 9, 10, 11]. OpenAI Gym aims to combine the best elements of these previous benchmark collections, in a software package that is maximally convenient and accessible. It includes a diverse collection of tasks (called environments) with a common interface, and this collection will grow over time. The environments are versioned in a way that will ensure that results remain meaningful and reproducible as the software is updated.
Alongside the software library, OpenAI Gym has a website (gym.openai.com) where one can find scoreboards for all of the environments, showcasing results submitted by users. Users are encouraged to provide links to source code and detailed instructions on how to reproduce their results.
# 2 Background
Reinforcement learning assumes that there is an agent that is situated in an environment. Each step, the agent takes an action, and it receives an observation and reward from the environment. An RL algorithm seeks to maximize some measure of the agent's total reward, as the agent interacts with the environment. In the RL literature, the environment is formalized as a partially observable Markov decision process (POMDP) [12]. OpenAI Gym focuses on the episodic setting of reinforcement learning, where the agent's experience is broken down into a series of episodes. In each episode, the agent's initial state is randomly sampled from a distribution, and the interaction proceeds until the environment reaches a terminal state. The goal in episodic reinforcement learning is to maximize the expectation of total reward per episode, and to achieve a high level of performance in as few episodes as possible.
The following code snippet shows a single episode with 100 timesteps. It assumes that there is an object called agent, which takes in the observation at each timestep, and an object called env, which is the
1gym.openai.com
environment. OpenAI Gym does not include an agent class or specify what interface the agent should use; we just include an agent here for demonstration purposes.
ob0 = env.reset()  # sample environment state, return first observation
a0 = agent.act(ob0)  # agent chooses first action
ob1, rew0, done0, info0 = env.step(a0)  # environment returns observation,
# reward, and boolean flag indicating if the episode is complete.
a1 = agent.act(ob1)
ob2, rew1, done1, info1 = env.step(a1)
...
a99 = agent.act(ob99)
ob100, rew99, done99, info99 = env.step(a99)  # done99 == True => terminal
# 3 Design Decisions
The design of OpenAI Gym is based on the authorsâ experience developing and comparing reinforcement learning algorithms, and our experience using previous benchmark collections. Below, we will summarize some of our design decisions.
Environments, not agents. Two core concepts are the agent and the environment. We have chosen to only provide an abstraction for the environment, not for the agent. This choice was made to maximize convenience for users and allow them to implement different styles of agent interface. First, one could imagine an "online learning" style, where the agent takes (observation, reward, done) as an input at each timestep and performs learning updates incrementally. In an alternative "batch update" style, an agent is called with observation as input, and the reward information is collected separately by the RL algorithm and later used to compute an update. By not specifying the agent interface, we allow users to write their agents with either of these styles.
Emphasize sample complexity, not just final performance. The performance of an RL algorithm on an environment can be measured along two axes: first, the final performance; second, the amount of time it takes to learn (the sample complexity). To be more specific, final performance refers to the average reward per episode after learning is complete. Learning time can be measured in multiple ways; one simple scheme is to count the number of episodes before a threshold level of average performance is exceeded. This threshold is chosen per-environment in an ad-hoc way, for example as 90% of the maximum performance achievable by a very heavily trained agent. Both final performance and sample complexity are very interesting; however, arbitrary amounts of computation can be used to boost final performance, making it a comparison of computational resources rather than algorithm quality.
Encourage peer review, not competition. The OpenAI Gym website allows users to compare the performance of their algorithms. One of its inspirations is Kaggle, which hosts a set of machine learning contests with leaderboards. However, the aim of the OpenAI Gym scoreboards is not to create a competition, but rather to stimulate the sharing of code and ideas, and to be a meaningful benchmark for assessing different methods. RL presents new challenges for benchmarking. In the supervised learning setting, performance is measured by prediction accuracy on a test set, where the correct outputs are hidden from contestants. In RL, it is less straightforward to measure generalization performance, except by running the users' code on a collection of unseen environments, which would be computationally expensive. Without a hidden test set, one must check that an algorithm did not "overfit" to the problems it was tested on (for example, through parameter tuning). We would like to encourage a peer review process for interpreting results submitted by users. Thus, OpenAI Gym asks users to create a Writeup describing their algorithm and the parameters used, and linking to code. Writeups should allow other users to reproduce the results. With the source code available, it is possible to make a nuanced judgement about whether the algorithm "overfit" to the task at hand.
Strict versioning for environments. If an environment changes, results before and after the change would be incomparable. To avoid this problem, we guarantee that any change to an environment will be accompanied by an increase in version number. For example, the initial version of the CartPole task is named CartPole-v0, and if its functionality changes, the name will be updated to CartPole-v1.
Figure 1: Images of some environments that are currently part of OpenAI Gym.
Monitoring by default. By default, environments are instrumented with a Monitor, which keeps track of every call to step (one step of simulation) and reset (sampling a new initial state). The Monitor's behavior is configurable, and it can record a video periodically. The data it collects is also sufficient to produce learning curves. The videos and learning-curve data can easily be posted to the OpenAI Gym website.
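A minimal random-agent rollout illustrating the interface; gym.make and the step/reset loop follow the API shown earlier, while the commented-out Monitor line reflects the wrapper style of later Gym releases and may differ from the beta-era API.

```python
import gym

env = gym.make("CartPole-v0")  # versioned environment name
# env = gym.wrappers.Monitor(env, "/tmp/cartpole-run")  # optional recording (later API)

ob = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()         # random policy, for demonstration
    ob, reward, done, info = env.step(action)  # the Monitor logs each step/reset
    total_reward += reward
env.close()
print("episode return:", total_reward)
```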
# 4 Environments
OpenAI Gym contains a collection of Environments (POMDPs), which will grow over time. See Figure 1 for examples. At the time of Gym's initial beta release, the following environments were included:

• Classic control and toy text: small-scale tasks from the RL literature.

• Algorithmic: perform computations such as adding multi-digit numbers and reversing sequences. Most of these tasks require memory, and their difficulty can be chosen by varying the sequence length.

• Atari: classic Atari games, with screen images or RAM as input, using the Arcade Learning Environment [5].

• Board games: currently, we have included the game of Go on 9x9 and 19x19 boards, where the Pachi engine [13] serves as an opponent.

• 2D and 3D robots: control a robot in simulation. These tasks use the MuJoCo physics engine, which was designed for fast and accurate robot simulation [14]. A few of the tasks are adapted from RLLab [6].
Since the initial release, more environments have been created, including ones based on the open source physics engine Box2D or the Doom game engine via VizDoom [15].
# 5 Future Directions
In the future, we hope to extend OpenAI Gym in several ways.
• Multi-agent setting. It will be interesting to eventually include tasks in which agents must collaborate or compete with other agents.

• Curriculum and transfer learning. Right now, the tasks are meant to be solved from scratch. Later, it will be more interesting to consider sequences of tasks, so that the algorithm is trained on one task after the other. Here, we will create sequences of increasingly difficult tasks, which are meant to be solved in order.

• Real-world operation. Eventually, we would like to integrate the Gym API with robotic hardware, validating reinforcement learning algorithms in the real world.
# References
[1] Dimitri P Bertsekas. Dynamic programming and optimal control. Athena Scientific, Belmont, MA, 1995.
[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[3] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, pages 1889â1897, 2015.
[4] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[5] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res., 47:253â279, 2013.
[6] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.
[7] A. Geramifard, C. Dann, R. H. Klein, W. Dabney, and J. P. How. RLPy: A value-function-based reinforcement learning framework for education and research. J. Mach. Learn. Res., 16:1573â1578, 2015.
[8] B. Tanner and A. White. RL-Glue: Language-independent software for reinforcement-learning experiments. J. Mach. Learn. Res., 10:2133â2136, 2009.
[9] T. Schaul, J. Bayer, D. Wierstra, Y. Sun, M. Felder, F. Sehnke, T. Rückstieß, and J. Schmidhuber. PyBrain. J. Mach. Learn. Res., 11:743–746, 2010.
[10] S. Abeyruwan. RLLib: Lightweight standard and on/off policy reinforcement learning library (C++). http://web.cs.miami.edu/home/saminda/rilib.html, 2013.
[11] Christos Dimitrakakis, Guangliang Li, and Nikoalos Tziortziotis. The reinforcement learning competition 2014. AI Magazine, 35(3):61â65, 2014.
[12] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[13] Petr BaudiËs and Jean-loup Gailly. Pachi: State of the art open source go program. In Advances in Computer Games, pages 24â38. Springer, 2011.
[14] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026â5033. IEEE, 2012.
[15] MichaŠKempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Ja´skowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
"id": "1602.01783"
} |
1606.01305 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | 7 1 0 2
p e S 2 2 ] E N . s c [
4 v 5 0 3 1 0 . 6 0 6 1 : v i X r a
Under review as a conference paper at ICLR 2017
# ZONEOUT: REGULARIZING RNNS BY RANDOMLY PRESERVING HIDDEN ACTIVATIONS
David Krueger*, Tegan Maharaj*, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio†, Aaron Courville‡, Christopher Pal
MILA, Université de Montréal (firstname.lastname@umontreal.ca) and École Polytechnique de Montréal (firstname.lastname@polymtl.ca)
* Equal contributions. † CIFAR Senior Fellow. ‡ CIFAR Fellow.
# ABSTRACT
We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization (Cooijmans et al., 2016) yields state-of-the-art results on permuted sequential MNIST.
# INTRODUCTION
Regularizing neural nets can significantly improve performance, as indicated by the widespread use of early stopping, and the success of regularization methods such as dropout and its recurrent variants (Hinton et al., 2012; Srivastava et al., 2014; Zaremba et al., 2014; Gal, 2015). In this paper, we address the issue of regularization in recurrent neural networks (RNNs) with a novel method called zoneout.

RNNs sequentially construct fixed-length representations of arbitrary-length sequences by folding new observations into their hidden state using an input-dependent transition operator. The repeated application of the same transition operator at the different time steps of the sequence, however, can make the dynamics of an RNN sensitive to minor perturbations in the hidden state; the transition dynamics can magnify components of these perturbations exponentially. Zoneout aims to improve RNNs' robustness to perturbations in the hidden state in order to regularize transition dynamics.

Like dropout, zoneout injects noise during training. But instead of setting some units' activations to 0 as in dropout, zoneout randomly replaces some units' activations with their activations from the previous timestep. As in dropout, we use the expectation of the random noise at test time. This results in a simple regularization approach which can be applied through time for any RNN architecture, and can be conceptually extended to any model whose state varies over time.

Compared with dropout, zoneout is appealing because it preserves information flow forwards and backwards through the network. This helps combat the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994), as we observe experimentally.

We also empirically evaluate zoneout on classification using the permuted sequential MNIST dataset, and on language modelling using the Penn Treebank and Text8 datasets, demonstrating competitive or state of the art performance across tasks. In particular, we show that zoneout performs competitively with other proposed regularization methods for RNNs, including recently-proposed dropout variants. Code for replicating all experiments can be found at: http://github.com/teganmaharaj/zoneout
2 RELATED WORK
2.1 RELATIONSHIP TO DROPOUT
Zoneout can be seen as a selective application of dropout to some of the nodes in a modified computational graph, as shown in Figure 1. In zoneout, instead of dropping out (being set to 0), units zone out and are set to their previous value (h_t = h_{t−1}). Zoneout, like dropout, can be viewed as a way to train a pseudo-ensemble (Bachman et al., 2014), injecting noise using a stochastic "identity-mask" rather than a zero-mask. We conjecture that identity-masking is more appropriate for RNNs, since it makes it easier for the network to preserve information from previous timesteps going forward, and facilitates, rather than hinders, the flow of gradient information going backward, as we demonstrate experimentally.
Figure 1: Zoneout as a special case of dropout; h̃_t is the unit h's hidden activation for the next time step (if not zoned out). Zoneout can be seen as applying dropout on the hidden state delta, h̃_t − h_{t−1}. When this update is dropped out (represented by the dashed line), h_t becomes h_{t−1}.
2.2 DROPOUT IN RNNS
Initially successful applications of dropout in RNNs (Pham et al., 2013; Zaremba et al., 2014) only applied dropout to feed-forward connections ("up the stack"), and not recurrent connections ("forward through time"), but several recent works (Semeniuta et al., 2016; Moon et al., 2015; Gal, 2015) propose methods that are not limited in this way. Bayer et al. (2013) successfully apply fast dropout (Wang & Manning, 2013), a deterministic approximation of dropout, to RNNs.

Semeniuta et al. (2016) apply recurrent dropout to the updates to LSTM memory cells (or GRU states), i.e. they drop out the input/update gate in LSTM/GRU. Like zoneout, their approach prevents the loss of long-term memories built up in the states/cells of GRUs/LSTMs, but zoneout does this by preserving units' activations exactly. This difference is most salient when zoning out the hidden states (not the memory cells) of an LSTM, for which there is no analogue in recurrent dropout. Whereas saturated output gates or output nonlinearities would cause recurrent dropout to suffer from vanishing gradients (Bengio et al., 1994), zoned-out units still propagate gradients effectively in this situation. Furthermore, while the recurrent dropout method is specific to LSTMs and GRUs, zoneout generalizes to any model that sequentially builds distributed representations of its input, including vanilla RNNs.

Also motivated by preventing memory loss, Moon et al. (2015) propose rnnDrop. This technique amounts to using the same dropout mask at every timestep, which the authors show results in improved performance on speech recognition in their experiments. Semeniuta et al. (2016) show, however, that past states' influence vanishes exponentially as a function of dropout probability when taking the expectation at test time in rnnDrop; this is problematic for tasks involving longer-term dependencies.
Gal (2015) propose another technique which uses the same mask at each timestep. Motivated by variational inference, they drop out the rows of weight matrices in the input and output embeddings and LSTM gates, instead of dropping unitsâ activations. The proposed variational RNN technique achieves single-model state-of-the-art test perplexity of 73.4 on word-level language modelling of Penn Treebank.
2.3 RELATIONSHIP TO STOCHASTIC DEPTH
Zoneout can also be viewed as a per-unit version of stochastic depth (Huang et al., 2016), which randomly drops entire layers of feed-forward residual networks (ResNets (He et al., 2015)). This is
equivalent to zoning out all of the units of a layer at the same time. In a typical RNN, there is a new input at each timestep, causing issues for a naive implementation of stochastic depth. Zoning out an entire layer in an RNN means the input at the corresponding timestep is completely ignored, whereas zoning out individual units allows the RNN to take each element of its input sequence into account. We also found that using residual connections in recurrent nets led to instability, presumably due to the parameter sharing in RNNs. Concurrent with our work, Singh et al. (2016) propose zoneout for ResNets, calling it SkipForward. In their experiments, zoneout is outperformed by stochastic depth, dropout, and their proposed Swapout technique, which randomly drops either or both of the identity or residual connections. Unlike Singh et al. (2016), we apply zoneout to RNNs, and find it outperforms stochastic depth and recurrent dropout.
2.4 SELECTIVELY UPDATING HIDDEN UNITS
Like zoneout, clockwork RNNs (Koutnik et al., 2014) and hierarchical RNNs (Hihi & Bengio, 1996) update only some units' activations at every timestep, but their updates are periodic, whereas zoneout's are stochastic. Inspired by clockwork RNNs, we experimented with zoneout variants that target different update rates or schedules for different units, but did not find any performance benefit. Hierarchical multiscale LSTMs (Chung et al., 2016) learn update probabilities for different units using the straight-through estimator (Bengio et al., 2013; Courbariaux et al., 2015), and combined with recently-proposed Layer Normalization (Ba et al., 2016), achieve competitive results on a variety of tasks. As the authors note, their method can be interpreted as an input-dependent form of adaptive zoneout.
In recent work, Ha et al. (2016) use a hypernetwork to dynamically rescale the row-weights of a primary LSTM network, achieving state-of-the-art 1.21 BPC on character-level Penn Treebank when combined with layer normalization (Ba et al., 2016) in a two-layer network. This scaling can be viewed as an adaptive, differentiable version of the variational LSTM (Gal, 2015), and could similarly be used to create an adaptive, differentiable version of zoneout. Very recent work conditions zoneout probabilities on surprisal (a measure of the discrepancy between the predicted and actual state), and sets a new state of the art on enwik8 (Rocki et al., 2016).
# 3 ZONEOUT AND PRELIMINARIES
We now explain zoneout in full detail, and compare with other forms of dropout in RNNs. We start by reviewing recurrent neural networks (RNNs).
3.1 RECURRENT NEURAL NETWORKS
Recurrent neural networks process data x_1, x_2, ..., x_T sequentially, constructing a corresponding sequence of representations, h_1, h_2, ..., h_T. Each hidden state is trained (implicitly) to remember and emphasize all task-relevant aspects of the preceding inputs, and to incorporate new inputs via a transition operator, T, which converts the present hidden state and input into a new hidden state: h_t = T(h_{t−1}, x_t). Zoneout modifies these dynamics by mixing the original transition operator T̃ with the identity operator (as opposed to the null operator used in dropout), according to a vector of Bernoulli masks, d_t:

Zoneout: T = d_t ⊙ T̃ + (1 − d_t) ⊙ 1
Dropout: T = d_t ⊙ T̃ + (1 − d_t) ⊙ 0
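A minimal sketch of this operator applied to a hidden-state vector, assuming NumPy; at test time the expectation of the mask is used, mirroring dropout.

```python
import numpy as np

def zoneout_step(h_prev, h_tilde, z, training=True):
    """Mix the proposed update h_tilde with the identity: with probability z,
    a unit zones out and keeps its previous value."""
    if training:
        d = np.random.binomial(1, z, size=h_prev.shape)
        return d * h_prev + (1 - d) * h_tilde
    return z * h_prev + (1 - z) * h_tilde  # expectation of the mask at test time
```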
3.2 LONG SHORT-TERM MEMORY
In long short-term memory RNNs (LSTMs) (Hochreiter & Schmidhuber, 1997), the hidden state is divided into memory cell c_t, intended for internal long-term storage, and hidden state h_t, used as a transient representation of state at timestep t. In the most widely used formulation of an LSTM (Gers et al., 2000), c_t and h_t are computed via a set of four "gates", including the forget gate, f_t, which directly connects c_t to the memories of the previous timestep c_{t−1}, via an element-wise multiplication. Large values of the forget gate cause the cell to remember most (not all) of its previous value. The other gates control the flow of information in (i_t, g_t) and out (o_t) of the cell. Each gate has a weight matrix and bias vector; for example the forget gate has W_{xf}, W_{hf}, and b_f. For brevity, we will write these as W_x, W_h, b.
An LSTM is defined as follows:

i_t, f_t, o_t = σ(W_x x_t + W_h h_{t−1} + b)
g_t = tanh(W_{xg} x_t + W_{hg} h_{t−1} + b_g)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t
h_t = o_t ⊙ tanh(c_t)
A naive application of dropout in LSTMs would zero-mask either or both of the memory cells and hidden states, without changing the computation of the gates (i, f, o, g). Dropping memory cells, for example, changes the computation of ct as follows:
c_t = d_t ⊙ (f_t ⊙ c_{t−1} + i_t ⊙ g_t)
Alternatives abound, however; masks can be applied to any subset of the gates, cells, and states. Semeniuta et al. (2016), for instance, zero-mask the input gate:
c_t = f_t ⊙ c_{t−1} + d_t ⊙ i_t ⊙ g_t
When the input gate is masked like this, there is no additive contribution from the input or hidden state, and the value of the memory cell simply decays according to the forget gate.
Figure 2: (a) Zoneout, vs. (b) the recurrent dropout strategy of (Semeniuta et al., 2016) in an LSTM. Dashed lines are zero-masked; in zoneout, the corresponding dotted lines are masked with the corresponding opposite zero-mask. Rectangular nodes are embedding layers.
In zoneout, the values of the hidden state and memory cell randomly either maintain their previous value or are updated as usual. This introduces stochastic identity connections between subsequent time steps:
c_t = d_t^c ⊙ c_{t−1} + (1 − d_t^c) ⊙ (f_t ⊙ c_{t−1} + i_t ⊙ g_t)
h_t = d_t^h ⊙ h_{t−1} + (1 − d_t^h) ⊙ (o_t ⊙ tanh(f_t ⊙ c_{t−1} + i_t ⊙ g_t))
We usually use different zoneout masks for cells and hiddens. We also experiment with a variant of recurrent dropout that reuses the input dropout mask to zoneout the corresponding output gates:
c_t = f_t ⊙ c_{t−1} + d_t ⊙ i_t ⊙ g_t
h_t = ((1 − d_t) ⊙ o_t + d_t ⊙ o_{t−1}) ⊙ tanh(c_t)
The motivation for this variant is to prevent the network from being forced (by the output gate) to expose a memory cell which has not been updated, and hence may contain misleading information.
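A single zoneout LSTM step might look as follows; the packed-gate parameter layout and the function signature are conventions of this sketch, not of the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def zoneout_lstm_step(x, h_prev, c_prev, Wx, Wh, b, zc=0.5, zh=0.05, training=True):
    """One LSTM step with separate zoneout masks on cell and hidden state.
    Wx: (input_dim, 4n), Wh: (n, 4n), b: (4n,) hold the i, f, o, g gate
    parameters packed side by side; n is the number of hidden units."""
    n = h_prev.shape[-1]
    pre = x @ Wx + h_prev @ Wh + b
    i, f, o = sigmoid(pre[:n]), sigmoid(pre[n:2 * n]), sigmoid(pre[2 * n:3 * n])
    g = np.tanh(pre[3 * n:])
    c_new = f * c_prev + i * g
    h_new = o * np.tanh(c_new)
    if training:
        dc = np.random.binomial(1, zc, size=n)  # 1 => the unit zones out
        dh = np.random.binomial(1, zh, size=n)
        return dh * h_prev + (1 - dh) * h_new, dc * c_prev + (1 - dc) * c_new
    # test time: use the expected masks, as with dropout
    return zh * h_prev + (1 - zh) * h_new, zc * c_prev + (1 - zc) * c_new
```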
# 4 EXPERIMENTS AND DISCUSSION
We evaluate zoneout's performance on the following tasks: (1) Character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (2) Word-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (3) Character-level language modelling on the Text8 corpus (Mahoney, 2011); (4) Classification of hand-written digits on permuted sequential MNIST (pMNIST) (Le et al., 2015). We also investigate the gradient flow to past hidden states, using pMNIST.
4.1 PENN TREEBANK LANGUAGE MODELLING DATASET
The Penn Treebank language model corpus contains 1 million words. The model is trained to predict the next word (evaluated on perplexity) or character (evaluated on BPC: bits per character) in a sequence. 1
4.1.1 CHARACTER-LEVEL
For the character-level task, we train networks with one layer of 1000 hidden units. We train LSTMs with a learning rate of 0.002 on overlapping sequences of 100 in batches of 32, optimize using Adam, and clip gradients with threshold 1. These settings match those used in Cooijmans et al. (2016). We also train GRUs and tanh-RNNs with the same parameters as above, except sequences are non-overlapping and we use learning rates of 0.001 and 0.0003 for GRUs and tanh-RNNs, respectively. Small values (0.1, 0.05) of zoneout significantly improve generalization performance for all three models. Intriguingly, we find zoneout increases training time for GRUs and tanh-RNNs, but decreases training time for LSTMs.
We focus our investigation on LSTM units, where the dynamics of zoning out states, cells, or both provide interesting insight into zoneout's behaviour. Figure 3 shows our exploration of zoneout in LSTMs, for various zoneout probabilities of cells and/or hiddens. Zoneout on cells with probability 0.5 or zoneout on states with probability 0.05 both outperform the best-performing recurrent dropout (p = 0.25). Combining zc = 0.5 and zh = 0.05 leads to our best-performing model, which achieves 1.27 BPC, competitive with the recent state of the art set by (Ha et al., 2016). We compare zoneout to recurrent dropout (for p ∈ {0.05, 0.2, 0.25, 0.5, 0.7}), weight noise (σ = 0.075), norm stabilizer (β = 50) (Krueger & Memisevic, 2015), and explore stochastic depth (Huang et al., 2016) in a recurrent setting (analogous to zoning out an entire timestep). We also tried a shared-mask variant of zoneout as used in pMNIST experiments, where the same mask is used for both cells and hiddens. Neither stochastic depth nor shared-mask zoneout performed as well as separate masks, sampled per unit. Figure 3 shows the best performance achieved with each regularizer, as well as an unregularized LSTM baseline. Results are reported in Table 1, and learning curves are shown in Figure 4.
Low zoneout probabilities (0.05-0.25) also improve over baseline in GRUs and tanh-RNNs, reducing BPC from 1.53 to 1.41 for the GRU and 1.67 to 1.52 for the tanh-RNN. Similarly, low zoneout probabilities work best on the hidden states of LSTMs. For memory cells in LSTMs, however, higher probabilities (around 0.5) work well, perhaps because large forget-gate values approximate the effect of cells zoning out. We conjecture that best performance is achieved with zoneout LSTMs because of the stability of having both state and cell. The probability that both will be zoned out is very low, but having one or the other zoned out carries information from the previous timestep forward, while having the other react "normally" to new information.
# 4.1.2 WORD-LEVEL
For the word-level task, we replicate the settings of Zaremba et al. (2014)'s best single-model performance. This network has 2 layers of 1500 units, with weights initialized uniformly [-0.04, +0.04]. The model is trained for 14 epochs with learning rate 1, after which the learning rate is reduced by a factor of 1.15 after each epoch. Gradient norms are clipped at 10.
With no dropout on the non-recurrent connections (i.e. zoneout as the only regularization), we do not achieve competitive results. We did not perform any search over models, and conjecture that the large model size requires regularization of the feed-forward connections. However, adding zoneout (zc = 0.25 and zh = 0.025) on the recurrent connections to the model optimized for dropout on the non-recurrent connections, we are able to improve test perplexity from 78.4 to 77.4. We report the best performance achieved with a given technique in Table 1.
1 These metrics are deterministic functions of negative log-likelihood (NLL). Specifically, perplexity is exponentiated NLL, and BPC (entropy) is NLL divided by the natural logarithm of 2.
Figure 3: Validation BPC (bits per character) on Character-level Penn Treebank, for different probabilities of zoneout on cells zc and hidden states zh (left), and comparison of an unregularized LSTM, zoneout zc = 0.5, zh = 0.05, stochastic depth zoneout z = 0.05, recurrent dropout p = 0.25, norm stabilizer β = 50, and weight noise σ = 0.075 (right).
Figure 4: Training and validation bits-per-character (BPC) comparing LSTM regularization methods on character-level Penn Treebank (left) and Text8 (right).
4.2 TEXT8
Enwik8 is a corpus made from the first 10^9 bytes of Wikipedia, dumped on Mar. 3, 2006. Text8 is a "clean text" version of this corpus, with HTML tags removed, numbers spelled out, symbols converted to spaces, and all text lower-cased. Both datasets were created and are hosted by Mahoney (2011).
We use a single-layer network of 2000 units, initialized orthogonally, with batch size 128, learning rate 0.001, and sequence length 180. We optimize with Adam (Kingma & Ba, 2014), clip gradients to a maximum norm of 1 (Pascanu et al., 2012), and use early stopping, again matching the settings of Cooijmans et al. (2016). Results are reported in Table 1, and Figure 4 shows training and validation learning curves for zoneout (zc = 0.5, zh = 0.05) compared to an unregularized LSTM and to recurrent dropout.
4.3 PERMUTED SEQUENTIAL MNIST
In sequential MNIST, pixels of an image representing a number [0-9] are presented one at a time, left to right, top to bottom. The task is to classify the number shown in the image. In pMNIST, the pixels are presented in a (fixed) random order.
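A minimal numpy sketch of constructing pMNIST inputs; the seed and array shapes are illustrative assumptions, the key point being that a single permutation is sampled once and shared across all splits:

```python
import numpy as np

rng = np.random.RandomState(0)      # fixed seed so the permutation is reusable
perm = rng.permutation(28 * 28)     # one permutation shared by train/valid/test

def to_pmnist(images):
    """images: (N, 28, 28) array -> (N, 784) pixel sequences in permuted order."""
    flat = images.reshape(len(images), -1)   # raster scan: left-right, top-bottom
    return flat[:, perm]                     # present pixels in fixed random order
```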
We compare recurrent dropout and zoneout to an unregularized LSTM baseline. All models have a single layer of 100 units, and are trained for 150 epochs using RMSProp (Tieleman & Hinton, 2012) with a decay rate of 0.5 for the moving average of gradient norms. The learning rate is set to 0.001 and the gradients are clipped to a maximum norm of 1 (Pascanu et al., 2012).
As shown in Figure 5 and Table 2, zoneout gives a signiï¬cant performance boost compared to the LSTM baseline and outperforms recurrent dropout (Semeniuta et al., 2016), although recurrent batch normalization (Cooijmans et al., 2016) outperforms all three. However, by adding zoneout to the recurrent batch normalized LSTM, we achieve state of the art performance. For this setting, the zoneout mask is shared between cells and states, and the recurrent dropout probability and zoneout probabilities are both set to 0.15.
Table 1: Validation and test results of different models on the three language modelling tasks. Results are reported for the best-performing settings. Performance on Char-PTB and Text8 is measured in bits-per-character (BPC); Word-PTB is measured in perplexity. For Char-PTB and Text8 all models are 1-layer unless otherwise noted; for Word-PTB all models are 2-layer. Results above the line are from our own implementation and experiments. Models below the line are: NR-dropout (non-recurrent dropout), V-Dropout (variational dropout), RBN (recurrent batchnorm), H-LSTM+LN (HyperLSTM + LayerNorm), 3-HM-LSTM+LN (3-layer Hierarchical Multiscale LSTM + LayerNorm).
| Model | Char-PTB Valid | Char-PTB Test | Word-PTB Valid | Word-PTB Test | Text8 Valid | Text8 Test |
|---|---|---|---|---|---|---|
| Unregularized LSTM | 1.466 | 1.356 | 120.7 | 114.5 | 1.396 | 1.408 |
| Weight noise | 1.507 | 1.344 | — | — | 1.356 | 1.367 |
| Norm stabilizer | 1.459 | 1.352 | — | — | 1.382 | 1.398 |
| Stochastic depth | 1.432 | 1.343 | — | — | 1.337 | 1.343 |
| Recurrent dropout | 1.396 | 1.286 | 91.6 | 87.0 | 1.386 | 1.401 |
| Zoneout | 1.362 | 1.252 | 81.4 | 77.4 | 1.331 | 1.336 |
| NR-dropout (Zaremba et al., 2014) | — | — | 82.2 | 78.4 | — | — |
| V-dropout (Gal, 2015) | — | — | — | 73.4 | — | — |
| RBN (Cooijmans et al., 2016) | — | 1.32 | — | — | — | 1.36 |
| H-LSTM + LN (Ha et al., 2016) | 1.281 | 1.250 | — | — | — | — |
| 3-HM-LSTM + LN (Chung et al., 2016) | — | 1.24 | — | — | — | 1.29 |
Table 2: Error rates on the pMNIST digit classiï¬cation task. Zoneout outperforms recurrent dropout, and sets state of the art when combined with recurrent batch normalization.
| Model | Valid | Test |
|---|---|---|
| Unregularized LSTM | 0.092 | 0.102 |
| Recurrent dropout p = 0.5 | 0.083 | 0.075 |
| Zoneout zc = zh = 0.15 | 0.063 | 0.069 |
| Recurrent batchnorm | — | 0.046 |
| Recurrent batchnorm & Zoneout zc = zh = 0.15 | 0.045 | 0.041 |
Figure 5: Training and validation error rates for an unregularized LSTM, recurrent dropout, and zoneout on the task of permuted sequential MNIST digit classiï¬cation.
4.4 GRADIENT FLOW
We investigate the hypothesis that the identity connections introduced by zoneout facilitate gradient flow to earlier timesteps. Vanishing gradients are a perennial issue in RNNs. As effective as many techniques are for mitigating vanishing gradients (notably the LSTM architecture of Hochreiter & Schmidhuber (1997)), we can always imagine a longer sequence to train on, or a longer-term dependence we want to capture.
We compare gradient flow in an unregularized LSTM to zoning out (stochastic identity-mapping) and dropping out (stochastic zero-mapping) the recurrent connections after one epoch of training on pMNIST. We compute the average gradient norms ‖∂L/∂c_t‖ of the loss L with respect to cell activations c_t at each timestep t, and, for each method, normalize the average gradient norms by the sum of average gradient norms over all timesteps.
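A hedged PyTorch-style sketch of this measurement, assuming the per-timestep cell states were kept as graph tensors during the forward pass (the helper name is ours):

```python
import torch

def normalized_gradient_norms(loss, cells):
    """cells: list of per-timestep cell-state tensors from one forward pass."""
    grads = torch.autograd.grad(loss, cells, retain_graph=True)
    norms = torch.stack([g.norm() for g in grads])   # ||dL/dc_t|| per timestep
    return norms / norms.sum()                       # normalize across timesteps
```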
Figure 6 shows that zoneout propagates gradient information to early timesteps much more effectively than dropout on the recurrent connections, and even more effectively than an unregularized LSTM. The same effect was observed for hidden states ht.
Figure 6: Normalized Σ_t ‖∂L/∂c_t‖ of loss L with respect to cell activations c_t at each timestep t, for zoneout (zc = 0.5), dropout (zc = 0.5), and an unregularized LSTM on one epoch of pMNIST.
# 5 CONCLUSION
We have introduced zoneout, a novel and simple regularizer for RNNs, which stochastically preserves hidden units' activations. Zoneout improves performance across tasks, outperforming many alternative regularizers to achieve results competitive with state of the art on the Penn Treebank and Text8 datasets, and state of the art results on pMNIST. While searching over zoneout probabilities allows us to tune zoneout to each task, low zoneout probabilities (0.05 - 0.2) on states reliably improve performance of existing models.
We perform no hyperparameter search to achieve these results, simply using settings from the previous state of the art. Results on pMNIST and word-level Penn Treebank suggest that Zoneout works well in combination with other regularizers, such as recurrent batch normalization, and dropout on feedforward/embedding layers. We conjecture that the beneï¬ts of zoneout arise from two main factors: (1) Introducing stochasticity makes the network more robust to changes in the hidden state; (2) The identity connections improve the ï¬ow of information forward and backward through the network.
ACKNOWLEDGMENTS
We are grateful to Hugo Larochelle, Jan Chorowski, and students at MILA, especially Çağlar Gülçehre, Marcin Moczulski, Chiheb Trabelsi, and Christopher Beckham, for helpful feedback and discussions. We thank the developers of Theano (Theano Development Team, 2016), Fuel, and Blocks (van Merriënboer et al., 2015). We acknowledge the computing resources provided by ComputeCanada and CalculQuebec. We also thank IBM and Samsung for their support. We would also like to acknowledge the work of Pranav Shyam on learning RNN hierarchies. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air
Force Research Laboratory (AFRL). The views, opinions and/or ï¬ndings expressed are those of the authors and should not be interpreted as representing the ofï¬cial views or policies of the Department of Defense or the U.S. Government.
# REFERENCES
Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. URL http://arxiv.org/abs/1607.06450.
Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pp. 3365â3373, 2014.
J. Bayer, C. Osendorfer, D. Korhammer, N. Chen, S. Urban, and P. van der Smagt. On Fast Dropout and its Applicability to Recurrent Networks. ArXiv e-prints, November 2013.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difï¬cult. Neural Networks, IEEE Transactions on, 5(2):157â166, 1994.
Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432, 2013. URL http://arxiv.org/abs/1308.3432.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016. URL http://arxiv.org/abs/1609.01704.
Tim Cooijmans, Nicolas Ballas, César Laurent, Caglar Gulcehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3123â3131, 2015.
Yarin Gal. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. ArXiv e-prints, December 2015.
Felix A. Gers, Jürgen Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451â2471, 2000.
David Ha, Andrew M. Dai, and Quoc V. Le. Hypernetworks. CoRR, abs/1609.09106, 2016. URL http://arxiv.org/abs/1609.09106.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems. 1996.
Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, Institut für Informatik, Technische Universität München, 1991.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735â1780, 1997.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork rnn. arXiv preprint arXiv:1402.3511, 2014.
David Krueger and Roland Memisevic. Regularizing rnns by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015.
Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectiï¬ed linear units. arXiv preprint arXiv:1504.00941, 2015.
Matt Mahoney. About the test data, 2011. URL http://mattmahoney.net/dc/textdata.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313â330, 1993.
Taesup Moon, Heeyoul Choi, Hoshik Lee, and Inchul Song. Rnndrop: A novel dropout for rnns in asr. Automatic Speech Recognition and Understanding (ASRU), 2015.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2012. URL http://arxiv.org/abs/1211.5063.
V. Pham, T. Bluche, C. Kermorvant, and J. Louradour. Dropout improves Recurrent Neural Networks for Handwriting Recognition. ArXiv e-prints, November 2013.
Kamil Rocki, Tomasz Kornuta, and Tegan Maharaj. Surprisal-driven zoneout. CoRR, abs/1610.07675, 2016. URL http://arxiv.org/abs/1610.07675.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
S. Singh, D. Hoiem, and D. Forsyth. Swapout: Learning an ensemble of deep architectures. ArXiv e-prints, May 2016.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overï¬tting. The Journal of Machine Learning Research, 15(1):1929â1958, 2014.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015.
Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning, pp. 118â126, 2013.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
6 APPENDIX
6.1 STATIC IDENTITY CONNECTIONS EXPERIMENT
This experiment was suggested by AnonReviewer2 during the ICLR review process with the goal of disentangling the effects zoneout has (1) through noise injection in the training process and (2) through identity connections. Based on these results, we observe that noise injection is essential for obtaining the regularization beneï¬ts of zoneout.
In this experiment, one zoneout mask is sampled at the beginning of training, and used for all examples. This means the identity connections introduced are static across training examples (but still different for each timestep). Using static identity connections resulted in slightly lower training (but not validation) error than zoneout, but worse performance than an unregularized LSTM on both train and validation sets, as shown in Figure 7.
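A minimal numpy sketch of the two mask regimes; the shapes and probability are illustrative assumptions:

```python
import numpy as np

rng = np.random.RandomState(0)
T, H, p = 784, 100, 0.05             # timesteps, hidden units, zoneout prob.

# Static variant: one mask per timestep, sampled once before training and
# reused for every example (identity connections without noise injection).
static_masks = rng.binomial(1, p, size=(T, H))

def zoneout_mask(t, batch_size, static=False):
    if static:
        return np.broadcast_to(static_masks[t], (batch_size, H))
    # Ordinary zoneout: a fresh mask per example at every timestep.
    return rng.binomial(1, p, size=(batch_size, H))
```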
Figure 7: Training and validation curves for an LSTM with static identity connections compared to zoneout (with zc = 0.5 and zh = 0.05) and to a vanilla LSTM, showing that static identity connections fail to capture the benefits of zoneout.
Published as a conference paper at ICLR 2017
# ADVERSARIAL FEATURE LEARNING
# Jeff Donahue jdonahue@cs.berkeley.edu Computer Science Division University of California, Berkeley
# Philipp Krähenbühl philkr@utexas.edu Department of Computer Science University of Texas, Austin
# Trevor Darrell trevor@eecs.berkeley.edu Computer Science Division University of California, Berkeley
# ABSTRACT
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping – projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
# INTRODUCTION
Deep convolutional networks (convnets) have become a staple of the modern computer vision pipeline. After training these models on a massive database of image-label pairs like ImageNet (Russakovsky et al., 2015), the network easily adapts to a variety of similar visual tasks, achieving impressive results on image classiï¬cation (Donahue et al., 2014; Zeiler & Fergus, 2014; Razavian et al., 2014) or localization (Girshick et al., 2014; Long et al., 2015) tasks. In other perceptual domains such as natural language processing or speech recognition, deep networks have proven highly effective as well (Bahdanau et al., 2015; Sutskever et al., 2014; Vinyals et al., 2015; Graves et al., 2013). However, all of these recent results rely on a supervisory signal from large-scale databases of hand-labeled data, ignoring much of the useful information present in the structure of the data itself.
Meanwhile, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have emerged as a powerful framework for learning generative models of arbitrarily complex data distributions. The GAN framework learns a generator mapping samples from an arbitrary latent distribution to data, as well as an adversarial discriminator which tries to distinguish between real and generated samples as accurately as possible. The generator's goal is to "fool" the discriminator by producing samples which are as close to real data as possible. When trained on databases of natural images, GANs produce impressive results (Radford et al., 2016; Denton et al., 2015).
Interpolations in the latent space of the generator produce smooth and plausible semantic variations, and certain directions in this space correspond to particular semantic attributes along which the data distribution varies. For example, Radford et al. (2016) showed that a GAN trained on a database of human faces learns to associate particular latent directions with gender and the presence of eyeglasses.
A natural question arises from this ostensible "semantic juice" flowing through the weights of generators learned using the GAN framework: can GANs be used for unsupervised learning of rich feature representations for arbitrary data distributions? An obvious issue with doing so is that the
Figure 1: The structure of Bidirectional Generative Adversarial Networks (BiGAN).
generator maps latent samples to generated data, but the framework does not include an inverse mapping from data to latent representation.
Hence, we propose a novel unsupervised feature learning framework, Bidirectional Generative Adversarial Networks (BiGAN). The overall model is depicted in Figure 1. In short, in addition to the generator G from the standard GAN framework (Goodfellow et al., 2014), BiGAN includes an encoder E which maps data x to latent representations z. The BiGAN discriminator D discriminates not only in data space (x versus G(z)), but jointly in data and latent space (tuples (x, E(x)) versus (G(z), z)), where the latent component is either an encoder output E(x) or a generator input z.
It may not be obvious from this description that the BiGAN encoder E should learn to invert the generator G. The two modules cannot directly "communicate" with one another: the encoder never "sees" generator outputs (E(G(z)) is not computed), and vice versa. Yet, in Section 3, we will both argue intuitively and formally prove that the encoder and generator must learn to invert one another in order to fool the BiGAN discriminator.
Because the BiGAN encoder learns to predict features z given data x, and prior work on GANs has demonstrated that these features capture semantic attributes of the data, we hypothesize that a trained BiGAN encoder may serve as a useful feature representation for related semantic tasks, in the same way that fully supervised visual models trained to predict semantic "labels" given images serve as powerful feature representations for related visual tasks. In this context, a latent representation z may be thought of as a "label" for x, but one which came for "free", without the need for supervision.
An alternative approach to learning the inverse mapping from data to latent representation is to directly model p(z|G(z)), predicting generator input z given generated data G(z). We'll refer to this alternative as a latent regressor, later arguing (Section 4.1) that the BiGAN encoder may be preferable in a feature learning context, as well as comparing the approaches empirically.
BiGANs are a robust and highly generic approach to unsupervised feature learning, making no assumptions about the structure or type of data to which they are applied, as our theoretical results will demonstrate. Our empirical studies will show that despite their generality, BiGANs are competitive with contemporary approaches to self-supervised and weakly supervised feature learning designed speciï¬cally for a notoriously complex data distribution â natural images.
Dumoulin et al. (2016) independently proposed an identical model in their concurrent work, exploring the case of a stochastic encoder E and the ability of such models to learn in a semi-supervised setting.
# 2 PRELIMINARIES
Let pX(x) be the distribution of our data for x ∈ ΩX (e.g. natural images). The goal of generative modeling is to capture this data distribution using a probabilistic model. Unfortunately, exact modeling of this probability density function is computationally intractable (Hinton et al., 2006; Salakhutdinov & Hinton, 2009) for all but the most trivial models. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) instead model the data distribution as a transformation of a fixed latent distribution pZ(z) for z ∈ ΩZ. This transformation, called a generator, is expressed as a deterministic feedforward network G : ΩZ → ΩX with pG(x|z) = δ(x − G(z)) and pG(x) = Ez∼pZ [pG(x|z)]. The goal is to train a generator such that pG(x) ≈ pX(x).
The GAN framework trains a generator such that no discriminative model D : ΩX → [0, 1] can distinguish samples of the data distribution from samples of the generative distribution. Both generator and discriminator are learned using the adversarial (minimax) objective minG maxD V(D, G), where

V(D, G) = Ex∼pX [log D(x)] + Ez∼pZ [log(1 − D(G(z)))]   (1)

Goodfellow et al. (2014) showed that for an ideal discriminator the objective C(G) := maxD V(D, G) is equivalent to the Jensen-Shannon divergence between the two distributions pG and pX.
The adversarial objective (1) does not directly lend itself to an efficient optimization, as each step in the generator G requires a full discriminator D to be learned. Furthermore, a perfect discriminator no longer provides any gradient information to the generator, as the gradient of any global or local maximum of V(D, G) is 0. To provide a strong gradient signal nonetheless, Goodfellow et al. (2014) slightly alter the objective between generator and discriminator updates, while keeping the same fixed point characteristics. They also propose to optimize (1) using an alternating optimization switching between updates to the generator and discriminator. While this optimization is not guaranteed to converge, empirically it works well if the discriminator and generator are well balanced.
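A minimal PyTorch-style sketch of the two losses, including the commonly implemented altered generator objective (ascending log D(G(z)) rather than descending log(1 − D(G(z)))); D is assumed to output probabilities in (0, 1), and the eps guard is our own detail:

```python
import torch

def gan_losses(D, G, x_real, z, eps=1e-8):
    # Discriminator: ascend log D(x) + log(1 - D(G(z))); detach G's output
    # so this loss does not update the generator.
    loss_d = -(torch.log(D(x_real) + eps)
               + torch.log(1 - D(G(z).detach()) + eps)).mean()
    # Generator: ascend log D(G(z)); same fixed points as the minimax
    # objective, but the gradient does not vanish when D is confident.
    loss_g = -torch.log(D(G(z)) + eps).mean()
    return loss_d, loss_g
```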
Despite the empirical strength of GANs as generative models of arbitrary data distributions, it is not clear how they can be applied as an unsupervised feature representation. One possibility for learning such representations is to learn an inverse mapping regressing from generated data G(z) back to the latent input z. However, unless the generator perfectly models the data distribution pX, a nearly impossible objective for a complex data distribution such as that of high-resolution natural images, this idea may prove insufï¬cient.
# 3 BIDIRECTIONAL GENERATIVE ADVERSARIAL NETWORKS
In Bidirectional Generative Adversarial Networks (BiGANs) we not only train a generator, but additionally train an encoder E : ΩX → ΩZ. The encoder induces a distribution pE(z|x) = δ(z − E(x)) mapping data points x into the latent feature space of the generative model. The discriminator is also modified to take input from the latent space, predicting PD(Y |x, z), where Y = 1 if x is real (sampled from the real data distribution pX), and Y = 0 if x is generated (the output of G(z), z ∼ pZ).
The BiGAN training objective is deï¬ned as a minimax objective
minG,E maxD V(D, E, G)   (2)
where
V(D, E, G) := Ex∼pX [ Ez∼pE(·|x) [log D(x, z)] ] + Ez∼pZ [ Ex∼pG(·|z) [log(1 − D(x, z))] ]   (3)

where, for deterministic E and G, the first term reduces to log D(x, E(x)) and the second to log(1 − D(G(z), z)).
We optimize this minimax objective using the same alternating gradient-based optimization as Goodfellow et al. (2014). See Section 3.4 for details.
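For deterministic E and G, a mini-batch estimate of objective (3) can be sketched as follows; this is a hedged PyTorch-style sketch in which D, G, and E are assumed callables and eps is our numerical guard:

```python
import torch

def bigan_value(D, G, E, x, z, eps=1e-8):
    d_enc = D(x, E(x))    # joint discriminator on an encoder pair (x, E(x))
    d_gen = D(G(z), z)    # joint discriminator on a generator pair (G(z), z)
    return (torch.log(d_enc + eps) + torch.log(1 - d_gen + eps)).mean()
```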
BiGANs share many of the theoretical properties of GANs (Goodfellow et al., 2014), while additionally guaranteeing that at the global optimum, G and E are each other's inverse. BiGANs are also closely related to autoencoders with an ℓ0 loss function. In the following sections we highlight some of the appealing theoretical properties of BiGANs.
Definitions Let pGZ(x, z) := pG(x|z) pZ(z) and pEX(x, z) := pE(z|x) pX(x) be the joint distributions modeled by the generator and encoder respectively. Ω := ΩX × ΩZ is the joint latent and
data space. For a region R ⊆ Ω,

PEX(R) := ∫Ω pEX(x, z) 1[(x,z)∈R] d(x, z) = ∫ΩX pX(x) ∫ΩZ pE(z|x) 1[(x,z)∈R] dz dx
PGZ(R) := ∫Ω pGZ(x, z) 1[(x,z)∈R] d(x, z) = ∫ΩZ pZ(z) ∫ΩX pG(x|z) 1[(x,z)∈R] dx dz

are probability measures over that region. We also define

PX(RX) := ∫ΩX pX(x) 1[x∈RX] dx    and    PZ(RZ) := ∫ΩZ pZ(z) 1[z∈RZ] dz

as measures over regions RX ⊆ ΩX and RZ ⊆ ΩZ. We refer to the sets of features and data samples in the supports of PX and PZ as Ω̂X := supp(PX) and Ω̂Z := supp(PZ) respectively. DKL(P || Q) and DJS(P || Q) respectively denote the Kullback-Leibler (KL) and Jensen-Shannon divergences between probability measures P and Q. By definition,

DKL(P || Q) := Ex∼P [log fPQ(x)]
DJS(P || Q) := ½ DKL(P || (P + Q)/2) + ½ DKL(Q || (P + Q)/2)

where fPQ := dP/dQ is the Radon-Nikodym (RN) derivative of measure P with respect to measure Q, with the defining property that P(R) = ∫R fPQ dQ. The RN derivative fPQ : Ω → R≥0 is defined for any measures P and Q on space Ω such that P is absolutely continuous with respect to Q: i.e., for any R ⊆ Ω, P(R) > 0 ⟹ Q(R) > 0.
3.1 OPTIMAL DISCRIMINATOR, GENERATOR, & ENCODER
We start by characterizing the optimal discriminator for any generator and encoder, following Goodfellow et al. (2014). This optimal discriminator then allows us to reformulate objective (3), and show that it reduces to the Jensen-Shannon divergence between the joint distributions PEX and PGZ.
Proposition 1 For any E and G, the optimal discriminator D∗EG := arg maxD V(D, E, G) is the Radon-Nikodym derivative fEG := dPEX / d(PEX + PGZ) : Ω → [0, 1] of measure PEX with respect to measure PEX + PGZ.
Proof. Given in Appendix A.1.
This optimal discriminator now allows us to characterize the optimal generator and encoder.
Proposition 2 The encoder and generator's objective for an optimal discriminator C(E, G) := maxD V(D, E, G) = V(D∗EG, E, G) can be rewritten in terms of the Jensen-Shannon divergence between measures PEX and PGZ as C(E, G) = 2 DJS(PEX || PGZ) − log 4.
Proof. Given in Appendix A.2.
Theorem 1 The global minimum of C(E, G) is achieved if and only if PEX = PGZ. At that point, C(E, G) = − log 4 and D∗EG = 1/2.
Proof. From Proposition 2, we have that C(E, G) = 2 DJS(PEX || PGZ) − log 4. The Jensen-Shannon divergence DJS(P || Q) ≥ 0 for any P and Q, and DJS(P || Q) = 0 if and only if P = Q. Therefore, the global minimum of C(E, G) occurs if and only if PEX = PGZ, and at this point the value is C(E, G) = − log 4. Finally, PEX = PGZ implies that the optimal discriminator is chance: D∗EG = fEG = 1/2. □
The optimal discriminator, encoder, and generator of BiGAN are similar to the optimal discriminator and generator of the GAN framework (Goodfellow et al., 2014). However, an important difference is that BiGAN optimizes a Jensen-Shannon divergence between a joint distribution over both data X and latent features Z. This joint divergence allows us to further characterize properties of G and E, as shown below.
3.2 OPTIMAL GENERATOR & ENCODER ARE INVERSES
We first present an intuitive argument that, in order to "fool" a perfect discriminator, a deterministic BiGAN encoder and generator must invert each other. (Later we will formally state and prove this
property.) Consider a BiGAN discriminator input pair (x, z). Due to the sampling procedure, (x, z) must satisfy at least one of the following two properties:

(a) x ∈ Ω̂X ∧ E(x) = z
(b) z ∈ Ω̂Z ∧ G(z) = x

If only one of these properties is satisfied, a perfect discriminator can infer the source of (x, z) with certainty: if only (a) is satisfied, (x, z) must be an encoder pair (x, E(x)) and D∗EG(x, z) = 1; if only (b) is satisfied, (x, z) must be a generator pair (G(z), z) and D∗EG(x, z) = 0. Therefore, in order to fool a perfect discriminator at (x, z) (so that 0 < D∗EG(x, z) < 1), E and G must satisfy both (a) and (b). In this case, we can substitute the equality E(x) = z required by (a) into the equality G(z) = x required by (b), and vice versa, giving the inversion properties x = G(E(x)) and z = E(G(z)).
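As a toy illustration of this argument, the numpy sketch below uses an invertible linear generator and defines pX as the pushforward of pZ under G, an illustrative assumption under which a perfect inverse exists; encoder pairs and generator pairs then share the same joint law, so their empirical moments agree and a discriminator is left at chance:

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(2, 2)            # generator G(z) = A z (invertible w.p. 1)
E = np.linalg.inv(A)           # encoder E(x) = A^{-1} x = G^{-1}(x)

z1 = rng.randn(100000, 2)      # latents for generator pairs
z2 = rng.randn(100000, 2)
x = z2 @ A.T                   # data: x ~ p_X, the pushforward of p_Z under G

gen_pairs = np.hstack([z1 @ A.T, z1])   # samples of (G(z), z)
enc_pairs = np.hstack([x, x @ E.T])     # samples of (x, E(x))

# Both joint samples follow the same Gaussian law, so their moments match
# up to sampling noise.
print(np.abs(enc_pairs.mean(0) - gen_pairs.mean(0)).max())       # small
print(np.abs(np.cov(enc_pairs.T) - np.cov(gen_pairs.T)).max())   # small
```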
Formally, we show in Theorem 2 that the optimal generator and encoder invert one another almost everywhere on the supports Ω̂X and Ω̂Z of PX and PZ.

Theorem 2 If E and G are an optimal encoder and generator, then E = G⁻¹ almost everywhere; that is, G(E(x)) = x for PX-almost every x ∈ ΩX, and E(G(z)) = z for PZ-almost every z ∈ ΩZ.
Proof. Given in Appendix A.4.
While Theorem 2 characterizes the encoder and decoder at their optimum, due to the non-convex nature of the optimization, this optimum might never be reached. Experimentally, Section 4 shows that on standard datasets, the two are approximate inverses; however, they are rarely exact inverses. It is thus also interesting to show what objective BiGAN optimizes in terms of E and G. Next we show that BiGANs are closely related to autoencoders with an ℓ0 loss function.
3.3 RELATIONSHIP TO AUTOENCODERS
As argued in Section 1, a model trained to predict features z given data x should learn useful semantic representations. Here we show that the BiGAN objective forces the encoder E to do exactly this: in order to fool the discriminator at a particular z, the encoder must invert the generator at that z, such that E(G(z)) = z.
Theorem 3 The encoder and generator objective given an optimal discriminator C(E, G) := maxD V(D, E, G) can be rewritten as an ℓ0 autoencoder loss function

C(E, G) = Ex∼pX [ 1[E(x)∈Ω̂Z ∧ G(E(x))=x] log fEG(x, E(x)) ] + Ez∼pZ [ 1[G(z)∈Ω̂X ∧ E(G(z))=z] log(1 − fEG(G(z), z)) ]

with log fEG ∈ (−∞, 0) and log(1 − fEG) ∈ (−∞, 0) PEX-almost and PGZ-almost everywhere.
Proof. Given in Appendix A.5.
Here the indicator function 1[G(E(x))=x] in the first term is equivalent to an autoencoder with ℓ0 loss, while the indicator 1[E(G(z))=z] in the second term shows that the BiGAN encoder must invert the generator, the desired property for feature learning. The objective further encourages the functions E(x) and G(z) to produce valid outputs in the support of PZ and PX respectively. Unlike regular autoencoders, the ℓ0 loss function does not make any assumptions about the structure or distribution of the data itself; in fact, all the structural properties of BiGAN are learned as part of the discriminator.
3.4 LEARNING
In practice, as in the GAN framework (Goodfellow et al., 2014), each BiGAN module D, G, and E is a parametric function (with parameters θD, θG, and θE, respectively). As a whole, BiGAN can be optimized using alternating stochastic gradient steps. In one iteration, the discriminator parameters θD are updated by taking one or more steps in the positive gradient direction ∇θD V(D, E, G), then the encoder parameters θE and generator parameters θG are together updated by taking a step in the negative gradient direction −∇θE,θG V(D, E, G). In both cases, the expectation terms of
V(D, E, G) are estimated using mini-batches of n samples {x(i) ∼ pX} (i = 1, ..., n) and {z(i) ∼ pZ} (i = 1, ..., n), drawn independently for each update step.
Goodfellow et al. (2014) found that an objective in which the real and generated labels Y are swapped provides stronger gradient signal to G. We similarly observed in BiGAN training that an "inverse" objective provides stronger gradient signal to G and E. For efficiency, we also update all modules D, G, and E simultaneously at each iteration, rather than alternating between D updates and G, E updates. See Appendix B for details.
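A hedged PyTorch-style sketch of one such update; module and optimizer construction are assumed elsewhere, the eps guard is ours, and the generator/encoder pass is recomputed after the discriminator step, a simplification of the simultaneous update described above:

```python
import torch

def bigan_step(D, G, E, opt_d, opt_ge, x, z, eps=1e-8):
    # Discriminator step: encoder pairs labeled real, generator pairs fake;
    # detach E and G outputs so only D's parameters receive this gradient.
    d_enc, d_gen = D(x, E(x).detach()), D(G(z).detach(), z)
    loss_d = -(torch.log(d_enc + eps) + torch.log(1 - d_gen + eps)).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator/encoder step with swapped labels (the "inverse" objective).
    # Gradients accumulated into D here are cleared by the next zero_grad.
    d_enc, d_gen = D(x, E(x)), D(G(z), z)
    loss_ge = -(torch.log(d_gen + eps) + torch.log(1 - d_enc + eps)).mean()
    opt_ge.zero_grad(); loss_ge.backward(); opt_ge.step()
    return loss_d.item(), loss_ge.item()
```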
3.5 GENERALIZED BIGAN
It is often useful to parametrize the output of the generator G and encoder E in a different, usually smaller, space Ω′X and Ω′Z rather than the original ΩX and ΩZ. For example, for visual feature learning, the images input to the encoder should be of similar resolution to images used in the evaluation. On the other hand, generating high resolution images remains difficult for current generative models. In this situation, the encoder may take higher resolution input while the generator output and discriminator input remain low resolution. We generalize the BiGAN objective V(D, G, E) (3) with functions gX : ΩX → Ω′X and gZ : ΩZ → Ω′Z, and encoder E : ΩX → Ω′Z, generator G : ΩZ → Ω′X, and discriminator D : Ω′X × Ω′Z → [0, 1]:

Ex∼pX [ Ez′∼pE(·|x) [log D(gX(x), z′)] ] + Ez∼pZ [ Ex′∼pG(·|z) [log(1 − D(x′, gZ(z)))] ]

where, for deterministic E and G, the two terms reduce to log D(gX(x), E(x)) and log(1 − D(G(z), gZ(z))) respectively.

An identity gX(x) = x and gZ(z) = z (and Ω′X = ΩX, Ω′Z = ΩZ) yields the original objective. For visual feature learning with higher resolution encoder inputs, gX is an image resizing function that downsamples a high resolution image x ∈ ΩX to a lower resolution image x′ ∈ Ω′X, as output by the generator. (gZ is identity.)

In this case, the encoder and generator respectively induce probability measures PEX′ and PGZ′ over regions R ⊆ Ω′ of the joint space Ω′ := Ω′X × Ω′Z, with PEX′(R) := ∫ΩX ∫Ω′Z ∫Ω′X pEX(x, z′) 1[(x′,z′)∈R] δ(gX(x) − x′) dx′ dz′ dx = ∫ΩX pX(x) 1[(gX(x),E(x))∈R] dx, and PGZ′ defined analogously. For optimal E and G, we can show PEX′ = PGZ′: a generalization of Theorem 1. When E and G are deterministic and optimal, Theorem 2 – that E and G invert one another – can also be generalized: ∃z∈Ω̂Z {E(x) = gZ(z) ∧ G(z) = gX(x)} for PX-almost every x ∈ ΩX, and ∃x∈Ω̂X {E(x) = gZ(z) ∧ G(z) = gX(x)} for PZ-almost every z ∈ ΩZ.
# 4 EVALUATION
We evaluate the feature learning capabilities of BiGANs by ï¬rst training them unsupervised as described in Section 3.4, then transferring the encoderâs learned feature representations for use in auxiliary supervised learning tasks. To demonstrate that BiGANs are able to learn meaningful feature representations both on arbitrary data vectors, where the model is agnostic to any underlying structure, as well as very high-dimensional and complex distributions, we evaluate on both permutation-invariant MNIST (LeCun et al., 1998) and on the high-resolution natural images of ImageNet (Russakovsky et al., 2015).
In all experiments, each module D, G, and E is a parametric deep (multi-layer) network. The BiGAN discriminator D(x, z) takes data x as its initial input, and at each linear layer thereafter, the latent representation z is transformed using a learned linear transformation to the hidden layer dimension and added to the non-linearity input.
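A hedged PyTorch sketch of this joint-input scheme; the MLP depth and layer sizes are illustrative assumptions, not the experimental architecture:

```python
import torch
import torch.nn as nn

class JointDiscriminator(nn.Module):
    def __init__(self, x_dim=784, z_dim=50, hidden=1024):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(x_dim, hidden), nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)
        # Learned linear transformations of z into each hidden layer.
        self.z1, self.z2 = nn.Linear(z_dim, hidden), nn.Linear(z_dim, hidden)

    def forward(self, x, z):
        h = torch.relu(self.fc1(x) + self.z1(z))  # add z before the nonlinearity
        h = torch.relu(self.fc2(h) + self.z2(z))
        return torch.sigmoid(self.out(h))          # P(Y = 1 | x, z)
```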
4.1 BASELINE METHODS
Besides the BiGAN framework presented above, we considered alternative approaches to learning feature representations using different GAN variants.
Discriminator The discriminator D in a standard GAN takes data samples x ∼ pX as input, making its learned intermediate representations natural candidates as feature representations for related tasks.
| BiGAN | D | LR | JLR | AE (ℓ1) | AE (ℓ2) |
|---|---|---|---|---|---|
| 97.39 | 97.30 | 97.44 | 97.13 | 97.58 | 97.63 |

Table 1: One Nearest Neighbors (1NN) classification accuracy (%) on the permutation-invariant MNIST (LeCun et al., 1998) test set in the feature space learned by BiGAN, Latent Regressor (LR), Joint Latent Regressor (JLR), and an autoencoder (AE) using an ℓ1 or ℓ2 distance.
Figure 2: Qualitative results for permutation-invariant MNIST BiGAN training, including generator samples G(z), real data x, and corresponding reconstructions G(E(x)).
This alternative is appealing as it requires no additional machinery, and is the approach used for unsupervised feature learning in Radford et al. (2016). On the other hand, it is not clear that the task of distinguishing between real and generated data requires or benefits from intermediate representations that are useful as semantic feature representations. In fact, if G successfully generates the true data distribution pX(x), D may ignore the input data entirely and predict P(Y = 1) = P(Y = 1|x) = 1/2 unconditionally, not learning any meaningful intermediate representations.
Latent regressor We consider an alternative encoder training by minimizing a reconstruction loss L(z, E(G(z))), after or jointly during a regular GAN training, called latent regressor or joint latent regressor respectively. We use a sigmoid cross entropy loss L as it naturally maps to a uniformly distributed output space. Intuitively, a drawback of this approach is that, unlike the encoder in a BiGAN, the latent regressor encoder E is trained only on generated samples G(z), and never "sees" real data x ∼ pX. While this may not be an issue in the theoretical optimum where pG(x) = pX(x) exactly – i.e., G perfectly generates the data distribution pX – in practice, for highly complex data distributions pX, such as the distribution of natural images, the generator will almost never achieve this perfect result. The fact that the real data x are never input to this type of encoder limits its utility as a feature representation for related tasks, as shown later in this section.
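A minimal sketch of this baseline's loss; rescaling z ∼ U(−1, 1) into [0, 1] to serve as the sigmoid cross-entropy target and treating E's output as logits are our own implementation choices:

```python
import torch
import torch.nn.functional as F

def latent_regressor_loss(E, G, z):
    target = (z + 1) / 2            # map U(-1, 1) samples into [0, 1]
    logits = E(G(z).detach())       # E is trained on generated samples only
    return F.binary_cross_entropy_with_logits(logits, target)
```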
4.2 PERMUTATION-INVARIANT MNIST
We first present results on permutation-invariant MNIST (LeCun et al., 1998). In the permutation-invariant setting, each 28 × 28 digit image must be treated as an unstructured 784D vector (Goodfellow et al., 2013). In our case, this condition is met by designing each module as a multi-layer perceptron (MLP), agnostic to the underlying spatial structure in the data (as opposed to a convnet, for example). See Appendix C.1 for more architectural and training details. We set the latent distribution pZ = [U(−1, 1)]^50 – a 50D continuous uniform distribution.
Table 1 compares the encoding learned by a BiGAN-trained encoder E with the baselines described in Section 4.1, as well as autoencoders trained directly to minimize either ℓ2 or ℓ1 reconstruction error. The same architecture and optimization algorithm is used across all methods. All methods, including BiGAN, perform at roughly the same level. This result is not overly surprising given the relative simplicity of MNIST digits. For example, digits generated by G in a GAN nearly perfectly match the data distribution (qualitatively), making the latent regressor (LR) baseline method a reasonable choice, as argued in Section 4.1. Qualitative results are presented in Figure 2.
4.3 IMAGENET
Next, we present results from training BiGANs on ImageNet LSVRC (Russakovsky et al., 2015), a large-scale database of natural images. GANs trained on ImageNet cannot perfectly reconstruct
Figure 3: The convolutional ï¬lters learned by the three modules (D, G, and E) of a BiGAN (left, top-middle) trained on the ImageNet (Russakovsky et al., 2015) database. We compare with the ï¬lters learned by a discriminator D trained with the same architecture (bottom-middle), as well as the ï¬lters reported by Noroozi & Favaro (2016), and by Krizhevsky et al. (2012) for fully supervised ImageNet training (right).
Figure 4: Qualitative results for ImageNet BiGAN training, including generator samples G(z), real data x, and corresponding reconstructions G(E(x)).
the data, but often capture some interesting aspects. Here, each of D, G, and E is a convnet. In all experiments, the encoder E architecture follows AlexNet (Krizhevsky et al., 2012) through the fifth and last convolution layer (conv5). We also experiment with an AlexNet-based discriminator D as a baseline feature learning approach. We set the latent distribution pZ = [U(−1, 1)]^200 – a 200D continuous uniform distribution. Additionally, we experiment with higher resolution encoder input images – 112 × 112 rather than the 64 × 64 used elsewhere – using the generalization described in Section 3.5. See Appendix C.2 for more architectural and training details.
Qualitative results The convolutional ï¬lters learned by each of the three modules are shown in Figure 3. We see that the ï¬lters learned by the encoder E have clear Gabor-like structure, similar to those originally reported for the fully supervised AlexNet model (Krizhevsky et al., 2012). The ï¬lters also have similar âgroupingâ structure where one half (the bottom half, in this case) is more color sensitive, and the other half is more edge sensitive. (This separation of the ï¬lters occurs due to the AlexNet architecture maintaining two separate ï¬lter paths for computational efï¬ciency.)
In Figure 4 we present sample generations G(z), as well as real data samples x and their BiGAN reconstructions G(E(x)). The reconstructions, while certainly imperfect, demonstrate empirically that
| Model | conv1 | conv2 | conv3 | conv4 | conv5 |
|---|---|---|---|---|---|
| Random (Noroozi & Favaro, 2016) | 48.5 | 41.0 | 34.8 | 27.1 | 12.0 |
| Wang & Gupta (2015) | 51.8 | 46.9 | 42.8 | 38.8 | 29.8 |
| Doersch et al. (2015) | 53.1 | 47.6 | 48.7 | 45.6 | 30.4 |
| Noroozi & Favaro (2016)* | 57.1 | 56.0 | 52.4 | 48.3 | 38.1 |
| BiGAN (ours) | 56.2 | 54.4 | 49.4 | 43.9 | 33.3 |
| BiGAN, 112 × 112 E (ours) | 55.3 | 53.2 | 49.3 | 44.4 | 34.8 |
Table 2: Classification accuracy (%) for the ImageNet LSVRC (Russakovsky et al., 2015) validation set with various portions of the network frozen, or reinitialized and trained from scratch, following the evaluation from Noroozi & Favaro (2016). In, e.g., the conv3 column, the first three layers – conv1 through conv3 – are transferred and frozen, and the last layers – conv4, conv5, and fully connected layers – are reinitialized and trained fully supervised for ImageNet classification. BiGAN is competitive with these contemporary visual feature learning methods, despite its generality. (*Results from Noroozi & Favaro (2016) are not directly comparable to those of the other methods as a different base convnet architecture with larger intermediate feature maps is used.)
the BiGAN encoder E and generator G learn approximate inverse mappings, as shown theoretically in Theorem 2. In Appendix C.2, we present nearest neighbors in the BiGAN learned feature space.
ImageNet classiï¬cation Following Noroozi & Favaro (2016), we evaluate by freezing the ï¬rst N layers of our pretrained network and randomly reinitializing and training the remainder fully supervised for ImageNet classiï¬cation. Results are reported in Table 2.
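A hedged PyTorch/torchvision sketch of this protocol; loading the pretrained encoder weights into the model is assumed to happen elsewhere, and the helper is our own:

```python
import torch.nn as nn
from torchvision.models import alexnet

def freeze_first_n_convs(model, n):
    convs_seen = 0
    for layer in model.features:
        if isinstance(layer, nn.Conv2d):
            convs_seen += 1
        if convs_seen <= n:
            for p in layer.parameters():
                p.requires_grad = False    # transferred and frozen
        elif hasattr(layer, "reset_parameters"):
            layer.reset_parameters()       # reinitialized, trained from scratch

model = alexnet(num_classes=1000)  # assume pretrained weights loaded beforehand
freeze_first_n_convs(model, 3)     # the "conv3" column: conv1-conv3 frozen
```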
VOC classification, detection, and segmentation We evaluate the transferability of BiGAN representations to the PASCAL VOC (Everingham et al., 2014) computer vision benchmark tasks, including classification, object detection, and semantic segmentation. The classification task involves simple binary prediction of presence or absence in a given image for each of 20 object categories. The object detection and semantic segmentation tasks go a step further by requiring the objects to be localized, with semantic segmentation requiring this at the finest scale: pixelwise prediction of object identity. For detection, the pretrained model is used as the initialization for Fast R-CNN (Girshick, 2015) (FRCN) training; and for semantic segmentation, the model is used as the initialization for Fully Convolutional Network (Long et al., 2015) (FCN) training, in each case replacing the AlexNet (Krizhevsky et al., 2012) model trained fully supervised for ImageNet classification. We report results on each of these tasks in Table 3, comparing BiGANs with contemporary approaches to unsupervised (Krähenbühl et al., 2016) and self-supervised (Doersch et al., 2015; Agrawal et al., 2015; Wang & Gupta, 2015; Pathak et al., 2016) feature learning in the visual domain, as well as the baselines discussed in Section 4.1.
# 4.4 DISCUSSION
Despite making no assumptions about the underlying structure of the data, the BiGAN unsupervised feature learning framework offers a representation competitive with existing self-supervised and even weakly supervised feature learning approaches for visual feature learning, while still being a purely generative model with the ability to sample data x and predict latent representation z. Furthermore, BiGANs outperform the discriminator (D) and latent regressor (LR) baselines discussed in Section 4.1, confirming our intuition that these approaches may not perform well in the regime of highly complex data distributions such as that of natural images. The version in which the encoder takes a higher resolution image than output by the generator (BiGAN 112 × 112 E) performs better still, and this strategy is not possible under the LR and D baselines as each of those modules take generator outputs as their input.
Although existing self-supervised approaches have shown impressive performance and thus far tended to outshine purely unsupervised approaches in the complex domain of high-resolution images, purely unsupervised approaches to feature learning or pre-training have several potential beneï¬ts.
| | Model | Classification (% mAP) fc8 | Classification (% mAP) fc6-8 | Classification (% mAP) all | FRCN Detection (% mAP) all | FCN Segmentation (% mIU) all |
|---|---|---|---|---|---|---|
| sup. | ImageNet (Krizhevsky et al., 2012) | 77.0 | 78.8 | 78.3 | 56.8 | 48.0 |
| self-sup. | Agrawal et al. (2015) | 31.2 | 31.0 | 54.2 | 43.9 | — |
| | Pathak et al. (2016) | 30.5 | 34.6 | 56.5 | 44.5 | 30.0 |
| | Wang & Gupta (2015) | 28.4 | 55.6 | 63.1 | 47.4 | — |
| | Doersch et al. (2015) | 44.7 | 55.1 | 65.3 | 51.1 | — |
| unsup. | k-means (Krähenbühl et al., 2016) | 32.0 | 39.2 | 56.6 | 45.6 | 32.6 |
| | Discriminator (D) | 30.7 | 40.5 | 56.4 | — | — |
| | Latent Regressor (LR) | 36.9 | 47.9 | 57.1 | — | — |
| | Joint LR | 37.1 | 47.9 | 56.5 | — | — |
| | Autoencoder (ℓ2) | 24.8 | 16.0 | 53.8 | 41.9 | — |
| | BiGAN (ours) | 37.5 | 48.7 | 58.9 | 46.2 | 34.9 |
| | BiGAN, 112 × 112 E (ours) | 41.7 | 52.5 | 60.3 | 46.9 | 35.2 |
Table 3: Classification and Fast R-CNN (Girshick, 2015) detection results for the PASCAL VOC 2007 (Everingham et al., 2014) test set, and FCN (Long et al., 2015) segmentation results on the PASCAL VOC 2012 validation set, under the standard mean average precision (mAP) or mean intersection over union (mIU) metrics for each task. Classification models are trained with various portions of the AlexNet (Krizhevsky et al., 2012) model frozen. In the fc8 column, only the linear classifier (a multinomial logistic regression) is learned – in the case of BiGAN, on top of randomly initialized fully connected (FC) layers fc6 and fc7. In the fc6-8 column, all three FC layers are trained fully supervised with all convolution layers frozen. Finally, in the all column, the entire network is "fine-tuned". BiGAN outperforms other unsupervised (unsup.) feature learning approaches, including the GAN-based baselines described in Section 4.1, and despite its generality, is competitive with contemporary self-supervised (self-sup.) feature learning approaches specific to the visual domain.
BiGAN and other unsupervised learning approaches are agnostic to the domain of the data. The self-supervised approaches are specific to the visual domain, in some cases requiring weak supervision from video unavailable in images alone. For example, the methods are not applicable in the permutation-invariant MNIST setting explored in Section 4.2, as the data are treated as flat vectors rather than 2D images.
Furthermore, BiGAN and other unsupervised approaches needn't suffer from domain shift between the pre-training task and the transfer task, unlike self-supervised methods in which some aspect of the data is normally removed or corrupted in order to create a non-trivial prediction task. In the context prediction task (Doersch et al., 2015), the network sees only small image patches – the global image structure is unobserved. In the context encoder or inpainting task (Pathak et al., 2016), each image is corrupted by removing large areas to be filled in by the prediction network, creating inputs with dramatically different appearance from the uncorrupted natural images seen in the transfer tasks.
Other approaches (Agrawal et al., 2015; Wang & Gupta, 2015) rely on auxiliary information unavailable in the static image domain, such as video, egomotion, or tracking. Unlike BiGAN, such approaches cannot learn feature representations from unlabeled static images.
We finally note that the results presented here constitute only a preliminary exploration of the space of model architectures possible under the BiGAN framework, and we expect results to improve significantly with advancements in generative image models and discriminative convolutional networks alike.
# ACKNOWLEDGMENTS
The authors thank Evan Shelhamer, Jonathan Long, and other Berkeley Vision labmates for helpful discussions throughout this work. This work was supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Artiï¬cial Intelligence Research laboratory. The GPUs used for this work were donated by NVIDIA.
# REFERENCES
Pulkit Agrawal, Joao Carreira, and Jitendra Malik. Learning to see by moving. In ICCV, 2015.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv:1606.00704, 2016.
Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes challenge: A retrospective. IJCV, 2014.
Ross Girshick. Fast R-CNN. In ICCV, 2015.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML, 2013.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, 2013.
Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.
Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Philipp Krähenbühl, Carl Doersch, Jeff Donahue, and Trevor Darrell. Data-dependent initializations of convolutional neural networks. In ICLR, 2016.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classiï¬cation with deep convolu- tional neural networks. In NIPS, 2012.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 1998.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectiï¬er nonlinearities improve neural network acoustic models. In ICML, 2013.
Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
Ali Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In CVPR Workshops, 2014.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. IJCV, 2015.
Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688, 2016.
Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. Grammar as a foreign language. In NIPS, 2015.
Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
# APPENDIX A ADDITIONAL PROOFS
A.1 PROOF OF PROPOSITION 1 (OPTIMAL DISCRIMINATOR)
Proposition 1 For any E and G, the optimal discriminator D*_EG := argmax_D V(D, E, G) is the Radon-Nikodym derivative f_EG := dP_EX / d(P_EX + P_GZ) : Ω → [0, 1] of measure P_EX with respect to measure P_EX + P_GZ.
Proof. For measures P and Q on space â¦, with P absolutely continuous with respect to Q, the RN derivative fP Q := dP
exists, and we have [9(x)] = fog dP = Jo
Ex~p [9(x)] = fog dP = Jo 956 1Q = Jo gfrg dQ = Exxg [frq(x)g(x)]- (4)
Let the probability measure P_EG := (P_EX + P_GZ)/2 denote the average of measures P_EX and P_GZ. Both P_EX and P_GZ are absolutely continuous with respect to P_EG. Hence the RN derivatives f_EG := dP_EX / d(P_EX + P_GZ) and f_GE := dP_GZ / d(P_EX + P_GZ) exist and sum to one:

f_EG + f_GE = dP_EX / d(P_EX + P_GZ) + dP_GZ / d(P_EX + P_GZ) = d(P_EX + P_GZ) / d(P_EX + P_GZ) = 1.   (5)
We use (4) and (5) to rewrite the objective V (3) as a single expectation under measure P_EG:

V(D, E, G) = E_{(x,z)∼P_EX}[log D(x, z)] + E_{(x,z)∼P_GZ}[log(1 − D(x, z))]
= E_{(x,z)∼P_EG}[2 f_EG(x, z) log D(x, z)] + E_{(x,z)∼P_EG}[2 f_GE(x, z) log(1 − D(x, z))]
= 2 E_{(x,z)∼P_EG}[f_EG(x, z) log D(x, z) + f_GE(x, z) log(1 − D(x, z))]
= 2 E_{(x,z)∼P_EG}[f_EG(x, z) log D(x, z) + (1 − f_EG(x, z)) log(1 − D(x, z))].
Note that argmax_y {a log y + (1 − a) log(1 − y)} = a for any a ∈ [0, 1]. Thus D*_EG = f_EG. ∎
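As a quick numerical sanity check of the pointwise argmax identity above, the following minimal NumPy sketch (illustrative only; the grid resolution and test values are arbitrary choices, not from the proof) verifies that a log y + (1 − a) log(1 − y) peaks at y = a:

```python
import numpy as np

# Sanity check: argmax_y { a*log(y) + (1-a)*log(1-y) } = a for a in [0, 1].
# This is the pointwise identity behind D*_EG = f_EG.
ys = np.linspace(1e-6, 1 - 1e-6, 100001)
for a in [0.0, 0.25, 0.5, 0.9, 1.0]:
    objective = a * np.log(ys) + (1 - a) * np.log(1 - ys)
    y_star = ys[np.argmax(objective)]
    assert abs(y_star - a) < 1e-3, (a, y_star)
```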
A.2 PROOF OF PROPOSITION 2 (ENCODER AND GENERATOR OBJECTIVE)
Proposition 2 The encoder and generator's objective for an optimal discriminator, C(E, G) := max_D V(D, E, G) = V(D*_EG, E, G), can be rewritten in terms of the Jensen-Shannon divergence between measures P_EX and P_GZ as C(E, G) = 2 D_JS(P_EX || P_GZ) − log 4.
Proof. Using Proposition 1 along with (5) (so that 1 − D*_EG = 1 − f_EG = f_GE), we rewrite the objective:

C(E, G) = max_D V(D, E, G) = V(D*_EG, E, G)
= E_{(x,z)∼P_EX}[log D*_EG(x, z)] + E_{(x,z)∼P_GZ}[log(1 − D*_EG(x, z))]
= E_{(x,z)∼P_EX}[log f_EG(x, z)] + E_{(x,z)∼P_GZ}[log f_GE(x, z)]
= E_{(x,z)∼P_EX}[log(2 f_EG(x, z))] + E_{(x,z)∼P_GZ}[log(2 f_GE(x, z))] − log 4
= D_KL(P_EX || P_EG) + D_KL(P_GZ || P_EG) − log 4
= D_KL(P_EX || (P_EX + P_GZ)/2) + D_KL(P_GZ || (P_EX + P_GZ)/2) − log 4
= 2 D_JS(P_EX || P_GZ) − log 4. ∎
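This identity can also be checked numerically on a finite sample space; a minimal NumPy sketch (with arbitrary random distributions standing in for P_EX and P_GZ, not values from the paper):

```python
import numpy as np

# Check C(E,G) = 2*JSD(P_EX || P_GZ) - log 4 on two discrete joint
# distributions p (for P_EX) and q (for P_GZ) over a finite Omega.
rng = np.random.default_rng(0)
p = rng.random(10); p /= p.sum()
q = rng.random(10); q /= q.sum()

m = (p + q) / 2                      # the average measure P_EG
f = p / (p + q)                      # RN derivative f_EG; optimal D* = f
V_opt = np.sum(p * np.log(f)) + np.sum(q * np.log(1 - f))

kl = lambda a, b: np.sum(a * np.log(a / b))
jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
assert np.isclose(V_opt, 2 * jsd - np.log(4))
```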
A.3 MEASURE DEFINITIONS FOR DETERMINISTIC E AND G
While Theorem 1 and Propositions 1 and 2 hold for any encoder p_E(z|x) and generator p_G(x|z), stochastic or deterministic, Theorems 2 and 3 assume the encoder E and generator G are deterministic functions; i.e., with conditionals p_E(z|x) = δ(z − E(x)) and p_G(x|z) = δ(x − G(z)) defined as δ functions.
For use in the proofs of those theorems, we simplify the definitions of measures P_EX and P_GZ given in Section 3 for the case of deterministic functions E and G below:
P_EX(R) = ∫_{Ω_X} p_X(x) ∫_{Ω_Z} p_E(z|x) 1_[(x,z)∈R] dz dx
        = ∫_{Ω_X} p_X(x) ( ∫_{Ω_Z} δ(z − E(x)) 1_[(x,z)∈R] dz ) dx
        = ∫_{Ω_X} p_X(x) 1_[(x,E(x))∈R] dx
P_GZ(R) = ∫_{Ω_Z} p_Z(z) ∫_{Ω_X} p_G(x|z) 1_[(x,z)∈R] dx dz
        = ∫_{Ω_Z} p_Z(z) ( ∫_{Ω_X} δ(x − G(z)) 1_[(x,z)∈R] dx ) dz
        = ∫_{Ω_Z} p_Z(z) 1_[(G(z),z)∈R] dz
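The simplified form of P_EX says that the probability of a region reduces to the P_X-probability of its x-slice along the graph of E. A small Monte Carlo sketch of this (toy choices, not from the paper: x ∼ N(0, 1), E(x) = 2x, and R = {(x, z) : z > 1}; SciPy is used only for the analytic reference value):

```python
import numpy as np
from scipy.stats import norm

# For deterministic E, P_EX(R) = integral of p_X(x) * 1[(x, E(x)) in R] dx.
# With E(x) = 2x and R = {(x, z) : z > 1}, analytically
# P_EX(R) = P(2x > 1) = 1 - Phi(0.5) for x ~ N(0, 1).
rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
p_R = np.mean(2 * x > 1.0)            # Monte Carlo estimate of P_EX(R)
print(p_R, 1 - norm.cdf(0.5))         # both ~0.3085
```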
A.4 PROOF OF THEOREM 2 (OPTIMAL GENERATOR AND ENCODER ARE INVERSES)
Theorem 2 If E and G are an optimal encoder and generator, then E = G^{-1} almost everywhere; that is, G(E(x)) = x for P_X-almost every x ∈ Ω_X, and E(G(z)) = z for P_Z-almost every z ∈ Ω_Z.
Proof. Let R^0_X := {x ∈ Ω_X : x ≠ G(E(x))} be the region of Ω_X in which the inversion property x = G(E(x)) does not hold. We will show that, for optimal E and G, R^0_X has measure zero under P_X (i.e., P_X(R^0_X) = 0) and therefore x = G(E(x)) holds P_X-almost everywhere.

Let R^0 := {(x, z) ∈ Ω : z = E(x) ∧ x ∈ R^0_X} be the region of Ω such that (x, E(x)) ∈ R^0 if and only if x ∈ R^0_X. We'll use the definitions of P_EX and P_GZ for deterministic E and G (Appendix A.3), and the fact that P_EX = P_GZ for optimal E and G (Theorem 1):

P_X(R^0_X) = ∫_{Ω_X} p_X(x) 1_[x∈R^0_X] dx = ∫_{Ω_X} p_X(x) 1_[(x,E(x))∈R^0] dx
= P_EX(R^0) = P_GZ(R^0)
= ∫_{Ω_Z} p_Z(z) 1_[(G(z),z)∈R^0] dz
= ∫_{Ω_Z} p_Z(z) 1_[z=E(G(z)) ∧ G(z)∈R^0_X] dz
= ∫_{Ω_Z} p_Z(z) 1_[z=E(G(z)) ∧ G(z)≠G(E(G(z)))] dz
= 0,

where the final integrand is 0 for any z, as z = E(G(z)) implies G(z) = G(E(G(z))).

Hence region R^0_X has measure zero (P_X(R^0_X) = 0), and the inversion property x = G(E(x)) holds P_X-almost everywhere.

An analogous argument shows that R^0_Z := {z ∈ Ω_Z : z ≠ E(G(z))} has measure zero under P_Z (i.e., P_Z(R^0_Z) = 0) and therefore z = E(G(z)) holds P_Z-almost everywhere. ∎
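A toy illustration of the theorem's conclusion, assuming an explicitly invertible linear pair and a latent marginal chosen as the pushforward of P_X under E (so that P_EX = P_GZ holds by construction; the constants are arbitrary):

```python
import numpy as np

# E(x) = (x - b) / a with inverse G(z) = a * z + b. Choosing
# P_Z = E_# P_X makes P_EX = P_GZ, and both inversions hold exactly.
a, b = 2.0, -1.0
E = lambda x: (x - b) / a
G = lambda z: a * z + b

rng = np.random.default_rng(0)
x = rng.normal(size=5)                            # x ~ P_X = N(0, 1)
z = rng.normal(loc=-b / a, scale=1 / a, size=5)   # z ~ P_Z = E_# P_X
assert np.allclose(G(E(x)), x) and np.allclose(E(G(z)), z)
```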
A.5 PROOF OF THEOREM 3 (RELATIONSHIP TO AUTOENCODERS)
As shown in Proposition 2 (Section 3), the BiGAN objective is equivalent to the Jensen-Shannon divergence between P_EX and P_GZ. We now go a step further and show that this Jensen-Shannon divergence is closely related to a standard autoencoder loss. Omitting the 1/2 scale factor, a KL divergence term of the Jensen-Shannon divergence is given as
D_KL(P_EX || (P_EX + P_GZ)/2) = ∫_Ω log( dP_EX / d((P_EX + P_GZ)/2) ) dP_EX = log 2 + ∫_Ω log f dP_EX,   (6)

where for most of this proof we abbreviate as f the Radon-Nikodym derivative f_EG := dP_EX / d(P_EX + P_GZ) ∈ [0, 1] defined in Proposition 1.
We'll make use of the definitions of P_EX and P_GZ for deterministic E and G found in Appendix A.3. The integral term of the KL divergence expression given in (6) over a particular region R ⊆ Ω will be denoted by

F(R) := ∫_R log( dP_EX / d(P_EX + P_GZ) ) dP_EX = ∫_R log f dP_EX.
Next we will show that f > 0 holds P_EX-almost everywhere, and hence F is always well defined and finite. We then show that F is equivalent to an autoencoder-like reconstruction loss function.
Proposition 3 f > 0 P_EX-almost everywhere.

Proof. Let R^{f=0} := {(x, z) ∈ Ω : f(x, z) = 0} be the region of Ω in which f = 0. Using the definition of the Radon-Nikodym derivative f, the measure

P_EX(R^{f=0}) = ∫_{R^{f=0}} f d(P_EX + P_GZ) = ∫_{R^{f=0}} 0 d(P_EX + P_GZ) = 0

is zero. Hence f > 0 P_EX-almost everywhere. ∎

Proposition 3 ensures that log f is defined P_EX-almost everywhere, and F(R) is well defined. Next we will show that F(R) mimics an autoencoder with ℓ_0 loss, meaning F is zero for any region in which G(E(x)) ≠ x, and non-zero otherwise.
Proposition 4 The KL divergence F outside the support of PGZ is zero: F (⦠\ supp(PGZ)) = 0.
Proof. We'll first show that in region R_S := Ω \ supp(P_GZ), we have f = 1 P_EX-almost everywhere. Let R^{f<1} := {(x, z) ∈ R_S : f(x, z) < 1} be the region of R_S in which f < 1, and assume it has non-zero measure P_EX(R^{f<1}) > 0. Then, using the definition of the Radon-Nikodym derivative,

P_EX(R^{f<1}) = ∫_{R^{f<1}} f d(P_EX + P_GZ) = ∫_{R^{f<1}} f dP_EX + ∫_{R^{f<1}} f dP_GZ ≤ ε P_EX(R^{f<1}) < P_EX(R^{f<1}),

where ε is a constant smaller than 1 and the P_GZ integral vanishes because R^{f<1} lies outside supp(P_GZ). But P_EX(R^{f<1}) < P_EX(R^{f<1}) is a contradiction; hence P_EX(R^{f<1}) = 0 and f = 1 P_EX-almost everywhere in R_S, implying log f = 0 P_EX-almost everywhere in R_S. Hence F(R_S) = 0. By definition, F(Ω \ supp(P_EX)) = 0 as well. The only region where F might be non-zero is R^1 := supp(P_EX) ∩ supp(P_GZ). ∎
Proposition 5 f < 1 P_EX-almost everywhere in R^1.

Proof. Let R^{f=1} := {(x, z) ∈ R^1 : f(x, z) = 1} be the region in which f = 1, and assume it is not empty. By definition of the support¹, P_EX(R^{f=1}) > 0 and P_GZ(R^{f=1}) > 0. The Radon-Nikodym derivative on R^{f=1} is then given by

P_EX(R^{f=1}) = ∫_{R^{f=1}} f d(P_EX + P_GZ) = ∫_{R^{f=1}} 1 d(P_EX + P_GZ) = P_EX(R^{f=1}) + P_GZ(R^{f=1}),

which implies P_GZ(R^{f=1}) = 0 and contradicts the definition of support. Hence R^{f=1} = ∅ and f < 1 P_EX-almost everywhere on R^1, implying log f < 0 P_EX-almost everywhere. ∎
Theorem 3 The encoder and generator objective given an optimal discriminator, C(E, G) := max_D V(D, E, G), can be rewritten as an ℓ_0 autoencoder loss function

C(E, G) = E_{x∼P_X}[ 1_[E(x)∈Ω̂_Z ∧ G(E(x))=x] log f_EG(x, E(x)) ] + E_{z∼P_Z}[ 1_[G(z)∈Ω̂_X ∧ E(G(z))=z] log(1 − f_EG(G(z), z)) ]

with log f_EG ∈ (−∞, 0) and log(1 − f_EG) ∈ (−∞, 0) P_EX-almost and P_GZ-almost everywhere.
Proof. Proposition 4 (F(Ω \ supp(P_GZ)) = 0) and F(Ω \ supp(P_EX)) = 0 imply that R^1 := supp(P_EX) ∩ supp(P_GZ) is the only region of Ω where F may be non-zero; hence F(Ω) = F(R^1).
"We use the definition UNC #0 => p(UNC) > Ohere.
Note that

supp(P_EX) = {(x, E(x)) : x ∈ Ω̂_X}
supp(P_GZ) = {(G(z), z) : z ∈ Ω̂_Z}
⟹ R^1 := supp(P_EX) ∩ supp(P_GZ) = {(x, z) : E(x) = z ∧ x ∈ Ω̂_X ∧ G(z) = x ∧ z ∈ Ω̂_Z}.

So a point (x, E(x)) is in R^1 if x ∈ Ω̂_X, E(x) ∈ Ω̂_Z, and G(E(x)) = x. (We can omit the x ∈ Ω̂_X condition from inside an expectation over P_X, as P_X-almost all x ∉ Ω̂_X have 0 probability.) Therefore,
D_KL(P_EX || (P_EX + P_GZ)/2) − log 2 = F(Ω) = F(R^1)
= ∫_{R^1} log f(x, z) dP_EX
= ∫_Ω 1_[(x,z)∈R^1] log f(x, z) dP_EX
= E_{(x,z)∼P_EX}[ 1_[(x,z)∈R^1] log f(x, z) ]
= E_{x∼P_X}[ 1_[(x,E(x))∈R^1] log f(x, E(x)) ]
= E_{x∼P_X}[ 1_[E(x)∈Ω̂_Z ∧ G(E(x))=x] log f(x, E(x)) ].
Finally, with Propositions 3 and 5, we have f ∈ (0, 1) P_EX-almost everywhere in R^1, and therefore log f ∈ (−∞, 0), taking a finite and strictly negative value P_EX-almost everywhere.
An analogous argument (along with the fact that f_EG + f_GE = 1) lets us rewrite the other KL divergence term:

D_KL(P_GZ || (P_EX + P_GZ)/2) − log 2 = E_{z∼P_Z}[ 1_[G(z)∈Ω̂_X ∧ E(G(z))=z] log f_GE(G(z), z) ]
= E_{z∼P_Z}[ 1_[G(z)∈Ω̂_X ∧ E(G(z))=z] log(1 − f_EG(G(z), z)) ].
The Jensen-Shannon divergence is the mean of these two KL divergences, giving C(E, G):

C(E, G) = 2 D_JS(P_EX || P_GZ) − log 4
= D_KL(P_EX || (P_EX + P_GZ)/2) + D_KL(P_GZ || (P_EX + P_GZ)/2) − log 4
= E_{x∼P_X}[ 1_[E(x)∈Ω̂_Z ∧ G(E(x))=x] log f_EG(x, E(x)) ] + E_{z∼P_Z}[ 1_[G(z)∈Ω̂_X ∧ E(G(z))=z] log(1 − f_EG(G(z), z)) ]. ∎
# APPENDIX B LEARNING DETAILS
In this section we provide additional details on the BiGAN learning protocol summarized in Section 3.4. Goodfellow et al. (2014) found for GAN training that an objective in which the real and generated labels Y are swapped provides stronger gradient signal to G. We similarly observed in BiGAN training that an "inverse" objective Λ (with the same fixed point characteristics as V) provides stronger gradient signal to G and E, where
Λ(D, G, E) = E_{x∼P_X}[ E_{z∼p_E(·|x)}[ log(1 − D(x, z)) ] ] + E_{z∼P_Z}[ E_{x∼p_G(·|z)}[ log D(x, z) ] ],

where for deterministic E and G the inner expectations reduce to log(1 − D(x, E(x))) and log D(G(z), z).
In practice, θG and θE are updated by moving in the positive gradient direction of this inverse objective, ∇_{θE,θG} Λ, rather than the negative gradient direction of the original objective. We also observed that learning behaved similarly when all parameters θD, θG, θE were updated simultaneously at each iteration rather than alternating between θD updates and θG, θE updates, so we took the simultaneous updating (non-alternating) approach for computational efficiency. (For standard GAN training, simultaneous updates of θD, θG performed similarly well, so our standard GAN experiments also follow this protocol.)
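For concreteness, a minimal sketch of one such training iteration, written in PyTorch rather than the Theano used in our experiments; E, G, D, the optimizers, and the latent dimension are assumed placeholders, and D is assumed to output a probability in (0, 1):

```python
import torch

# Minimal sketch of one BiGAN iteration with the "inverse" objective
# Lambda. E, G, D, opt_d, opt_eg, and z_dim are assumed placeholders.
z_dim = 50  # assumed latent dimension

def bigan_step(E, G, D, x, opt_d, opt_eg):
    z = torch.randn(x.size(0), z_dim)             # z ~ P_Z
    z_hat, x_hat = E(x), G(z)

    # theta_D: descend -V, i.e. maximize
    # E[log D(x, E(x))] + E[log(1 - D(G(z), z))].
    loss_d = -(torch.log(D(x, z_hat.detach())) +
               torch.log(1 - D(x_hat.detach(), z))).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # theta_E, theta_G: ascend Lambda, i.e. descend
    # -E[log(1 - D(x, E(x)))] - E[log D(G(z), z)].
    loss_eg = -(torch.log(1 - D(x, z_hat)) +
                torch.log(D(x_hat, z))).mean()
    opt_eg.zero_grad(); loss_eg.backward(); opt_eg.step()
```

In this sketch the two optimizer steps happen within a single iteration, approximating the simultaneous (non-alternating) update described above.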
# APPENDIX C MODEL AND TRAINING DETAILS
In the following sections we present additional details on the models and training protocols used in the permutation-invariant MNIST and ImageNet evaluations presented in Section 4.
Optimization For unsupervised training of BiGANs and baseline methods, we use the Adam optimizer to compute parameter updates, following the hyperparameters (initial step size α = 2 × 10⁻⁴, momentum β₁ = 0.5 and β₂ = 0.999) used by Radford et al. (2016). The step size α is decayed exponentially to α = 2 × 10⁻⁶ starting halfway through training. The mini-batch size is 128. ℓ₂ weight decay of 2.5 × 10⁻⁵ is applied to all multiplicative weights in linear layers (but not to the learned bias β or scale γ parameters applied after batch normalization). Weights are initialized from a zero-mean normal distribution with a standard deviation of 0.02, with one notable exception: BiGAN discriminator weights that directly multiply z inputs to be added to spatial convolution outputs have initializations scaled by the convolution kernel size (e.g., for a 5 × 5 kernel, weights are initialized with a standard deviation of 0.5, 25 times the standard initialization).
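A sketch of these settings (PyTorch syntax for concreteness; `params` and `n_iters` are assumed, and Adam's built-in weight decay is a stand-in for ℓ₂ regularization restricted to multiplicative weights, which in practice requires a separate parameter group for batch norm's scale and bias):

```python
import torch

# Adam with the stated hyperparameters. Note: weight_decay here applies
# to every parameter in `params`; excluding bias/scale parameters would
# need a second parameter group.
opt = torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999),
                       weight_decay=2.5e-5)

def lr_at(it, n_iters, lr0=2e-4, lr1=2e-6):
    """Exponential step-size decay over the second half of training."""
    half = n_iters // 2
    if it < half:
        return lr0
    return lr0 * (lr1 / lr0) ** ((it - half) / half)

def init_weights(m):
    """Zero-mean normal init (std 0.02) for multiplicative weights."""
    if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d)):
        torch.nn.init.normal_(m.weight, mean=0.0, std=0.02)
```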
Software & hardware We implement BiGANs and baseline feature learning methods using the Theano (Theano Development Team, 2016) framework, based on the convolutional GAN implemen- tation provided by Radford et al. (2016). ImageNet transfer learning experiments (Section 4.3) use the Caffe (Jia et al., 2014) framework, per the Fast R-CNN (Girshick, 2015) and FCN (Long et al., 2015) reference implementations. Most computation is performed on an NVIDIA Titan X or Tesla K40 GPU.
C.1 PERMUTATION-INVARIANT MNIST
In all permutation-invariant MNIST experiments (Section 4.2), D, G, and E each consist of two hidden layers with 1024 units. The first hidden layer is followed by a non-linearity; the second is followed by (parameter-free) batch normalization (Ioffe & Szegedy, 2015) and a non-linearity. The second hidden layer in each case is the input to a linear prediction layer of the appropriate size. In D and E, a leaky ReLU (Maas et al., 2013) non-linearity with a "leak" of 0.2 is used; in G, a standard ReLU non-linearity is used. All models are trained for 400 epochs.
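A sketch of this trunk (PyTorch syntax; `in_dim` and `out_dim` are assumptions depending on whether the network consumes x, z, or both):

```python
import torch.nn as nn

# Two 1024-unit hidden layers; the first followed by a non-linearity, the
# second by parameter-free batch norm and a non-linearity, then a linear
# prediction layer. D and E use leaky ReLU (0.2); G uses plain ReLU.
def mlp(in_dim, out_dim, leak=0.2):
    nonlin = lambda: nn.LeakyReLU(leak) if leak else nn.ReLU()
    return nn.Sequential(
        nn.Linear(in_dim, 1024), nonlin(),
        nn.Linear(1024, 1024),
        nn.BatchNorm1d(1024, affine=False),   # parameter-free batch norm
        nonlin(),
        nn.Linear(1024, out_dim),             # linear prediction layer
    )
```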
C.2 IMAGENET
In all ImageNet experiments (Section 4.3), the encoder E architecture follows AlexNet (Krizhevsky et al., 2012) through the fifth and last convolution layer (conv5), with local response normalization (LRN) layers removed and batch normalization (Ioffe & Szegedy, 2015) (including the learned scaling and bias) with leaky ReLU non-linearity applied to the output of each convolution at unsupervised training time. (For supervised evaluation, batch normalization is not used, and the pre-trained scale and bias is merged into the preceding convolution's weights and bias.)
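A sketch of this encoder configuration as used at unsupervised training time (PyTorch syntax; the channel widths are standard AlexNet values and, along with the exact padding for 64 × 64 inputs, are assumptions here):

```python
import torch.nn as nn

# AlexNet conv1-conv5 with LRN removed; each convolution is followed by
# batch norm (learned scale and bias) and leaky ReLU.
def conv_bn(c_in, c_out, k, s=1, p=0, leak=0.2):
    return [nn.Conv2d(c_in, c_out, k, stride=s, padding=p),
            nn.BatchNorm2d(c_out), nn.LeakyReLU(leak)]

encoder = nn.Sequential(
    *conv_bn(3, 64, 11, s=4, p=2), nn.MaxPool2d(3, 2),   # conv1 + pool1
    *conv_bn(64, 192, 5, p=2), nn.MaxPool2d(3, 2),       # conv2 + pool2
    *conv_bn(192, 384, 3, p=1),                          # conv3
    *conv_bn(384, 256, 3, p=1),                          # conv4
    *conv_bn(256, 256, 3, p=1), nn.MaxPool2d(3, 2),      # conv5 + pool5
)
```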
In most experiments, both the discriminator D and generator G architecture are those used by Radford et al. (2016), consisting of a series of four 5 × 5 convolutions (or "deconvolutions", i.e., fractionally-strided convolutions, for the generator G) applied with 2 pixel stride, each followed by batch normalization and rectified non-linearity.
The sole exception is our discriminator baseline feature learning experiment, in which we let the discriminator D be the AlexNet variant described above. Generally, using AlexNet (or similar convnet architecture) as the discriminator D is detrimental to the visual fidelity of the resulting generated images, likely due to the relatively large convolutional filter kernel size applied to the input image, as well as the max-pooling layers, which explicitly discard information in the input. However, for fair comparison of the discriminator's feature learning abilities with those of BiGANs, we use the same architecture as used in the BiGAN encoder.
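For reference, a sketch of the DCGAN-style generator described above (PyTorch syntax; the channel widths, the initial 4 × 4 projection, and the tanh output are assumptions following Radford et al. (2016) rather than specifics stated here):

```python
import torch.nn as nn

# z (as a z_dim x 1 x 1 tensor) is projected to 4 x 4, then upsampled by
# four 5 x 5 stride-2 fractionally-strided convolutions with batch norm
# and ReLU, producing a 64 x 64 output in [-1, 1].
z_dim = 200
def up(c_in, c_out):
    return [nn.ConvTranspose2d(c_in, c_out, 5, 2, 2, output_padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU()]

generator = nn.Sequential(
    nn.ConvTranspose2d(z_dim, 512, 4, 1, 0),             # 1x1 -> 4x4
    nn.BatchNorm2d(512), nn.ReLU(),
    *up(512, 256), *up(256, 128), *up(128, 64),          # 4 -> 8 -> 16 -> 32
    nn.ConvTranspose2d(64, 3, 5, 2, 2, output_padding=1),
    nn.Tanh(),                                           # 32 -> 64
)
```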
Preprocessing To produce a data sample x, we first sample an image from the database, and resize it proportionally such that its shorter edge has a length of 72 pixels. Then, a 64 × 64 crop is randomly selected from the resized image. The crop is flipped horizontally with probability 1/2. Finally, the crop is scaled to [−1, 1], giving the sample x.
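A sketch of this sampling pipeline (Pillow/NumPy; the bilinear resampling filter is an assumption, as the resize filter is not specified above):

```python
import numpy as np
from PIL import Image

# Proportional resize to a 72-pixel shorter edge, random 64x64 crop,
# random horizontal flip, and scaling to [-1, 1].
def sample_x(path, rng=np.random.default_rng()):
    im = Image.open(path).convert('RGB')
    w, h = im.size
    scale = 72 / min(w, h)
    im = im.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    w, h = im.size
    left, top = rng.integers(0, w - 63), rng.integers(0, h - 63)
    im = im.crop((left, top, left + 64, top + 64))
    if rng.random() < 0.5:
        im = im.transpose(Image.FLIP_LEFT_RIGHT)
    return np.asarray(im, dtype=np.float32) / 127.5 - 1.0
```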
Figure 5: For the query images used in Krähenbühl et al. (2016) (left), nearest neighbors (by minimum cosine distance) from the ImageNet LSVRC (Russakovsky et al., 2015) training set in the fc6 feature space of the ImageNet-trained BiGAN encoder E. (The fc6 weights are set randomly; this space is a random projection of the learned conv5 feature space.)
Timing A single epoch (one training pass over the 1.2 million images) of BiGAN training takes roughly 40 minutes on a Titan X GPU. Models are trained for 100 epochs, for a total training time of under 3 days.
Nearest neighbors In Figure 5 we present nearest neighbors in the feature space of the BiGAN encoder E learned in unsupervised ImageNet training.
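A sketch of the retrieval procedure (NumPy; `feats` is an assumed N × d matrix of precomputed fc6 features for the database and `q` an assumed query feature vector):

```python
import numpy as np

# Nearest neighbors by minimum cosine distance in fc6 feature space.
def nearest_neighbors(q, feats, k=4):
    feats_n = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    q_n = q / np.linalg.norm(q)
    cos_dist = 1.0 - feats_n @ q_n
    return np.argsort(cos_dist)[:k]       # indices of the top-k matches
```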